
    Enhancing Language Model Generalization: Bridging the Gap Between In-Context Learning and Fine-Tuning

    May 20, 2025

Language models (LMs) show strong in-context learning when pretrained on vast internet text corpora, generalizing effectively from just a few task examples. Fine-tuning these models for downstream tasks, however, presents significant challenges: it typically requires hundreds to thousands of examples, and the resulting generalization patterns show notable limitations. For example, models fine-tuned on statements like “B’s mother is A” struggle to answer the reversed question “Who is A’s son?”, even though the same models handle such reverse relations easily when the statements are provided in context. This raises questions about how in-context learning and fine-tuning differ in their generalization patterns, and how those differences should inform adaptation strategies for downstream tasks.
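To make the failure mode concrete, the sketch below (a minimal illustration with hypothetical names and helper code, not the paper’s actual dataset) shows how a reversal test can be constructed: fine-tuning sees only the forward statements, while evaluation asks the reversed question.

```python
# Minimal illustrative sketch of a reversal-curse test set (hypothetical
# names and helpers, not the paper's actual data). Fine-tuning sees only
# the forward statements; evaluation asks the reversed question.

def build_reversal_dataset(pairs):
    """pairs: list of (son, mother) name tuples."""
    train_docs, reversal_tests = [], []
    for son, mother in pairs:
        # Forward direction: included in the fine-tuning data.
        train_docs.append(f"{son}'s mother is {mother}.")
        # Reverse direction: held out. Fine-tuned models tend to fail
        # here, while the same facts placed in context are handled well.
        reversal_tests.append({"question": f"Who is {mother}'s son?",
                               "answer": son})
    return train_docs, reversal_tests

train_docs, reversal_tests = build_reversal_dataset(
    [("Ben", "Alice"), ("Dan", "Carol")]
)
```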

Research into improving LMs’ adaptability has followed several key approaches. In-context learning studies have examined learning and generalization patterns through empirical, mechanistic, and theoretical analyses. Out-of-context learning research explores how models use information not explicitly included in prompts. Data augmentation techniques use LLMs to boost performance from limited datasets, with specific solutions targeting issues like the reversal curse through hardcoded augmentations, deductive closure training, and generated reasoning pathways. Synthetic-data approaches, meanwhile, have evolved from early hand-designed datasets aimed at improving generalization in domains like linguistics and mathematics to more recent methods that generate training data directly from language models.

Researchers from Google DeepMind and Stanford University constructed several datasets whose content is isolated from pretraining data, creating clean tests of generalization. They evaluate performance across various generalization types by exposing pretrained models to controlled subsets of information, both in context and through fine-tuning. Their findings reveal that in-context learning generalizes more flexibly than fine-tuning in data-matched settings, though there are exceptions where fine-tuning can generalize to reversals embedded within larger knowledge structures. Building on these insights, the researchers developed a method that improves fine-tuning generalization by adding in-context inferences to the fine-tuning data.
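A minimal sketch of that augmentation idea follows, assuming a generic text-generation callable (the prompt wording and helper names are illustrative assumptions, not the authors’ exact procedure): the model is shown each training document in context, asked to spell out the inferences it licenses, and those generated inferences are appended to the fine-tuning set.

```python
# Sketch of augmenting a fine-tuning set with in-context inferences.
# `generate` stands in for any LLM completion call; the prompt wording
# is an illustrative assumption, not the paper's exact prompt.

def augment_with_in_context_inferences(train_docs, generate):
    augmented = list(train_docs)
    for doc in train_docs:
        prompt = (
            f"Statement: {doc}\n"
            "List restatements and logical inferences that follow from "
            "this statement (for example, the reversed relation):"
        )
        # The model's in-context inferences become extra training text,
        # giving fine-tuning coverage that in-context learning already has.
        inferences = generate(prompt)
        augmented.extend(
            line.strip() for line in inferences.splitlines() if line.strip()
        )
    return augmented
```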

The researchers employ multiple datasets carefully designed either to isolate specific generalization challenges or to embed them within broader learning contexts. Evaluation relies on multiple-choice likelihood scoring, without providing the answer choices in context. The experiments fine-tune Gemini 1.5 Flash with batch sizes of 8 or 16. For in-context evaluation, the training documents are concatenated as context for the instruction-tuned model, randomly subsampled by 8x for larger datasets to minimize interference. The key innovation is a dataset augmentation approach that uses in-context generalization to broaden fine-tuning dataset coverage, through both local and global strategies, each employing distinct contexts and prompts. A sketch of the scoring protocol follows.
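As an illustration, here is a hedged sketch of multiple-choice likelihood scoring (the `token_logprobs` helper is an assumption standing in for whatever per-token log-probability access the evaluated API provides): each candidate answer is scored by the likelihood the model assigns to it as a continuation of the question, and the choices never appear in the prompt.

```python
# Sketch of multiple-choice likelihood scoring. The answer choices are
# never shown in the prompt; each candidate is scored by the model's
# log-likelihood of generating it after the question.
# `token_logprobs(prefix, continuation)` is an assumed helper returning
# per-token log-probabilities of `continuation` given `prefix`.

def score_choices(question, choices, token_logprobs):
    scores = {}
    for choice in choices:
        logps = token_logprobs(prefix=question, continuation=choice)
        # Averaging over tokens reduces bias toward shorter answers.
        scores[choice] = sum(logps) / len(logps)
    return max(scores, key=scores.get)
```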

    On the Reversal Curse dataset, in-context learning achieves near-ceiling performance on reversals, while conventional fine-tuning shows near-zero accuracy as models favor incorrect celebrity names seen during training. Fine-tuning with data augmented by in-context inferences matches the high performance of pure in-context learning. Testing on simple nonsense reversals reveals similar patterns, though with less pronounced benefits. For simple syllogisms, while the pretrained model performs at chance level (indicating no data contamination), fine-tuning does produce above-chance generalization for certain syllogism types where logical inferences align with simple linguistic patterns. However, in-context learning outperforms fine-tuning, with augmented fine-tuning showing the best overall results.

In conclusion, this paper explores how generalization differs between in-context learning and fine-tuning when LMs face novel information structures. The results show in-context learning’s superior generalization for certain inference types, which prompted the researchers to develop methods that improve fine-tuning performance by incorporating in-context inferences into the training data. Despite the promising outcomes, the study has several limitations: it depends on nonsense words and implausible operations, and it focuses on specific LMs, limiting the generality of the results. Future research should investigate learning and generalization differences across a wider range of models, especially newer reasoning models, to extend these findings.


Check out the Paper. All credit for this research goes to the researchers of this project.
