
    Foundation Models No Longer Need Prompts or Labels: EPFL Researchers Introduce a Joint Inference Framework for Fully Unsupervised Adaptation Using Fine-Tuning and In-Context Learning

    April 14, 2025

    Foundation models, often massive neural networks trained on extensive text and image data, have significantly shifted how artificial intelligence systems handle language and vision tasks. These models are not designed for a single task but generalize across a wide variety by leveraging their pretraining knowledge. Once trained, they can generate coherent responses, classify images, or solve problems without needing new task-specific training. Their scalability and reuse across domains make them a cornerstone of AI development.

    Despite their broad capabilities, a persistent issue lies in how these models are adapted for new, unseen tasks. In most scenarios, achieving strong performance requires providing them with handcrafted prompts or labeled examples that guide the model on how to behave. This process, however, introduces overhead, as crafting prompts involves trial and error, and collecting labeled examples can be expensive and time-consuming. Moreover, in real-world applications, such support data may not always be readily available, limiting the usability of foundation models in zero-shot settings.

    Several strategies have been used to bridge this gap between generality and task-specific performance. In-context learning enables models to mimic a task by including example input-output pairs during inference, while supervised fine-tuning adjusts model weights using labeled data. Another method, prompt engineering, involves crafting prompts that steer the model toward desired outputs. Though these tools have been successful in boosting performance, each relies on external support—either human input or labeled data—making them less viable in completely unsupervised settings.
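    In-context learning, for example, amounts to nothing more than prepending labeled input-output demonstrations to the query before inference. A minimal sketch of that prompt construction (the demonstrations and formatting here are illustrative, not taken from the paper):

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) demonstration pairs."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # The final block leaves the output blank for the model to complete.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

demos = [("2 + 2", "4"), ("3 + 5", "8")]
prompt = build_icl_prompt(demos, "7 + 6")
```

    The cost this paragraph describes is visible here: someone has to supply the labeled pairs in `demos`, which is exactly the dependency the unsupervised framework below removes.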

    Swiss Federal Institute of Technology Lausanne (EPFL) researchers introduced a joint inference framework that supports unsupervised adaptation. This framework enables foundation models to perform coordinated predictions over multiple inputs without requiring ground truth data or manual prompts. The research team presented two specific techniques under this framework: unsupervised fine-tuning and unsupervised in-context learning. These methods allow models, including closed-weight ones like GPT-4, to improve accuracy without external guidance.
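    The core idea of joint inference can be illustrated numerically: instead of picking the most likely label for each input independently, the model scores an entire batch labeling at once, with a regularizer discouraging degenerate solutions. The toy below uses hypothetical per-input log-probabilities and an entropy term as a stand-in for the paper's regularization; it is a sketch of the principle, not the authors' exact objective.

```python
import itertools
import math
from collections import Counter

# Hypothetical per-input log-probabilities p(y | x) for 3 inputs, 2 labels.
log_probs = [
    {"A": math.log(0.6), "B": math.log(0.4)},
    {"A": math.log(0.55), "B": math.log(0.45)},
    {"A": math.log(0.7), "B": math.log(0.3)},
]

def joint_objective(labels, lam=1.0):
    """Sum of per-input log-probs plus an entropy bonus on the label
    marginal, which penalizes collapsing to one answer for every input."""
    ll = sum(lp[y] for lp, y in zip(log_probs, labels))
    counts = Counter(labels)
    n = len(labels)
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return ll + lam * entropy

# Exhaustively score every batch labeling (feasible only for this toy size).
best = max(itertools.product("AB", repeat=3), key=joint_objective)
```

    Note how the jointly optimal labeling can differ from greedily taking each input's most likely label: the regularizer trades a little per-input likelihood for a non-degenerate label distribution.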

    The approach of unsupervised fine-tuning works by letting the model iteratively improve its predictions using only its own feedback. It formulates an optimization objective where predictions for a batch of inputs are generated together and their joint probability is maximized. This method uses LoRA (Low-Rank Adaptation) for efficient weight updates and introduces a regularization step to avoid trivial solutions, such as predicting the same answer for all inputs. For situations where weight access isn’t available, such as with GPT-4, the researchers developed unsupervised in-context learning. This method mimics the effect of labeled ICL by using previously generated outputs as pseudo-labels, refining predictions over multiple iterations without human annotations. Each iteration conditions the model on its prior outputs and generates a more accurate answer, simulating a supervised learning loop through self-generated data.

    The performance improvements from these unsupervised methods were substantial. On the GSM8K dataset, designed for math reasoning, unsupervised ICL applied to the Qwen2.5-Math model achieved a 39.2% absolute improvement over the standard zero-shot baseline. Similarly, for the Llama-3.1-8B model tested across 13 natural language processing tasks, unsupervised fine-tuning delivered a 23% average gain in accuracy and matched the performance of fully supervised fine-tuning in 6 of the 13 tasks. In vision-language tasks, unsupervised ICL also demonstrated strong results, showing a 23% gain on the Food101 dataset and significant improvements across other benchmarks. The research even extended to GPT-4o, a closed-weight model, where a 3% improvement was observed on ImageNet, reinforcing the framework’s versatility.

    This work reveals a meaningful shift in how foundation models can adapt. The researchers addressed the core limitation (reliance on labeled data and manual configuration) by introducing a robust and scalable self-supervised strategy. Their joint inference framework is a practical, generalizable approach that redefines the boundaries of unsupervised learning for large-scale AI models.


    Check out the Paper and Project. All credit for this research goes to the researchers of this project.

    The post Foundation Models No Longer Need Prompts or Labels: EPFL Researchers Introduce a Joint Inference Framework for Fully Unsupervised Adaptation Using Fine-Tuning and In-Context Learning appeared first on MarkTechPost.
