
    Why Generalization in Flow Matching Models Comes from Approximation, Not Stochasticity

    June 21, 2025

    Introduction: Understanding Generalization in Deep Generative Models

Deep generative models, including diffusion and flow matching models, have shown outstanding performance in synthesizing realistic multi-modal content across images, audio, video, and text. However, the generalization capabilities and underlying mechanisms of these models remain poorly understood. The core challenge is determining whether generative models truly generalize or simply memorize their training data. Current research offers conflicting evidence: some studies show that large diffusion models memorize individual samples from their training sets, while others find clear signs of generalization when models are trained on large datasets. This contradiction points to a sharp phase transition between memorization and generalization.

    Existing Literature on Flow Matching and Generalization Mechanisms

Existing research includes the use of closed-form solutions, studies of memorization versus generalization, and characterizations of distinct phases in generative dynamics. Methods such as closed-form velocity field regression and a smoothed version of optimal velocity generation have been proposed. Studies of memorization relate the transition to generalization to training dataset size through geometric interpretations, while others focus on stochasticity in the target objectives. Temporal regime analysis identifies distinct phases in the generative dynamics, which depend on dimension and sample count. However, these validation methods rely on stochasticity in the backward process, which does not apply to flow matching models, leaving significant gaps in understanding.
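To make the objects discussed here concrete, the following is a minimal sketch of the conditional flow matching setup for a linear interpolation path. The notation (x_0, x_1, v_theta, u_star) follows common convention and is not taken verbatim from the paper.

```latex
% Conditional flow matching (CFM): regress a network v_\theta on a
% per-sample (stochastic) target velocity along the interpolation path
% x_t = (1 - t)\,x_0 + t\,x_1, with x_0 \sim \mathcal{N}(0, I) noise
% and x_1 \sim p_{\mathrm{data}}.
\mathcal{L}_{\mathrm{CFM}}(\theta)
  = \mathbb{E}_{t,\, x_0,\, x_1}
    \big\| v_\theta(x_t, t) - (x_1 - x_0) \big\|^2

% The marginal (optimal) velocity field that this regression implicitly
% targets is the conditional expectation of the per-sample targets:
u^\star(x, t) = \mathbb{E}\big[\, x_1 - x_0 \mid x_t = x \,\big]
```

The second identity is why the stochasticity of the per-sample targets can average out: the minimizer of the squared-error regression is the conditional mean, which over a finite training set admits a closed form.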

    New Findings: Early Trajectory Failures Drive Generalization

Researchers from Université Jean Monnet Saint-Etienne and Université Claude Bernard Lyon address whether training on noisy or stochastic targets improves generalization in flow matching, and identify its main sources. Their analysis reveals that generalization emerges when limited-capacity neural networks fail to approximate the exact velocity field during critical time intervals in the early and late phases of the trajectory. They find that generalization arises mainly early along flow matching trajectories, corresponding to the transition from stochastic to deterministic behavior. They also propose a learning algorithm that explicitly regresses against the exact velocity field and show that it improves generalization on standard image datasets.

    Investigating the Sources of Generalization in Flow Matching

The researchers investigate the key sources of generalization in three steps, culminating in the hybrid-model construction sketched below. First, they challenge the target-stochasticity assumption by using closed-form formulations of the optimal velocity field, showing that beyond small time values the weighted average of conditional flow matching targets collapses to a single expectation value. Second, they analyze the approximation quality between learned and optimal velocity fields through systematic experiments on subsampled CIFAR-10 datasets ranging from 10 to 10,000 samples. Third, they construct hybrid models whose trajectories are governed piecewise by the optimal velocity field on early time intervals and by the learned velocity field on later ones, with an adjustable threshold parameter that determines the critical period.
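A minimal sketch of such a hybrid sampler follows, assuming a simple Euler integrator, the linear interpolation path from the formula above, and a learned model with a hypothetical `model(x, t)` signature; the function names are illustrative, and the paper's exact construction may differ.

```python
import torch

def optimal_velocity(x, t, data):
    """Closed-form optimal velocity for the linear path x_t = (1-t)x0 + t*x1
    over a finite training set `data` of shape (N, d): a softmax-weighted
    average of the per-sample targets (x1 - x_t) / (1 - t)."""
    # p(x_t | x1) is Gaussian with mean t*x1 and std (1 - t), so the
    # posterior weight of each training point follows from Bayes' rule.
    diffs = x.unsqueeze(1) - t * data.unsqueeze(0)          # (B, N, d)
    logw = -(diffs ** 2).sum(-1) / (2 * (1.0 - t) ** 2)     # (B, N)
    w = torch.softmax(logw, dim=1)                          # posterior weights
    targets = (data.unsqueeze(0) - x.unsqueeze(1)) / (1.0 - t)
    return (w.unsqueeze(-1) * targets).sum(dim=1)           # (B, d)

def hybrid_sample(model, data, batch=16, tau=0.3, steps=100):
    """Euler-integrate the flow ODE, switching from the exact field to
    the learned one at the threshold time tau."""
    x = torch.randn(batch, data.size(1))                    # x0 ~ N(0, I)
    dt = 1.0 / steps
    for k in range(steps):
        t = k * dt
        if t < tau:
            v = optimal_velocity(x, t, data)                # exact, early phase
        else:
            v = model(x, torch.full((batch,), t))           # learned, late phase
        x = x + dt * v                                      # Euler step
    return x
```

Sweeping `tau` is what lets the authors localize where generalization originates: if samples generated with the exact field early and the learned field late already generalize, the learned network's early-time approximation error is doing the work.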

    Empirical Flow Matching: A Learning Algorithm for Deterministic Targets

The researchers implement a learning algorithm that regresses against more deterministic targets using closed-form formulas, sketched below. The study compares vanilla conditional flow matching, optimal transport flow matching, and empirical flow matching on the CIFAR-10 and CelebA datasets, using multiple samples to estimate empirical means. Evaluation metrics include the Fréchet Inception Distance computed with both Inception-V3 and DINOv2 embeddings for a less biased assessment. The algorithm has computational complexity O(M × |B| × d), where M is the number of samples used for the empirical mean, |B| the batch size, and d the data dimension. Training results demonstrate that increasing M yields less stochastic targets and stable performance improvements, with modest computational overhead when M equals the batch size.
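Below is a minimal sketch of one such training step, assuming the same linear path as above and drawing the M reference samples from the batch itself; the target weighting mirrors the closed-form field restricted to M samples, so the target becomes less stochastic as M grows, at the stated O(M × |B| × d) cost. Names such as `efm_step` and the `model(x, t)` signature are illustrative, not from the paper.

```python
import torch

def efm_step(model, optimizer, x1_batch, M):
    """One empirical flow matching step: regress on an empirical mean of
    M conditional targets instead of a single stochastic target."""
    B, d = x1_batch.shape
    # Keep t away from 1 for numerical stability (illustrative choice).
    t = torch.rand(B, 1) * (1.0 - 1e-3)
    x0 = torch.randn(B, d)
    xt = (1.0 - t) * x0 + t * x1_batch                  # point on linear path
    # Estimate the exact velocity at (xt, t) from M reference points,
    # weighted by the Gaussian likelihood p(xt | x1).
    ref = x1_batch[torch.randint(B, (M,))]              # (M, d)
    diffs = xt.unsqueeze(1) - t.unsqueeze(1) * ref.unsqueeze(0)   # (B, M, d)
    logw = -(diffs ** 2).sum(-1) / (2 * (1.0 - t) ** 2)           # (B, M)
    w = torch.softmax(logw, dim=1)
    targets = (ref.unsqueeze(0) - xt.unsqueeze(1)) / (1.0 - t).unsqueeze(1)
    v_target = (w.unsqueeze(-1) * targets).sum(dim=1)             # (B, d)
    loss = ((model(xt, t.squeeze(1)) - v_target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

With M = 1 this reduces to a target equivalent to vanilla conditional flow matching up to the Bayes weighting; with M = |B| every batch element contributes to every target, which is where the M × |B| × d term in the cost comes from.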

    Conclusion: Velocity Field Approximation as the Core of Generalization

In this paper, the researchers challenge the assumption that stochasticity in the loss function drives generalization in flow matching models, clarifying instead the critical role of exact velocity field approximation. While the work provides empirical insights into practically learned models, a precise characterization of learned velocity fields away from the optimal trajectories remains an open challenge, suggesting future work on architectural inductive biases. The broader implications include concerns about the potential misuse of improved generative models for deepfakes, privacy violations, and synthetic content generation, so careful attention to ethical applications is necessary.

Why This Research Matters

    This research is significant because it challenges a prevailing assumption in generative modeling—that stochasticity in training objectives is a key driver of generalization in flow matching models. By demonstrating that generalization instead arises from the failure of neural networks to precisely approximate the closed-form velocity field, especially during early trajectory phases, the study reframes our understanding of what enables models to produce novel data. This insight has direct implications for designing more efficient and interpretable generative systems, reducing computational overhead while maintaining or even enhancing generalization. It also informs better training protocols that avoid unnecessary stochasticity, improving reliability and reproducibility in real-world applications.


Check out the Paper. All credit for this research goes to the researchers of this project.
