
    Sampling Without Data is Now Scalable: Meta AI Releases Adjoint Sampling for Reward-Driven Generative Modeling

    May 21, 2025

    Data Scarcity in Generative Modeling

    Generative models traditionally rely on large, high-quality datasets to produce samples that replicate the underlying data distribution. However, in fields like molecular modeling or physics-based inference, acquiring such data can be computationally infeasible or even impossible. Instead of labeled data, only a scalar reward—typically derived from a complex energy function—is available to judge the quality of generated samples. This presents a significant challenge: how can one train generative models effectively without direct supervision from data?

    Meta AI Introduces Adjoint Sampling, a New Learning Algorithm Based on Scalar Rewards

    Meta AI tackles this challenge with Adjoint Sampling, a novel learning algorithm designed for training generative models using only scalar reward signals. Built on the theoretical framework of stochastic optimal control (SOC), Adjoint Sampling reframes the training process as an optimization task over a controlled diffusion process. Unlike standard generative models, it does not require explicit data. Instead, it learns to generate high-quality samples by iteratively refining them using a reward function—often derived from physical or chemical energy models.
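Schematically, the SOC view amounts to choosing a control drift that steers a reference diffusion while paying a quadratic control cost plus a terminal cost derived from the energy. The precise terminal cost, base drift, and noise schedule are details the paper makes exact, so the display below is only a hedged sketch of the general form:

\[
\min_{u}\ \mathbb{E}\!\left[\int_0^1 \tfrac{1}{2}\,\lVert u(X_t, t)\rVert^2\, dt \,+\, g(X_1)\right],
\qquad dX_t = u(X_t, t)\, dt + \sigma\, dW_t,
\]

with the terminal cost g constructed from the energy so that the optimally controlled process ends at the target (e.g., Boltzmann) distribution.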

    Adjoint Sampling excels in scenarios where only an unnormalized energy function is accessible. It produces samples that align with the target distribution defined by this energy, bypassing the need for corrective methods like importance sampling or MCMC, which are computationally intensive.

    Source: https://arxiv.org/abs/2504.11713
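As a concrete illustration of this reward-only setting, here is a minimal Python/PyTorch sketch of a toy unnormalized energy and the scalar reward it induces. The lennard_jones_energy function is a simplified stand-in for illustration, not the eSEN model used in the paper; it only shows the shape of the interface the sampler sees.

```python
import torch

def lennard_jones_energy(x, eps=1.0, sigma=1.0):
    """Toy unnormalized energy: pairwise Lennard-Jones potential over n particles.

    x: (n_particles, 3). The target is the Boltzmann density p(x) ~ exp(-E(x)),
    whose normalizing constant is unknown and never needed.
    """
    diffs = x.unsqueeze(0) - x.unsqueeze(1)                 # (n, n, 3) pairwise differences
    dists = diffs.norm(dim=-1) + torch.eye(x.shape[0])      # pad the diagonal to avoid zeros
    inv6 = (sigma / dists) ** 6
    pair_energies = 4.0 * eps * (inv6 ** 2 - inv6)
    return pair_energies.triu(diagonal=1).sum()             # count each pair once

def scalar_reward(x):
    # The only training signal available to the sampler: a scalar derived from the energy.
    return -lennard_jones_energy(x)
```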

    Technical Details

The foundation of Adjoint Sampling is a stochastic differential equation (SDE) that models how sample trajectories evolve. The algorithm learns a control drift u(x, t) such that the final state of these trajectories approximates a desired distribution (e.g., Boltzmann). A key innovation is its use of Reciprocal Adjoint Matching (RAM)—a loss function that enables gradient-based updates using only the initial and final states of sample trajectories. This sidesteps the need to backpropagate through the entire diffusion path, greatly improving computational efficiency.
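The sketch below illustrates only the simulation side of this setup: a small hypothetical ControlDrift network standing in for u(x, t) and an Euler-Maruyama loop that integrates the controlled SDE and keeps just the endpoints. It does not reproduce the RAM loss itself, and the architecture, noise scale, and step count are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class ControlDrift(nn.Module):
    """Hypothetical network for the control drift u_theta(x, t)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        # t is a Python float in [0, 1]; append it as an extra input feature
        t_feat = torch.full((x.shape[0], 1), t, dtype=x.dtype)
        return self.net(torch.cat([x, t_feat], dim=-1))

@torch.no_grad()
def simulate_controlled_sde(u, x0, sigma=1.0, n_steps=100):
    """Euler-Maruyama integration of dX_t = u(X_t, t) dt + sigma dW_t on [0, 1].

    Only the endpoints (x0, x1) are kept; intermediate states are discarded,
    matching the fact that the loss never backpropagates through the path.
    """
    dt = 1.0 / n_steps
    x = x0.clone()
    for k in range(n_steps):
        t = k * dt
        x = x + u(x, t) * dt + sigma * (dt ** 0.5) * torch.randn_like(x)
    return x0, x
```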

    By sampling from a known base process and conditioning on terminal states, Adjoint Sampling constructs a replay buffer of samples and gradients, allowing multiple optimization steps per sample. This on-policy training method provides scalability unmatched by previous approaches, making it suitable for high-dimensional problems like molecular conformer generation.
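A schematic of that buffer-driven loop is sketched below, reusing the simulate_controlled_sde helper from the previous sketch. The loss shown is a placeholder regression toward the cached terminal energy gradient, not the paper's Reciprocal Adjoint Matching objective; the point is only the amortization pattern, in which a handful of energy-gradient evaluations supports many cheap optimizer steps. The energy_fn argument is assumed to map a (batch, dim) tensor to per-sample energies.

```python
import torch

def training_round(u, optimizer, energy_fn, batch_size=64, dim=3,
                   updates_per_round=10):
    """One outer round: few energy-gradient evaluations, many optimizer steps."""
    # 1. Roll out endpoint pairs under the current control (no grad through the path).
    x0 = torch.randn(batch_size, dim)
    _, x1 = simulate_controlled_sde(u, x0)

    # 2. Query the energy model once per sample and cache the terminal gradients.
    x1 = x1.detach().requires_grad_(True)
    grad_E, = torch.autograd.grad(energy_fn(x1).sum(), x1)
    buffer = (x0.detach(), x1.detach(), grad_E.detach())

    # 3. Take several gradient steps from the cached buffer, yielding far more
    #    parameter updates than energy evaluations (the source of scalability).
    for _ in range(updates_per_round):
        x0_b, x1_b, g_b = buffer
        t = torch.rand(()).item()                # random time along the trajectory
        x_t = (1 - t) * x0_b + t * x1_b          # crude stand-in for the bridge state
        loss = ((u(x_t, t) + g_b) ** 2).mean()   # placeholder regression, not RAM
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()
```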

    Moreover, Adjoint Sampling supports geometric symmetries and periodic boundary conditions, enabling models to respect molecular invariances like rotation, translation, and torsion. These features are crucial for physically meaningful generative tasks in chemistry and physics.
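One of the simpler of these symmetries, translation invariance, can be respected by centering coordinates before any energy or drift evaluation, as in the brief sketch below; rotational and torsional invariances and periodic boundary conditions require more machinery than is shown here.

```python
import torch

def remove_center_of_mass(x):
    """Center particle coordinates so the mean position sits at the origin.

    x: (n_particles, 3). Evaluating the energy or the drift on centered
    coordinates makes the model invariant to global translations; rotations,
    torsions, and periodic cells need additional handling.
    """
    return x - x.mean(dim=0, keepdim=True)
```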

    Performance Insights and Benchmark Results

Adjoint Sampling achieves state-of-the-art results in both synthetic and real-world tasks. On synthetic benchmarks such as the Double-Well (DW-4) and Lennard-Jones (LJ-13 and LJ-55) potentials, it significantly outperforms baselines like DDS and PIS, especially in the number of energy evaluations required. For example, where DDS and PIS require 1,000 evaluations per gradient update, Adjoint Sampling uses only three, with similar or better performance in Wasserstein distance and effective sample size (ESS).

    In a practical setting, the algorithm was evaluated on large-scale molecular conformer generation using the eSEN energy model trained on the SPICE-MACE-OFF dataset. Adjoint Sampling, especially its Cartesian variant with pretraining, achieved up to 96.4% recall and 0.60 Å mean RMSD, surpassing RDKit ETKDG—a widely used chemistry-based baseline—across all metrics. The method generalizes well to the GEOM-DRUGS dataset, showing substantial improvements in recall while maintaining competitive precision.

    The algorithm’s ability to explore the configuration space broadly, aided by its stochastic initialization and reward-based learning, results in greater conformer diversity—critical for drug discovery and molecular design.

    Conclusion: A Scalable Path Forward for Reward-Driven Generative Models

    Adjoint Sampling represents a major step forward in generative modeling without data. By leveraging scalar reward signals and an efficient on-policy training method grounded in stochastic control, it enables scalable training of diffusion-based samplers with minimal energy evaluations. Its integration of geometric symmetries and its ability to generalize across diverse molecular structures position it as a foundational tool in computational chemistry and beyond.


Check out the Paper, Model on Hugging Face and GitHub Page. All credit for this research goes to the researchers of this project.

    The post Sampling Without Data is Now Scalable: Meta AI Releases Adjoint Sampling for Reward-Driven Generative Modeling appeared first on MarkTechPost.

