
    PoE-World + Planner Outperforms Reinforcement Learning (RL) Baselines in Montezuma’s Revenge with Minimal Demonstration Data

    June 20, 2025

    The Importance of Symbolic Reasoning in World Modeling

    Understanding how the world works is key to creating AI agents that can adapt to complex situations. While neural network-based models, such as Dreamer, offer flexibility, they require massive amounts of data to learn effectively, far more than humans typically do. On the other hand, newer methods use program synthesis with large language models to generate code-based world models. These are more data-efficient and can generalize well from limited input. However, their use has been mostly limited to simple domains, such as text or grid worlds, as scaling to complex, dynamic environments remains a challenge due to the difficulty of generating large, comprehensive programs.

    Limitations of Existing Programmatic World Models

    Recent research has investigated the use of programs to represent world models, often leveraging large language models to synthesize Python transition functions. Approaches like WorldCoder and CodeWorldModels generate a single, large program, which limits their scalability in complex environments and their ability to handle uncertainty and partial observability. Some studies focus on high-level symbolic models for robotic planning by integrating visual input with abstract reasoning. Earlier efforts employed restricted domain-specific languages tailored to specific benchmarks or utilized conceptually related structures, such as factor graphs in Schema Networks. Theoretical models, such as AIXI, also explore world modeling using Turing machines and history-based representations.

    Introducing PoE-World: Modular and Probabilistic World Models

    Researchers from Cornell, Cambridge, The Alan Turing Institute, and Dalhousie University introduce PoE-World, an approach to learning symbolic world models by combining many small, LLM-synthesized programs, each capturing a specific rule of the environment. Instead of creating one large program, PoE-World builds a modular, probabilistic structure that can learn from brief demonstrations. This setup supports generalization to new situations, allowing agents to plan effectively, even in complex games like Pong and Montezuma’s Revenge. While it doesn’t model raw pixel data, it learns from symbolic object observations and emphasizes accurate modeling over exploration for efficient decision-making.
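
    To make the idea of a programmatic expert concrete, here is a minimal sketch of what one such expert could look like, assuming a symbolic observation format in which each object is a Python dictionary. The function name, dictionary keys, and the gravity rule itself are illustrative assumptions, not code from the paper.

    ```python
    # Hypothetical sketch of a single programmatic expert: a tiny Python function
    # encoding one rule of the environment. The observation layout and the rule
    # are assumptions for illustration, not taken from the PoE-World codebase.

    def gravity_expert(history, action):
        """Predict the player's vertical velocity: fall unless supported by a platform."""
        state = history[-1]                       # latest symbolic observation
        player = state["player"]                  # e.g. {"x": 40, "y": 80, "vy": 0, "on_platform": False}
        if player["on_platform"]:
            return {"player.vy": 0}               # supported: no vertical motion
        return {"player.vy": player["vy"] + 1}    # unsupported: accelerate downward
    ```

    Many such small, single-rule functions, rather than one monolithic program, are what the approach combines into a full world model.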

    Architecture and Learning Mechanism of PoE-World

    PoE-World models the environment as a combination of small, interpretable Python programs called programmatic experts, each responsible for a specific rule or behavior. These experts are weighted and combined to predict future states based on past observations and actions. By treating features as conditionally independent and learning from the full history, the model remains modular and scalable. Hard constraints refine predictions, and experts are updated or pruned as new data is collected. The model supports planning and reinforcement learning by simulating likely future outcomes, enabling efficient decision-making. Programs are synthesized using LLMs and interpreted probabilistically, with expert weights optimized via gradient descent.
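
    As a rough illustration of how such experts might be combined, the sketch below treats each expert as producing a distribution over one feature’s next value, multiplies those distributions with learned per-expert weights (a product of experts in log space), and fits the weights by gradient ascent on the log-likelihood of an observed transition. The array layout, weight update, and toy data are assumptions for illustration, not the paper’s implementation.

    ```python
    import numpy as np

    def poe_distribution(log_expert_probs, weights):
        """Combine per-expert log-probabilities over one feature's possible next values.

        log_expert_probs: array of shape (num_experts, num_values).
        Returns the normalized product-of-experts distribution over the values.
        """
        logits = weights @ log_expert_probs      # weighted sum of log-probabilities
        logits -= logits.max()                   # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    def weight_gradient(log_expert_probs, weights, observed_idx):
        """Gradient of the observed value's log-likelihood with respect to the expert weights."""
        p = poe_distribution(log_expert_probs, weights)
        # d/dw_i [ w . log p_i(x*) - log Z ] = log p_i(x*) - E_p[ log p_i ]
        return log_expert_probs[:, observed_idx] - log_expert_probs @ p

    # Toy usage: two experts over a binary feature; fit weights on one observed transition.
    log_ep = np.log(np.array([[0.9, 0.1],    # expert 1: strongly predicts value 0
                              [0.4, 0.6]]))  # expert 2: mildly prefers value 1
    w = np.ones(2)
    for _ in range(100):
        w += 0.1 * weight_gradient(log_ep, w, observed_idx=0)   # gradient ascent step
    ```

    Because features are treated as conditionally independent given the history, a combination like this would be applied per feature, with hard constraints further filtering the resulting predictions, as the paragraph above describes.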

    Empirical Evaluation on Atari Games

    The researchers evaluate their agent, PoE-World + Planner, on Atari’s Pong and Montezuma’s Revenge, including harder, modified versions of these games. Using minimal demonstration data, the method outperforms baselines such as PPO, ReAct, and WorldCoder, particularly in low-data settings. PoE-World demonstrates strong generalization by accurately modeling game dynamics, even in altered environments without new demonstrations. It’s also the only method to consistently score positively in Montezuma’s Revenge. Pre-training policies in PoE-World’s simulated environment also accelerates learning in the actual game. Unlike WorldCoder’s limited and sometimes inaccurate models, PoE-World produces more detailed, constraint-aware representations, leading to better planning and more realistic in-game behavior.
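
    The summary does not spell out the planner’s internals, but a minimal receding-horizon sketch of the general idea, assuming a hypothetical model.predict interface over symbolic states and a reward function (neither taken from the paper), could look like this:

    ```python
    from itertools import product

    def plan_next_action(model, reward_fn, history, actions, horizon=3):
        """Score short action sequences by simulating them in the learned world model."""
        best_action, best_return = None, float("-inf")
        for seq in product(actions, repeat=horizon):
            h, total = list(history), 0.0
            for a in seq:
                next_state = model.predict(h, a)   # hypothetical: most likely next symbolic state
                total += reward_fn(next_state)
                h.append(next_state)
            if total > best_return:
                best_action, best_return = seq[0], total
        return best_action                          # execute the first action, then replan
    ```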

    Conclusion: Symbolic, Modular Programs for Scalable AI Planning

    In conclusion, understanding how the world works is crucial to building adaptive AI agents; however, traditional deep learning models require large datasets and struggle to update flexibly with limited input. Inspired by how humans and symbolic systems recombine knowledge, the study proposes PoE-World. This method utilizes large language models to synthesize modular, programmatic “experts” that represent different parts of the world. These experts combine compositionally to form a symbolic, interpretable world model that supports strong generalization from minimal data. Tested on Atari games like Pong and Montezuma’s Revenge, this approach demonstrates efficient planning and strong performance, even in unfamiliar scenarios. Code and demos are publicly available.


    Check out the Paper, Project Page, and GitHub Page. All credit for this research goes to the researchers of this project.

    The post PoE-World + Planner Outperforms Reinforcement Learning RL Baselines in Montezuma’s Revenge with Minimal Demonstration Data appeared first on MarkTechPost.
