
    Advancing Test-Time Computing: Scaling System-2 Thinking for Robust and Cognitive AI

    January 8, 2025

The o1 model’s impressive performance on complex reasoning highlights the potential of scaling test-time computing, which strengthens System-2 thinking by allocating greater computational effort during inference. While deep learning’s scaling effects have driven advances in AI, particularly in LLMs such as GPT, further scaling during training faces limits from data scarcity and computational constraints. Moreover, current models often lack robustness and struggle with intricate tasks, relying primarily on fast, intuitive System-1 thinking. The o1 model, introduced by OpenAI in 2024, incorporates System-2 thinking and achieves superior performance on complex reasoning tasks through test-time computing scaling. This demonstrates that increasing computational effort during inference improves accuracy, addressing some limitations of traditional training-phase scaling.

System-1 and System-2 thinking, terms borrowed from cognitive psychology, describe different processing strategies in AI. System-1 models rely on pattern recognition and fast, intuitive responses, and lack robustness and adaptability under distribution shifts. Earlier efforts to improve robustness, such as test-time adaptation (TTA), focused on parameter updates or external input adjustments, but left models with only weak System-2 capabilities. With the rise of LLMs, System-2 models have gained traction: they reason incrementally and generate intermediate steps, as in Chain-of-Thought (CoT) prompting. While this improves reasoning over direct-output methods, it remains prone to cumulative errors. Retrieval-augmented generation (RAG) partially addresses factual inaccuracies, but its impact on reasoning ability is limited, leaving CoT-enabled models at an early stage of System-2 thinking.
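
The contrast between direct-output prompting and CoT prompting comes down to how the prompt is built. The sketch below is illustrative only; the prompt templates and the example question are assumptions, not taken from the survey.

```python
def build_direct_prompt(question: str) -> str:
    """Direct prompting: ask for the answer with no intermediate steps."""
    return f"Q: {question}\nA:"

def build_cot_prompt(question: str) -> str:
    """Chain-of-Thought prompting: the trailing cue elicits intermediate
    reasoning steps before the final answer (the zero-shot CoT style)."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("If a train travels 60 km in 45 minutes, what is its speed in km/h?")
print(prompt)
```

The only difference is the reasoning cue, yet it changes the decoding trajectory: the model spends more tokens (and thus more test-time compute) before committing to an answer.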

    Researchers from Soochow University, the National University of Singapore, and Ant Group explored test-time computing, tracing its evolution from System-1 to System-2 models. Initially applied to System-1 models to address distribution shifts and enhance robustness through parameter updates, input modifications, and output calibration, test-time computing now strengthens reasoning in System-2 models using strategies like repeated sampling, self-correction, and tree search. These methods enable models to solve complex problems by simulating diverse thinking patterns, reflecting on errors, and improving reasoning depth. The survey highlights this progression and further discusses future research directions for developing robust, cognitively capable AI systems.
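
The tree-search strategy mentioned above can be illustrated with a toy beam search over partial "thoughts": each node is a partial solution, a score prunes the frontier, and deeper exploration trades compute for reasoning depth. This is a minimal sketch of the idea, not any surveyed system's implementation; the arithmetic task and scoring rule are assumptions.

```python
from heapq import nsmallest

def beam_search(start, target, ops, beam_width=3, max_depth=6):
    """Toy tree search over reasoning steps: each state is a value plus
    the steps taken; keep the beam_width states closest to the target
    at each depth (a simple score-based pruning rule)."""
    beam = [(start, [])]
    for _ in range(max_depth):
        children = []
        for value, steps in beam:
            for name, fn in ops:
                children.append((fn(value), steps + [name]))
        for value, steps in children:
            if value == target:
                return steps
        # prune: keep only the states nearest the goal
        beam = nsmallest(beam_width, children, key=lambda s: abs(s[0] - target))
    return None

# task: turn 1 into 10 using the allowed operations
ops = [("+3", lambda v: v + 3), ("*2", lambda v: v * 2)]
path = beam_search(1, 10, ops)
print(path)
```

Widening the beam or deepening the search is exactly the test-time scaling knob: more inference compute explores more of the reasoning space.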

TTA fine-tunes models during inference using information from the test sample. Key considerations include the learning signal, which parameters to update, and efficiency. Test-time Training (TTT) derives its learning signal from auxiliary tasks, while Fully Test-time Adaptation (FTTA) leverages internal feedback (e.g., entropy minimization) but requires safeguards against model collapse. Human feedback is also used for tasks like QA and cross-modal retrieval. To improve efficiency, parameter updates target specific layers (e.g., normalization layers or adapters), and techniques such as episodic TTA or exponential moving averages counter catastrophic forgetting. Methods like FOA further refine adaptation by optimizing prompts without backpropagation.
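
The entropy-minimization signal used in FTTA can be shown in a minimal pure-Python sketch. Real methods update normalization-layer parameters via autograd over batches of test inputs; here, as a simplification, one analytic gradient step is taken directly on a logit vector, which sharpens the prediction on an unlabeled test sample.

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_min_step(logits, lr=0.5):
    """One FTTA-style update: descend the gradient of prediction entropy
    with respect to the logits. For p = softmax(z) and H = -sum p*log p,
    the gradient is dH/dz_j = -p_j * (log p_j + H)."""
    p = softmax(logits)
    h = entropy(p)
    grad = [-pj * (math.log(pj) + h) for pj in p]
    return [z - lr * g for z, g in zip(logits, grad)]

logits = [1.0, 0.8, 0.2]           # an uncertain prediction on a test sample
updated = entropy_min_step(logits)
print(entropy(softmax(logits)), entropy(softmax(updated)))
```

Because the signal rewards confidence regardless of correctness, repeated steps can collapse the model onto one class — which is why FTTA methods need the safeguards mentioned above.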

Test-time reasoning leverages extended inference time to search for human-like reasoning trajectories within the decoding space. Its two core components are feedback modeling and search strategies. Feedback modeling evaluates outputs through score-based and verbal feedback: score-based feedback uses verifiers to score outputs on final correctness (outcome-based) or on the quality of the reasoning process (process-based), while verbal feedback offers interpretability and correction suggestions via natural-language critiques, often from LLMs such as GPT-4. Search strategies include repeated sampling and self-correction, in which diverse responses are generated and refined. Multi-agent debate and self-critique further enhance reasoning by leveraging external feedback or intrinsic evaluation mechanisms.
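
Repeated sampling combined with a score-based verifier reduces to best-of-N selection. In the sketch below, the sampler and verifier are toy stand-ins (real systems draw N stochastic decodes from an LLM and score them with a learned reward model), but the selection logic is the same.

```python
import random

def sample_answers(question, n, rng):
    """Toy sampler standing in for n stochastic LLM decodes:
    the true sum plus some noise."""
    true_answer = sum(map(int, question.split("+")))
    return [true_answer + rng.choice([-2, -1, 0, 0, 1]) for _ in range(n)]

def verifier_score(question, answer):
    """Toy outcome-based verifier: higher is better, peaking at the
    correct answer (real verifiers are learned scoring models)."""
    return -abs(answer - sum(map(int, question.split("+"))))

def best_of_n(question, n=8, seed=0):
    """Repeated sampling + score-based selection: draw n candidates,
    keep the one the verifier ranks highest."""
    rng = random.Random(seed)
    candidates = sample_answers(question, n, rng)
    return max(candidates, key=lambda a: verifier_score(question, a))

print(best_of_n("12+30"))
```

Increasing N is the simplest test-time scaling law in practice: accuracy rises with sample count until the verifier's discrimination, not the sampler, becomes the bottleneck.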

In conclusion, the future of test-time computing involves several key directions. First, extending the generalization of System-2 models beyond domain-specific tasks like math and code, toward scientific discovery and weak-to-strong generalization, is vital. Second, expanding multimodal reasoning by integrating modalities such as speech and video, and aligning the process with human cognition, holds promise. Third, balancing efficiency and performance by optimizing resource allocation and integrating acceleration strategies is critical. Fourth, establishing universal scaling laws remains challenging given the diversity of strategies and influencing factors. Lastly, combining multiple test-time strategies and adaptation methods can improve reasoning, advancing LLMs toward cognitive intelligence.


Check out the Paper. All credit for this research goes to the researchers of this project.


    The post Advancing Test-Time Computing: Scaling System-2 Thinking for Robust and Cognitive AI appeared first on MarkTechPost.
