
    Researchers from Zhipu AI and Tsinghua University Introduced the ‘Self-Critique’ pipeline: Revolutionizing Mathematical Problem Solving in Large Language Models

    April 6, 2024

    The proficiency of large language models (LLMs) in deciphering the complexities of human language has been a subject of considerable acclaim. Yet when it comes to mathematical reasoning, a skill that intertwines logic with numerical understanding, these models often falter, revealing a gap in their ability to mimic human cognitive processes. This gap underscores an urgent need for innovation, propelling research efforts to enhance the mathematical understanding of LLMs without diluting their linguistic prowess.

    Existing research includes Chain-of-Thought prompting, refined through frameworks such as Tree of Thoughts and Graph of Thoughts, which guide LLMs through structured reasoning. Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) methods, as seen in WizardMath and in work built on high-quality supervisory data, aim at direct capability improvement. Strategies like Self-Consistency and tools like MATH-SHEPHERD further enhance problem solving, while MAmmoTH and ToRA insert code to overcome computational limits, showcasing the diverse approaches to augmenting LLMs' mathematical reasoning.
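
    To make the prompting side of this landscape concrete, the sketch below shows a generic chain-of-thought prompt combined with self-consistency voting over sampled reasoning chains. The prompt wording, the generate callable, and the answer-extraction step are illustrative assumptions, not details taken from any of the cited works.

        # Illustrative only: chain-of-thought prompting plus self-consistency
        # (majority voting over several sampled reasoning chains).
        from collections import Counter

        COT_PROMPT = (
            "Q: A train travels 60 km in 1.5 hours. What is its average speed?\n"
            "A: Let's think step by step. Speed = distance / time = 60 / 1.5 = 40 km/h.\n"
            "The answer is 40.\n\n"
            "Q: {question}\n"
            "A: Let's think step by step."
        )

        def self_consistent_answer(generate, question, samples=5):
            """Sample several reasoning chains and return the most common final answer.

            `generate(prompt) -> str` is any text-generation callable (a hypothetical
            wrapper around an LLM API); the final-answer extraction is deliberately naive.
            """
            answers = []
            for _ in range(samples):
                chain = generate(COT_PROMPT.format(question=question))
                if "The answer is" in chain:
                    answers.append(chain.rsplit("The answer is", 1)[1].strip(" .\n"))
            return Counter(answers).most_common(1)[0][0] if answers else ""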

    Researchers from Zhipu AI and Tsinghua University have introduced the “Self-Critique” pipeline, which distinguishes itself by using the model’s own output as the source of feedback-driven enhancement. Unlike traditional methods that rely on external feedback, this approach internalizes the improvement mechanism, facilitating simultaneous advances in mathematical reasoning and language processing capabilities.
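
    As a rough illustration of that idea, the snippet below has the model grade its own answer through a separate critique call, the internalized feedback loop the authors describe. The function interfaces and the 0-to-10 scoring scale are assumptions made for exposition, not the paper's exact setup.

        # A minimal sketch of the self-critique idea under assumed interfaces:
        # the policy model solves the problem, and a Math-Critique-style judge
        # scores that same output against a reference answer.
        def critique_own_output(solve, critique, question, reference):
            """Generate a solution, then have the critique model grade it.

            `solve(q) -> str` and `critique(q, answer, reference) -> float` are
            hypothetical callables wrapping the policy and critique models.
            """
            answer = solve(question)
            score = critique(question, answer, reference)  # e.g. 0 (wrong) .. 10 (correct)
            return {"question": question, "answer": answer, "score": score}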

    The methodology unfolds in a structured two-phase process. First, a Math-Critique model assesses the LLM’s mathematical outputs; its judgments drive the Rejective Fine-tuning (RFT) phase, in which only responses meeting a set criterion are retained for further training. This is followed by the Direct Preference Optimization (DPO) stage, which sharpens the LLM’s problem solving by learning from pairs of correct and incorrect answers. The pipeline’s efficacy is tested on the ChatGLM3-32B model, using both established academic datasets and the specially curated MATH USER EVAL dataset to benchmark the model’s enhanced mathematical reasoning and language processing capabilities.
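
    The data-construction logic behind the two phases can be sketched as follows, assuming each sampled response already carries a critique score as in the previous snippet. The threshold, margin, and field names are illustrative choices rather than values reported in the paper.

        # Rejective Fine-tuning: keep only responses whose critique score meets
        # the acceptance criterion; the survivors become fine-tuning targets.
        def build_rft_set(records, threshold=8.0):
            return [r for r in records if r["score"] >= threshold]

        # Direct Preference Optimization: for each question, pair a high-scoring
        # (chosen) answer with a low-scoring (rejected) one.
        def build_dpo_pairs(records_by_question, margin=4.0):
            pairs = []
            for question, recs in records_by_question.items():
                ranked = sorted(recs, key=lambda r: r["score"], reverse=True)
                best, worst = ranked[0], ranked[-1]
                if best["score"] - worst["score"] >= margin:
                    pairs.append({
                        "prompt": question,
                        "chosen": best["answer"],
                        "rejected": worst["answer"],
                    })
            return pairs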

    Applied to the ChatGLM3-32B model, the Self-Critique pipeline demonstrated significant quantitative improvements in mathematical problem solving. On the MATH USER EVAL dataset, the enhanced model achieved a 17.5% increase in accuracy over its baseline version. Compared with other leading models such as InternLM2-Chat-20B and DeepSeek-Chat-67B, which posted improvements of 5.1% and 1.2% respectively, ChatGLM3-32B’s gains stood out markedly. The model’s language capabilities improved in parallel, with a 6.8% gain in linguistic task accuracy, confirming the pipeline’s ability to balance mathematical and language processing strengths.

    In summary, this research presents the “Self-Critique” pipeline, a practical tool that significantly boosts LLMs’ mathematical problem-solving capabilities while maintaining linguistic proficiency. By leveraging the model’s outputs for feedback through the Math-Critique model and implementing stages of Rejective Fine-tuning and Direct Preference Optimization, the ChatGLM3-32B model demonstrated substantial improvements in mathematical accuracy and language processing. This methodological innovation represents a significant stride towards developing more adaptable and intelligent AI systems, pointing to a promising direction for future AI research and applications.

    Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.

    The post Researchers from Zhipu AI and Tsinghua University Introduced the ‘Self-Critique’ pipeline: Revolutionizing Mathematical Problem Solving in Large Language Models appeared first on MarkTechPost.
