
    Why Apple’s Critique of AI Reasoning Is Premature

    June 22, 2025

    The debate around the reasoning capabilities of Large Reasoning Models (LRMs) has recently been reinvigorated by two prominent yet conflicting papers: Apple’s “Illusion of Thinking” and Anthropic’s rebuttal, “The Illusion of the Illusion of Thinking”. Apple’s paper claims fundamental limits in LRMs’ reasoning abilities, while Anthropic argues these claims stem from evaluation shortcomings rather than model failures.

    Apple’s study systematically tested LRMs on controlled puzzle environments, observing an “accuracy collapse” beyond specific complexity thresholds. These models, such as Claude-3.7 Sonnet and DeepSeek-R1, reportedly failed to solve puzzles like Tower of Hanoi and River Crossing as complexity increased, even exhibiting reduced reasoning effort (token usage) at higher complexities. Apple identified three distinct complexity regimes: standard LLMs outperform LRMs at low complexity, LRMs excel at medium complexity, and both collapse at high complexity. Critically, Apple’s evaluations concluded that LRMs’ limitations were due to their inability to apply exact computation and consistent algorithmic reasoning across puzzles.

    Anthropic, however, sharply challenges Apple’s conclusions, identifying critical flaws in the experimental design rather than the models themselves. They highlight three major issues:

    1. Token Limitations vs. Logical Failures: Anthropic emphasizes that the failures observed in Apple’s Tower of Hanoi experiments were primarily due to output token limits rather than reasoning deficits. Models explicitly noted their token constraints and deliberately truncated their outputs, so what appeared to be a “reasoning collapse” was essentially a practical limitation, not a cognitive failure (a back-of-the-envelope version of this arithmetic appears after this list).
    2. Misclassification of Reasoning Breakdown: Anthropic identifies that Apple’s automated evaluation framework misinterpreted intentional truncations as reasoning failures. The rigid scoring did not account for a model’s awareness of, and deliberate decisions about, output length, so LRMs were unjustly penalized.
    3. Unsolvable Problems Misinterpreted: Perhaps most significantly, Anthropic demonstrates that some of Apple’s River Crossing benchmarks were mathematically impossible to solve (e.g., instances with six or more actor/agent pairs and a boat capacity of three). Scoring these unsolvable instances as failures drastically skewed the results, penalizing models for not solving puzzles that have no solution.
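
    To see why the first point is plausible, it helps to do the arithmetic. An optimal Tower of Hanoi solution for n disks contains 2^n - 1 moves, so a fully enumerated answer grows exponentially with the very knob Apple was turning. The sketch below is a back-of-the-envelope estimate only; the tokens-per-move figure and the output budget are illustrative assumptions, not numbers taken from either paper.

    ```python
    # Back-of-the-envelope: when does a fully enumerated Tower of Hanoi
    # solution outgrow a fixed output budget? The per-move token cost and
    # the output cap below are illustrative assumptions, not measured values.
    TOKENS_PER_MOVE = 10      # rough cost of printing one move, e.g. "[3, 0, 2],"
    OUTPUT_BUDGET = 64_000    # hypothetical maximum output tokens

    for n_disks in range(5, 21):
        moves = 2 ** n_disks - 1                 # optimal move count for n disks
        tokens_needed = moves * TOKENS_PER_MOVE
        if tokens_needed > OUTPUT_BUDGET:
            print(f"n = {n_disks}: {moves} moves ~ {tokens_needed} tokens "
                  f"> budget of {OUTPUT_BUDGET}")
            break
        print(f"n = {n_disks}: {moves} moves ~ {tokens_needed} tokens (fits)")
    ```

    Under these assumptions the enumerated answer stops fitting at around a dozen disks, which is roughly the regime where a model must truncate its output regardless of whether it understands the puzzle.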

    Anthropic further tested an alternative representation method—asking models to provide concise solutions (like Lua functions)—and found high accuracy even on complex puzzles previously labeled as failures. This outcome clearly indicates the issue was with evaluation methods rather than reasoning capabilities.
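
    For concreteness, such a compact representation might look like the following. Anthropic’s probe asked for a program (a Lua function in their write-up); the Python analogue below is only a sketch of the idea, not the exact prompt or output used in the rebuttal.

    ```python
    def hanoi_moves(n, source=0, target=2, spare=1):
        """Yield the optimal move sequence for n disks as (from_peg, to_peg) pairs.
        Emitting this short generator is constant-size output, even though the
        move list it produces grows as 2**n - 1."""
        if n == 0:
            return
        yield from hanoi_moves(n - 1, source, spare, target)   # shift n-1 disks aside
        yield (source, target)                                 # move the largest disk
        yield from hanoi_moves(n - 1, spare, target, source)   # bring n-1 disks back on top

    # The generating program stays tiny even when the enumerated answer would not.
    print(sum(1 for _ in hanoi_moves(15)))   # 32767 moves from a few lines of code
    ```

    If a model can emit a correct generator like this for arbitrary n, a collapse at large n in the move-list format says more about answer-length constraints than about its grasp of the algorithm.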

    Another key point raised by Anthropic pertains to the complexity metric used by Apple—compositional depth (number of required moves). They argue this metric conflates mechanical execution with genuine cognitive difficulty. For example, while Tower of Hanoi puzzles require exponentially more moves, each decision step is trivial, whereas puzzles like River Crossing involve fewer steps but significantly higher cognitive complexity due to constraint satisfaction and search requirements.
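
    The River Crossing side of the argument can be made equally concrete. The sketch below runs a breadth-first search over the usual actor/agent formulation of the puzzle; whether this matches Apple’s exact constraint set is an assumption. It illustrates two things at once: each step is a small constraint-satisfaction problem rather than a mechanical rule, and the search finds no plan at all for six pairs with a boat capacity of three, the configuration flagged as unsolvable above.

    ```python
    from collections import deque
    from itertools import combinations

    def is_safe(group):
        # Assumed constraint (the usual actor/agent rule): an actor may not be in
        # a group containing any agent unless their own agent is also present.
        actors = {i for kind, i in group if kind == "actor"}
        agents = {i for kind, i in group if kind == "agent"}
        return all(not agents or i in agents for i in actors)

    def solvable(n_pairs, boat_capacity):
        """Breadth-first search over bank configurations; True if everyone can
        reach the far bank without ever violating the safety constraint."""
        people = frozenset([("actor", i) for i in range(n_pairs)] +
                           [("agent", i) for i in range(n_pairs)])
        start = (people, 0)                  # (who is on the near bank, boat side)
        seen, queue = {start}, deque([start])
        while queue:
            near, boat = queue.popleft()
            if not near:                     # near bank empty: everyone has crossed
                return True
            source = near if boat == 0 else people - near
            for size in range(1, boat_capacity + 1):
                for movers in combinations(source, size):
                    movers = frozenset(movers)
                    new_near = near - movers if boat == 0 else near | movers
                    # Check the boat and both banks after the crossing completes.
                    if all(map(is_safe, (movers, new_near, people - new_near))):
                        state = (new_near, 1 - boat)
                        if state not in seen:
                            seen.add(state)
                            queue.append(state)
        return False                         # state space exhausted: no plan exists

    # Classical result this search reproduces: with boat capacity 3,
    # up to 5 actor/agent pairs are solvable, 6 or more are not.
    for pairs in range(2, 7):
        print(pairs, "pairs, capacity 3:", solvable(pairs, 3))
    ```

    Note how even deciding which subset may board the boat requires checking constraints against both banks; that qualitative difference from Hanoi is precisely what a pure move-count metric hides.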

    Both papers significantly contribute to understanding LRMs, but the tension between their findings underscores a critical gap in current AI evaluation practices. Apple’s conclusion—that LRMs inherently lack robust, generalizable reasoning—is substantially weakened by Anthropic’s critique. Instead, Anthropic’s findings suggest LRMs are constrained by their testing environments and evaluation frameworks rather than their intrinsic reasoning capacities.

    Given these insights, future research and practical evaluations of LRMs must:

    • Differentiate Clearly Between Reasoning and Practical Constraints: Tests should accommodate the practical realities of token limits and model decision-making.
    • Validate Problem Solvability: Ensuring the puzzles or problems tested are actually solvable is essential for fair evaluation (a minimal screening sketch follows this list).
    • Refine Complexity Metrics: Metrics must reflect genuine cognitive challenges, not merely the volume of mechanical execution steps.
    • Explore Diverse Solution Formats: Assessing LRMs’ capabilities across various solution representations can better reveal their underlying reasoning strengths.
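
    As a sketch of what the solvability check above could look like in practice, benchmark instances can be screened with a trusted reference solver before any model output is scored. The instance format and solver interface here are hypothetical placeholders, not part of either paper’s tooling.

    ```python
    from typing import Callable, Iterable

    def screen_instances(instances: Iterable[dict],
                         reference_solver: Callable[[dict], bool]) -> list[dict]:
        """Keep only benchmark instances that a trusted reference solver can
        actually solve; unsolvable ones should never count against a model."""
        kept, dropped = [], []
        for inst in instances:
            (kept if reference_solver(inst) else dropped).append(inst)
        for inst in dropped:
            print(f"dropping unsolvable instance: {inst.get('id', inst)}")
        return kept

    # Hypothetical usage with the river-crossing search from the earlier sketch:
    # instances = [{"id": f"river-{n}", "pairs": n, "capacity": 3} for n in range(2, 8)]
    # valid = screen_instances(instances,
    #                          lambda inst: solvable(inst["pairs"], inst["capacity"]))
    ```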

    Ultimately, Apple’s claim that LRMs “can’t really reason” appears premature. Anthropic’s rebuttal demonstrates that LRMs indeed possess sophisticated reasoning capabilities that can handle substantial cognitive tasks when evaluated correctly. However, it also stresses the importance of careful, nuanced evaluation methods to truly understand the capabilities—and limitations—of emerging AI models.


    Check out the Apple Paper and Anthropic Paper. All credit for this research goes to the researchers of this project.

    The post Why Apple’s Critique of AI Reasoning Is Premature appeared first on MarkTechPost.

