Rebuttal Challenges Apple’s “Reasoning Collapse” Claim in Large Language Models
Apple’s recent study, “The Illusion of Thinking,” asserted that even advanced Large Reasoning Models (LRMs) fail on complex tasks, sparking considerable debate within the AI research community. A detailed rebuttal by Alex Lawsen of Open Philanthropy, co-authored with Anthropic’s Claude Opus model, challenges this conclusion, arguing that the original paper’s findings are largely attributable to experimental design flaws rather than inherent limitations in LRM reasoning capabilities. Lawsen’s counter-argument, titled “The…