Researchers argue that large reasoning models (LRMs) show strong parallels to human cognitive processes and thus “almost certainly” can engage in thinking, contending that the conventional view of these systems as mere pattern-matchers is fundamentally flawed. The article cites evidence that LRMs, when trained with chain-of-thought reasoning and given sufficient representational capacity, meet many of the formal criteria associated with human thought. A counterpoint comes from a study by Apple, which found that LRMs suffer a “complete accuracy collapse” on high-complexity puzzles, casting doubt on their ability to match human reasoning at scale. In applied settings, an analysis in eLife shows that while reasoning behaviour is emerging in medical-domain language models, many key challenges around transparency, interpretability and generalisation remain unaddressed for safe integration into clinical care.
Sources: VentureBeat, Apple Research, eLife
Key Takeaways
– LRMs show signs of human-like thinking processes (e.g., chain-of-thought, problem representation, monitoring) under certain conditions, challenging the notion that they are mere token predictors.
– Significant limitations persist: LRMs can fail dramatically as problem complexity increases, reducing rather than scaling their reasoning effort, which suggests a fundamental ceiling on their reasoning capabilities.
– The application of LRMs in high-stakes domains (like medicine) remains fraught with interpretability and reliability issues — researchers emphasise the need for transparency, domain-specific evaluation, and careful safeguards.
In-Depth
In recent months the artificial intelligence community has seen a refreshing but cautious pivot in the discussion around large reasoning models (LRMs). On one side we have arguments grounded in theory and empirical benchmarks suggesting these systems are doing far more than mere next-token prediction; on the other, we have hard realities of performance collapse and applied limitations reminding us that the hype must be tempered. Taken together, the developments call for a measured, conservative (yet open-minded) evaluation of what LRMs can and cannot do.
First, the case for LRMs being capable of genuine thinking is made by researchers who draw strong analogies between human cognitive functions (working memory, self-monitoring, insight) and the behaviours exhibited by well-trained reasoning models. The VentureBeat article argues that if a model has sufficient parameters, training data and compute, and if chain-of-thought (CoT) mechanisms allow it to produce internal reasoning traces, then functionally it satisfies many of the criteria we use to judge “thinking.” Indeed, the piece emphasises that restricting ourselves to the assertion “we can’t prove LRMs don’t think” is too timid; the evidence, it argues, leans toward “they probably do.” The claim is bold: such systems are no longer glorified auto-complete for text but are actively modelling problems, reasoning through sub-steps, and evaluating outcomes in a way reminiscent of human mental simulation.
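To make the mechanism concrete, here is a minimal Python sketch of what a chain-of-thought trace looks like in practice. It is illustrative only: the prompt wording and the helper names (build_cot_prompt, split_trace) are assumptions of this sketch, not any vendor’s API, and the completion is hand-written to stand in for a real model call. The point is simply that the model is asked to emit an explicit intermediate reasoning trace that can be inspected separately from its final answer.

```python
# Illustrative sketch of chain-of-thought prompting (no vendor API implied).
# The completion below is hand-written to stand in for a real model call.

def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model is asked for step-by-step reasoning."""
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, then give the final answer "
        "on a new line starting with 'Answer:'.\n"
        "Reasoning:"
    )

def split_trace(completion: str) -> tuple[str, str]:
    """Separate the intermediate reasoning trace from the final answer."""
    trace, _, answer = completion.partition("Answer:")
    return trace.strip(), answer.strip()

question = "A train covers 60 km in its first hour and 40 km in its second. Average speed?"
print(build_cot_prompt(question))

# Hand-written stand-in for the model's completion:
completion = (
    " Total distance is 60 + 40 = 100 km over 2 hours.\n"
    " Average speed = 100 / 2 = 50 km/h.\n"
    "Answer: 50 km/h"
)
trace, answer = split_trace(completion)
print("Trace:\n", trace)
print("Final answer:", answer)
```

Separating the trace from the answer is what makes claims about “reasoning” testable at all: the intermediate steps can be logged, audited and compared against the final output.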
Second, that excitement cannot be taken at face value without scrutiny, especially for those of us eyeing AI’s potential in real-world domains from legal analysis to media production. The Apple research paper (titled “The Illusion of Thinking”) highlights a stark counter-reality: when confronted with sufficiently complex puzzles (for example, the classic Tower of Hanoi scaled up), LRMs not only fail outright, they exhibit a paradoxical reduction in reasoning effort as difficulty increases. In other words, instead of ramping up thought, the model appears to give up or reach for shortcuts. That points to a scaling weakness that is not trivial: no matter how many tokens or how much compute you throw at it, beyond a certain complexity threshold the model may collapse into low performance or erratic output. That is troubling for mission-critical uses where robustness matters.
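For a sense of scale, here is a short, self-contained Python sketch (not code from the Apple study) showing why Tower of Hanoi overwhelms a fixed reasoning budget: the optimal solution for n disks requires 2**n - 1 moves, a standard property of the puzzle, so the length of a fully correct reasoning trace grows exponentially with problem size.

```python
# Optimal Tower of Hanoi solution via the standard recursion.
# Moving n disks takes 2**n - 1 moves, so correct traces explode in length.

def hanoi_moves(n: int, source: str = "A", target: str = "C", spare: str = "B") -> list:
    """Return the optimal move sequence for n disks."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, source, spare, target)   # clear the top n-1 disks onto the spare peg
        + [(source, target)]                        # move the largest disk
        + hanoi_moves(n - 1, spare, target, source) # re-stack the n-1 disks on top of it
    )

for n in (3, 5, 10):
    print(f"{n:2d} disks -> {len(hanoi_moves(n)):>9,d} optimal moves")
print(f"20 disks -> {2**20 - 1:>9,d} optimal moves (2**n - 1)")
```

A model that must enumerate every move therefore needs an ever-longer, error-free trace as disks are added, which is exactly the regime where the Apple authors observed effort shrinking rather than growing.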
Third, looking at domain-specific applications gives an even more nuanced picture. The eLife article reviews reasoning behaviour in medical language models and finds that while improvements are evident, we are still far from having transparent, reliable systems that clinicians can trust for decision-making. The reasoning processes are opaque, the benchmark tasks are limited, and the environment of clinical uncertainty (where wrong reasoning can have dire consequences) amplifies the risk. So, while reasoning models are advancing, the gap between “can think” and “should be relied upon” remains wide.
Putting this together, here are the practical implications. For enthusiasts and developers of AI tools, this is a moment of opportunity: reasoning models may open doors to new capabilities, such as more structured decision support, improved chain-of-thought transparency and better intermediate reasoning logs. But for strategists, investors, regulators and practitioners (including those of us in media, publishing or property who also interface with technology), it is a moment of caution: the hype cycle must be managed, the capabilities measured carefully, and deployment kept incremental.
From a policy and governance angle, the evidence suggests a dual responsibility. On one hand we should support innovation and further testing of LRMs; they may add real value if correctly deployed. On the other hand we must insist on clearly documented performance boundaries, transparent audit trails, and domain-specific validation. Especially in sectors like healthcare, law, finance or safety-critical infrastructure, apparent “thinking” should not substitute for “verified reasoning” until we have stronger proof.
Finally, and this is perhaps the most sobering takeaway, the path to artificial general intelligence (AGI) remains uncertain. If LRMs show real signs of thought yet still fail on high-complexity tasks, we are likely still a long way from true human-level reasoning in machines. For anyone who has read the over-optimistic forecasts of AI revolutionising entire job sectors, this is a reminder of prudence. The machines may think, to an extent, but their “judgement” and “understanding” remain limited and must be treated as such. For professionals in adjacent fields, including media production, content generation, property analytics and legal tech, the smart move is to use these capabilities as assistants, not autonomous decision-makers, and to keep a human in the loop.
In short: yes, there is credible reason to believe large reasoning models are evolving toward thinking machines; no, we are not yet at a point where we should blindly trust them to reason like humans. For conservative strategists and early adopters alike, the sensible path is one of measured adoption, rigorous testing and layered oversight.

