Researchers are increasingly dialing back their enthusiasm for AI as they get more hands-on experience with it. A new preview of Wiley’s 2025 “ExplanAItions” report shows that while AI adoption among researchers jumped from 57 percent in 2024 to 84 percent in 2025, trust in AI has dropped noticeably—concerns about hallucinations rose from 51 percent to 64 percent, and fewer scientists now believe AI already surpasses humans in many use cases. This mirrors broader industry turbulence: many large firms deploying AI have reported financial losses totaling billions, and public confidence in AI remains weak even where usage is rising. As the honeymoon fades, stakeholders are being forced to reckon more candidly with AI’s shortcomings and risks.
Key Takeaways
– AI adoption among researchers is surging, but trust is slipping—many are growing more skeptical of its reliability and more aware of its limits.
– Businesses rolling out AI are encountering real financial risks and setbacks, suggesting hype is outpacing practical performance.
– Public wariness persists even as usage grows, pointing to a widening gap between adoption and trust.
In-Depth
Let’s dig into what’s going on behind that headline. The core story begins with the Wiley “ExplanAItions” preview, which reveals a telling paradox: after years of escalating excitement around AI, the research community is pulling back on its optimism. According to the report, AI use among researchers surged from 57 percent in 2024 to 84 percent in 2025. But as familiarity grew, so did recognition of AI’s flaws. The share of researchers concerned about “hallucinations” (AI presenting fabricated information as fact) rose from 51 percent to 64 percent. Equally telling, in 2024 over half believed AI was already outperforming humans in many domains; by 2025, that confidence had fallen below one-third. Many of the scientists surveyed are now actively tempering expectations, reframing AI as an assistive tool rather than a wholesale replacement for human expertise.
This recalibration among scientists doesn’t exist in a vacuum. In the broader corporate world, the financial toll of poorly executed AI deployments is mounting. A recent EY survey found that nearly all major companies integrating AI had suffered losses, cumulatively in the billions, often due to faulty outputs, compliance issues, or unanticipated disruptions. For many, the path from proof of concept to scalable, trustworthy deployment has proven treacherous. Public perception shows the same pattern: surveys find usage of AI tools climbing even as trust remains low. People are using the technology more, but with hesitation and doubt, and that widening credibility gap means enthusiasm runs high while confidence stays fragile.
Why does this downward correction matter? First, it forces a more realistic reckoning with AI’s strengths and weaknesses: if even domain experts are pulling back, the rosy narratives need moderation. Second, organizations may need to slow their hype cycles and invest more in oversight, quality control, and validation rather than rushing to production. Finally, public trust is the currency that underpins adoption; if people feel misled or burned, backlash and regulation may follow, potentially stifling future progress.
All told, what we’re seeing now is not abandonment of AI, but maturation—a phase where hype is being supplanted by humility and critical evaluation. That’s not glamorous, but it’s essential if AI is ever going to be sustainable, credible, and socially acceptable in the long run.

