Librarians are increasingly frustrated by public claims that libraries are secretly withholding books that only AI systems know about. In reality, the confusion stems from AI chatbots generating completely fabricated titles and citations that users then assume are real, wasting time and setting false expectations for patrons and professional researchers alike. According to librarians at the Library of Virginia, roughly 15% of the reference questions they receive come from AI-generated prompts that cite books or records that don't exist, and patrons often trust the AI's hallucinations over expert human guidance. Organizations like the International Committee of the Red Cross have publicly clarified that missing references don't indicate withheld archives but rather incomplete citations or outright AI inventions, underscoring the limits of current generative systems. Reporting from other outlets notes that AI-hallucinated books and articles have been observed at libraries nationwide, with librarians repeatedly asked to track down nonexistent materials, further illustrating the practical challenges these tools create for information professionals.
Key Takeaways
– AI chatbots frequently fabricate book titles and citations, leading to mistaken beliefs that libraries are withholding information.
– Librarians report a significant and growing burden from requests based on nonexistent AI-generated materials.
– Institutions like the International Committee of the Red Cross have publicly warned that the confusion stems from AI hallucinations, not secret archives.
In-Depth
Across libraries and archival institutions, seasoned librarians are pushing back against an odd new myth: that libraries are secretly holding books and records that only artificial intelligence systems can find. This narrative, often repeated on social media and in casual conversation, has no basis in reality. What is real is a surge in AI systems confidently inventing titles, authors, and journal articles that don't exist, and users accepting these outputs as factual. At the Library of Virginia, reference staff estimate that about 15 percent of emailed research questions originate from AI tools, and the common thread is phantom books or fabricated citations.

This isn't a matter of librarians hiding information; it's a symptom of how generative AI models work. They don't verify facts the way trained researchers or cataloguers do. Instead, they generate plausible-sounding text based on statistical patterns in their training data, and those patterns can yield entirely nonexistent works.
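To make that mechanism concrete, here is a deliberately simplified sketch in Python. The bigram table and every probability in it are invented for illustration; no real model is remotely this small, but the decoupling of fluency from truth works the same way:

```python
import random

# Toy bigram "language model": continuation probabilities invented for
# illustration. The model knows word-to-word patterns and nothing else.
BIGRAMS = {
    "The": [("Hidden", 0.5), ("Secret", 0.3), ("Lost", 0.2)],
    "Hidden": [("Archives", 0.6), ("Library", 0.4)],
    "Secret": [("Archives", 0.7), ("Catalogue", 0.3)],
    "Lost": [("Manuscripts", 1.0)],
}

def generate_title(start: str = "The", max_words: int = 3) -> str:
    """Sample a plausible-sounding title one word at a time.

    Every step is a weighted random draw over likely continuations; at no
    point is the result checked against any catalogue of real books.
    """
    words = [start]
    while len(words) < max_words:
        options = BIGRAMS.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate_title())  # e.g. "The Hidden Archives": fluent, and invented
```

Nothing in that loop consults a catalogue, an index, or any source of record; it only asks which word tends to follow the last one. Scaled up by many orders of magnitude, that is why a chatbot can produce a citation that reads perfectly and corresponds to nothing.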
The International Committee of the Red Cross has even issued guidance explaining that missing records often reflect incomplete citations or misattributions, not hidden archives, and that researchers should investigate administrative histories instead of assuming secrecy. Other reporting from information professionals highlights the toll this takes on library staff, who must repeatedly explain to patrons that the item they asked for was an AI hallucination, not a real document.
Conservative observers might see this as another example of technology outpacing common sense: people trusting impressive-sounding AI over experienced experts. A confident tone does not make a tool's output accurate. Libraries have long stood as bastions of verified information, with catalogues and indexing systems built on decades of careful scholarship. AI systems, by contrast, excel at mimicking authority without the underlying verification. That contrast has never been more apparent than in the current wave of fabricated references.
The result is a practical challenge for librarians, who now spend time debunking digital illusions instead of helping with real research. It also highlights a broader need for public understanding of what AI can and, just as importantly, cannot do. Confidence in a chatbot's output does not equate to truth, and mistaking generative fabrications for genuine scholarship undermines both research and trust in human expertise. This isn't about hidden knowledge; it's about misplaced faith in technology that sounds authoritative but lacks grounding in verifiable fact.
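For readers who want a first line of defense, here is a minimal sketch of the kind of sanity check one can run on an AI-supplied title before asking a librarian to hunt for it. It queries Open Library's public search endpoint (https://openlibrary.org/search.json); the exact response field used here is an assumption based on that API's documentation, and a miss is grounds for skepticism, not proof the work doesn't exist:

```python
import json
import urllib.parse
import urllib.request

def title_exists(title: str) -> bool:
    """Check an AI-suggested title against Open Library's public search API.

    Coarse sanity check, not proof: a miss may mean the work is real but
    absent from Open Library, and a hit may be a different work with a
    similar title. The endpoint and the `numFound` field are assumptions
    based on Open Library's documented search API.
    """
    query = urllib.parse.urlencode({"title": title, "limit": 5})
    url = f"https://openlibrary.org/search.json?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return data.get("numFound", 0) > 0

# A real work should match; a chatbot fabrication usually will not.
print(title_exists("Moby-Dick"))
```

A lookup like this will never replace a reference librarian, but it catches the most blatant fabrications in seconds, which is exactly the triage librarians are currently being asked to do by hand.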