Jimmy Wales, the founder of Wikipedia, said he sees little threat from AI-generated encyclopedias such as xAI’s Grokipedia, citing persistent accuracy issues in current artificial intelligence tools. Speaking on the sidelines of the India AI Impact Summit in New Delhi, Wales stressed that human oversight remains Wikipedia’s core strength and argued that the platform’s volunteer-driven model continues to offer more reliable knowledge. His remarks, made in an interview this week, come amid growing debate over AI’s role in information platforms.
Wales said users trust Wikipedia primarily because its content is reviewed and vetted by humans. He emphasised that the organisation would not consider allowing AI systems to independently write articles at this stage. According to him, current AI outputs remain too error-prone for such responsibility. He noted that human contributors bring context and expertise that machines still lack. This human layer, he said, is critical for maintaining credibility.
Highlighting the risks of generative AI, Wales pointed to the ongoing problem of "hallucinations", where models produce incorrect or misleading information. He specifically mentioned tools such as ChatGPT and Gemini as examples where such issues persist. According to Wales, these inaccuracies become more frequent when dealing with niche or obscure topics. That limitation, he said, weakens the case for fully AI-generated encyclopedias. He warned that reliability remains the biggest hurdle for the technology.
Because of these concerns, Wales dismissed Grokipedia as a serious competitor. He described the AI-driven encyclopedia as a “cartoon imitation” rather than a credible alternative. The Wikipedia founder argued that expert human contributors provide depth and nuance that automated systems struggle to replicate. He added that subject-matter specialists help safeguard articles from factual errors. This collaborative expertise, he said, keeps Wikipedia ahead.
Wales also cited research indicating the scale of the hallucination problem in modern AI systems. A 2025 study by OpenAI found that even advanced models can show hallucination rates as high as 79% in some tests, and he warned that the issue tends to worsen as topics become more specialised. For now, Wales maintained, human-curated knowledge remains far more dependable. He concluded that AI still has significant ground to cover before posing a real threat to Wikipedia.