Wikipedia, the world’s largest online encyclopedia, has enacted a formal ban on AI-generated text across its 7.1 million articles. The decision, made by volunteer editors of the English-language platform, aims to preserve the site’s commitment to accurate, neutral, and factual content.
The new policy, which was voted on and implemented on March 20, eliminates any ambiguity regarding the use of AI-generated text on Wikipedia. While editors can still use AI tools for proofreading and translation, the creation of new content must be done by human hands.
The ban comes in response to a surge of low-quality, AI-generated articles that have flooded the platform since the launch of ChatGPT. These articles often contain inaccuracies, hallucinations, and promotional language, violating Wikipedia’s core principles.
“We started seeing a lot of obvious signs: articles with chatbot boilerplate like ‘This large language model’ left in the text, entirely nonexistent citations, and overuse of phrases like ‘rich cultural heritage,’” says Ilyas Lebleu, an AI research student based in France who edits Wikipedia under the username Chaotic Enby. Lebleu first proposed the ban and led the internal discussions that ultimately resulted in the new policy.
The decision to ban AI-generated text was not without controversy. Editors debated various use cases for AI, including the potential inclusion of generated article summaries. However, the consensus was that the benefits of AI did not outweigh the risks of misinformation and bias.
“One person can generate AI text in five seconds and post it on Wikipedia. We can spend an hour or longer verifying everything, especially with newer models that hallucinate less and cite sources we have to try to access and verify,” Lebleu explains. “That was a huge burden on our editors, especially since it was still a gray area.”
Opponents of the ban argued that AI could be used positively to speed up the writing and source-reviewing process. Some even managed to produce a few AI-generated articles that were rated as “Good.” However, these instances were rare and did not justify the widespread issues caused by AI-generated content.
Another argument was that a ban would be redundant, since AI-generated text that breaks the rules already violates existing policies. Lebleu countered that Wikipedia already imposes targeted restrictions on specific sources of risk, such as stricter rules for paid editors to ensure neutrality.
Finally, some editors argued that it is difficult to distinguish between AI-generated and human-written text. However, Lebleu notes that people are becoming adept at detecting AI through specific keywords and structural patterns. Wikipedia even has a dedicated page, “Signs of AI Writing,” to help editors identify AI-generated content.
The ban on AI-generated text sets a precedent for other platforms struggling with similar issues. As AI continues to evolve, the need for clear guidelines and policies to maintain the integrity of online content becomes increasingly important.
While the ban marks a significant step for Wikipedia, it does not mean the end of all AI on the platform. Editors can still leverage AI for tasks that do not involve content creation, ensuring that the technology remains a useful tool while maintaining the high standards of the encyclopedia.