AI can’t kill Wikipedia

Since AI can autogenerate information, will it eventually replace Wikipedia as the web's main source of information?

The wheels are in motion. AI is spreading, fast. Hypothetically, prompting the machine will become the new way to do everything, from writing social-media posts and quarterly presentations to picking a travel destination or choosing what to eat. For those invested in the future of Wikipedia, the grand question is: will AI replace traditional human editors on Wikipedia?

Let’s start with an obvious statement: AI technologies can, and probably will, produce a complete AI-driven encyclopedia, one that may well be more exhaustive and better self-managed than Wikipedia. When that happens, the next question will be: which is better, Wikipedia’s human version of knowledge or AI’s?

In the field of information, accountability matters to us humans: it enables trust (“Who gave you that info? How do you know that?”). All major news organizations now have an AI division leading experiments. It appears that AI can do most of the content-production work, but it has two major issues:

  1. It lacks a human touch.
  2. It has the potential to make up information to patch stories together.

So whatever information the machine outputs, humans have to verify it and “humanize” it before broadcasting it.

AI faces many legal issues regarding the reuse of copyrighted content. Attributing sources will probably become obligatory in AI design, so that humans can verify that AI output is accurate and legally reusable. We already see an example of this with Google’s AI snippets shown on top of search results. There, Google does a poor job with sources, using commercial websites and primary sources to generate the snippets. AI seems to be programmed to fetch and rephrase information found on generic websites; it does not recreate that information through research and synthesis.

AI needs human-generated sources to build its artificial knowledge; otherwise it just runs an infinite loop on its own computer-generated knowledge, which leads to the distribution of distorted and erroneous information. The LLMs behind AI’s capabilities often draw their knowledge from Wikipedia. So there is absolutely no point in letting AI create content on Wikipedia: we would be killing a human-generated source that is necessary to avoid infinite loops of false information. Wikipedia already played a central role in the search-based web, and it will play an even bigger role in the AI-based web.

So even if Wikipedia faced competition from a newer, AI-fueled version of itself, the latter would still depend on the former to remain accurate. Human patrolling of information will become even more important as the potential for errors to spiral out of control multiplies.

On Wikipedia’s end, new metrics segments need to be developed so that it can keep promoting itself as a major source of information:

  1. The “spider” value in pageview stats should be more detailed. Technically, spiders are not humans, so there is no privacy issue in releasing the names of the spiders that crawl any Wikipedia page.
  2. A new metrics segment should track API calls to a page, because a lot of traffic will come from there.
  3. Finally, every time an AI system uses a specific source to generate content, it should attribute that source and ping it to report how much of the source was used in the process. When that happens, Wikipedia’s stats should include those pings.
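In fact, the human/spider split already exists in a coarse form: the public Wikimedia Pageviews REST API exposes an `agent` dimension with `user`, `spider`, and `automated` values. A minimal sketch of comparing them, assuming an illustrative article title and date range:

```python
# Sketch: comparing human vs. spider traffic for one Wikipedia article via
# the public Wikimedia Pageviews REST API. The endpoint and its "agent"
# values (user, spider, automated) are real; the article title and the
# date range below are just illustrative.
import json
import urllib.request

API = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def pageviews_url(article, agent="user", start="20240101", end="20240131"):
    """Build a per-article pageviews URL (en.wikipedia, daily granularity)."""
    return f"{API}/en.wikipedia/all-access/{agent}/{article}/daily/{start}/{end}"

def total_views(article, agent):
    """Fetch the daily counts and sum them (requires network access)."""
    req = urllib.request.Request(
        pageviews_url(article, agent),
        headers={"User-Agent": "pageviews-sketch/0.1"},  # the API requires a UA
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return sum(item["views"] for item in data["items"])

# Usage (network): total_views("Alan_Turing", "user") vs.
# total_views("Alan_Turing", "spider") shows the split already being tracked.
```

What this API does not reveal is *which* spiders are behind the `spider` bucket, which is exactly the extra detail suggested in point 1 above.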

Not letting AI produce Wikipedia content does not mean that Wikipedia should not integrate AI features. For example:

  • An AI tool on Wikipedia could identify missing information on a page and suggest that a user add it.
  • Users could submit a link to Wikipedia, and, after analyzing the content of the article, an AI tool could suggest what to edit on Wikipedia based on that source.
  • In editorial conflicts, AI could analyze each argument for NPOV (neutral point of view) to balance out the debates (please!).
  • Image generation is also an interesting feature for Wikipedia, which struggles with image copyrights in general.
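The second idea can be sketched with existing plumbing. The MediaWiki action API’s `extracts` query (real) can fetch an article’s text; the model call itself is the missing piece, so below it is reduced to a hypothetical prompt builder rather than a real LLM API:

```python
# Sketch: pairing a Wikipedia article with a user-submitted source so an LLM
# can suggest edits. fetch_extract() uses the real MediaWiki action API;
# build_prompt() is a hypothetical placeholder for the LLM step.
import json
import urllib.parse
import urllib.request

def fetch_extract(title):
    """Fetch the plain-text intro of an en.wikipedia article (network)."""
    params = urllib.parse.urlencode({
        "action": "query", "prop": "extracts", "explaintext": 1,
        "exintro": 1, "titles": title, "format": "json",
    })
    req = urllib.request.Request(
        f"https://en.wikipedia.org/w/api.php?{params}",
        headers={"User-Agent": "edit-suggestion-sketch/0.1"},
    )
    with urllib.request.urlopen(req) as resp:
        pages = json.load(resp)["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

def build_prompt(article_text, source_text):
    """Assemble the instruction an LLM would receive (no model call here)."""
    return (
        "Compare the Wikipedia article below with the cited source. List "
        "factual additions the source supports, each with a supporting "
        "quote from the source.\n\n"
        f"ARTICLE:\n{article_text}\n\nSOURCE:\n{source_text}"
    )
```

Crucially, the human editor would still review the suggestions; the sketch only prepares material, in line with the argument above that AI should not write Wikipedia content itself.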

Wikipedia’s newcomer tool already assists new users in making simple edits on theme-based pages. Powering this tool with AI capabilities like the ones suggested above could be Wikipedia’s easy road to AI empowerment.