Does AI Need Humans to Evolve?

AI & Society
5 min read
Author

Igor Sadoune

Published

April 25, 2025

There is an interesting phenomenon that threatens the very foundation of AI systems: mode collapse. This technical term describes a situation where generative AI (GenAI) models begin to produce increasingly homogeneous outputs, effectively eating their own tail like the mythical serpent Ouroboros. Mode collapse is not new; it has been well known since the early days of adversarial learning (circa 2014), but it is now more relevant than ever.

AI and Darwinism

If we can think of AI as a species, at least in a sci-fi sense, then why not explain it using Darwinism? In this case, we can think of human training data as a gene pool. Iterated evolution over this pool narrows diversity, as dominant traits prevail and weaker ones are essentially wiped out by the environment (user engagement, corporate objectives, and so on). During training, AI models amplify the dominant patterns in the data while neglecting the less common ones. The result is output that becomes increasingly derivative, predictable, and homogeneous.

To evolve properly, AI systems need a continuous supply of fresh human-crafted content (not like the image thumbnail of this post 😳) as training data to diversify the gene pool and thus avoid inbreeding and extinction. Indeed, AI systems lack a true "mutation" mechanism that would naturally introduce novelty. In biological evolution, random mutations create new traits, but AI systems do not yet have an equivalent process for generating truly novel patterns without human input.
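To make the analogy concrete, here is a minimal toy simulation (a sketch in Python with NumPy; the pattern count and corpus size are invented for illustration, and a real training pipeline is vastly more complex). Each generation, a new "model" is fit by resampling a finite corpus from the previous generation's output distribution, mimicking genetic drift with no mutation step:

import numpy as np

rng = np.random.default_rng(0)

n_patterns = 50     # distinct "traits" in the initial human gene pool
corpus_size = 100   # documents sampled per generation
generations = 40

# Generation 0: a uniform pool where every pattern is equally represented.
freqs = np.full(n_patterns, 1.0 / n_patterns)

for gen in range(1, generations + 1):
    # "Train" the next model: re-estimate pattern frequencies from a
    # finite corpus sampled from the current model's outputs.
    corpus = rng.choice(n_patterns, size=corpus_size, p=freqs)
    counts = np.bincount(corpus, minlength=n_patterns)
    freqs = counts / corpus_size
    if gen % 10 == 0:
        surviving = np.count_nonzero(freqs)
        p = freqs[freqs > 0]
        entropy = -np.sum(p * np.log2(p))
        print(f"gen {gen:3d}: {surviving:2d} patterns survive, "
              f"entropy {entropy:.2f} bits")

# Once a pattern's frequency hits zero it can never be sampled again:
# without a mutation mechanism, lost diversity is lost for good.

Run it and the surviving-pattern count never increases, which is exactly the inbreeding dynamic described above.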

The Feedback Loop Problem

Large Language Model (LLM)-based AI enables the creation of varied and extensive content from a compact input: a prompt, or a series of refined prompts. It has never been easier for humans to create, to the point where not only have many content creators switched to GenAI, but new creators, empowered by AI tools, are flooding the market with content across social media, advertising, writing platforms, and more. It is hard to estimate how much of the internet consists of synthetic data, and any estimate would quickly become obsolete, but as of early 2025 we can reasonably speak of at least one-third. To put things in perspective, LLM chatbots (e.g., ChatGPT) only began to gain widespread popularity in late 2022.

As AI-generated content floods online spaces, newer models inevitably train on this synthetic data and begin to amplify their own patterns and quirks, creating a feedback loop that leads to:

  1. Semantic drift - Subtle shifts in how language is used and understood
  2. Information staleness - Recycling of outdated concepts without fresh perspectives
  3. Stylistic convergence - A narrowing range of expression and voice
  4. Factual degradation - The amplification of inaccuracies through repeated regurgitation
  5. Creative homogenization - A loss of diversity in artistic and creative outputs
  6. Predictable outputs - A tendency towards uniform and less innovative results
  7. Bias reinforcement - The perpetuation and amplification of existing biases in the training data

Note that the above list is non-exhaustive; it is simply what came to mind at the time of writing.
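To see the loop in miniature, here is a deliberately crude sketch (plain Python with NumPy; the "model" is just a one-dimensional Gaussian fit by maximum likelihood, and the sample size is invented for illustration, so this is a statistical caricature of an LLM, not an implementation of one). Each generation is trained exclusively on the previous generation's outputs:

import numpy as np

rng = np.random.default_rng(42)

sample_size = 30     # synthetic "documents" per generation
generations = 100

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=sample_size)

for gen in range(1, generations + 1):
    # Fit the next "model" to the previous generation's outputs
    # (maximum-likelihood mean and standard deviation)...
    mu, sigma = data.mean(), data.std()
    # ...then let that model generate its own synthetic corpus.
    data = rng.normal(loc=mu, scale=sigma, size=sample_size)
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mean {mu:+.3f}, std {sigma:.3f}")

# In expectation the MLE variance shrinks by a factor of (n-1)/n each
# generation, and sampling noise compounds, so the fitted distribution
# drifts and narrows: stylistic convergence and predictable outputs in
# their simplest possible form.

The standard deviation collapses toward zero while the mean wanders away from its original value, a toy version of the drift and homogenization listed above.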

The Grand Irony

There is a profound irony here that deserves contemplation. AI systems, developed at enormous expense to automate human creative and intellectual labor, now require human creative and intellectual labor to remain viable. The technology designed to replicate human output now needs that output as an essential resource.

In a development that borders on the surreal, major AI companies have begun hiring humans specifically to create original content for training their models. This is a fascinating economic inversion: technology once heralded as a replacement for human labor now depends on that very labor for its continued functionality. Companies such as OpenAI and Anthropic have launched initiatives to commission writers, artists, and other creators to produce high-quality original work.

A Permanent Symbiosis?

From my perspective, this dynamic makes sense. AI functions as a creativity amplifier for humans while removing financial and technological barriers to creative production, which paradoxically also makes generating fresh human-crafted content more accessible. Much depends on how AI is used: it is possible to boost productivity enormously while preserving human artistic or technical direction. Nevertheless, this is a fragile balance, since AI, even when used as a mere productivity tool, inevitably introduces its own biases into human work.

However, the discussion of the synthetic-to-human creation ratio may become irrelevant if this symbiosis does not turn out to be a permanent feature of the AI-human relationship. Whether it does likely depends on several critical factors, the main one being AI's ability to mutate and adapt single-handedly, introducing inherent synthetic novelty. Perhaps we have just defined a threshold for assessing whether AI is becoming sentient, but that deserves a discussion of its own. In the meantime, the machines, it seems, need us after all.

To Cite This Article

@misc{SadouneBlog2025a,
  title = {Does AI Need Humans to Evolve?},
  author = {Igor Sadoune},
  year = {2025},
  url = {https://sadoune.me/posts/mode_collapse/}
}