
Why AI Slop Is Flooding the Web

Author: Priya Sharma | Research: James Whitfield | Edit: Michael Brennan | Visual: Anna Kowalski
[Image: Abstract digital network with glowing nodes representing artificial intelligence and data flowing across a dark background.]

Just a few years ago, the internet felt like a place made by people, for people. Now, for the first time on record, bots account for more web traffic than humans do. The web you browse every day is no longer mostly yours.

How bots took over web traffic

The numbers are striking. ClaudeBot, run by Anthropic, alone accounts for 13 per cent of all web traffic. OpenAI's ChatGPT-User bot adds another 6 per cent. That is nearly one in every five page visits coming from just two AI systems scraping, reading, and processing the open web.

And not all of it is harmless. Well over half of all bot traffic comes from malicious bots. These are not just search engine crawlers doing their job. They are scrapers and systems designed to vacuum up personal data for purposes the original creators never intended.

Content creators are starting to fight back. Some are using a technique called 'AI poisoning,' embedding hidden text or manipulated data into their work that tricks AI models into misreading it. The legal system is getting involved too. Disney and Universal recently sued AI company Midjourney, arguing its image generator plagiarises characters from Star Wars and Despicable Me.

The rise of AI slop on social media

The bot problem is not just about traffic stats. It is about what you actually see when you open your phone.

The results are already visible. A fake AI-generated image of two South Asian children on Facebook received nearly one million likes and heart emojis. The image was full of telltale signs it was made with AI, yet it went viral anyway. The children in it do not exist, but a million people engaged with the post and most likely had no idea.

When fake content becomes the norm

A 20-year-old student from Paris, Théodore Cazals, started an X account called 'Insane AI Slop' to document the flood of synthetic content spreading across platforms. It grew to over 133,000 followers. The account exists because the problem is now too large to ignore.

People are sharing AI-generated images, stories, and posts without realising it, or sometimes without caring. The term 'slop' has emerged to describe this content: low-quality synthetic material that looks plausible at a glance but carries no real meaning and no human intent behind it.

What this means for trust online

The disinformation angle makes this even more serious. In past election cycles, state-backed influence campaigns on social media required entire teams of people to generate misleading content. Now, one person with one computer can produce disinformation on a remarkable scale, as experiments with politically biased chatbots have demonstrated.

The tools have democratised manipulation.

So what happens when you can no longer trust that a photo is real, a comment came from a person, or an article was written by someone who actually exists? The web is not broken in the traditional sense. It still works. It still loads pages and serves ads. But the thing that made it valuable, the human layer, is getting buried under machine output.

The real question is not whether AI content will keep growing. It will. The question is whether anyone will bother looking for the real stuff underneath it, or whether we will simply get comfortable living inside a synthetic version of the internet. What was the last piece of content you saw online that you are completely sure was made by a real person?
