Summary: AI tools now play a confusing dual role online, both fabricating convincing misinformation and helping detect it. Understanding this tension matters because the same technology that exploits our cognitive biases can also build our defenses against false content.
Think about what you could create on your phone three and a half years ago. Now think about what you can create today. That gap is not just about better photos. It is about the difference between obviously fake content and material that is almost impossible to verify by eye. And that changes everything about how lies spread online.
What AI Misinformation Actually Looks Like
Ethan Mollick, a professor of management at the Wharton School of the University of Pennsylvania, came up with a clever way to track AI image progress over time. His benchmark prompt is simple: an otter on a plane using wifi. He first ran that prompt around November 2022, then again in August 2024. The difference between the two results tells you everything you need to know about how fast this technology is moving.
But misinformation is not just images. AI models can generate persuasive text in bulk, clone human voices, and produce deepfake video convincing enough to fool a casual viewer. The barrier to producing false content at scale has essentially dropped to zero. You no longer need a team, a budget, or even much time.
Why It Matters: The Trust Problem
A scoping review published in September 2025 in AI & Society analyzed 24 empirical studies on this exact topic. The review found that large language models can generate highly convincing misinformation that specifically exploits cognitive biases and the ideological leanings of target audiences.
That last part is what should make you pause. The AI is not just making random false claims. It is tailoring content to fit what you already want to believe. Exposure to this kind of AI-generated misinformation was found to reduce trust and influence decision-making. So the damage goes beyond believing a single false post. It chips away at your willingness to trust anything at all.
The Detection Side of the Coin
But here is where it gets complicated. The same scoping review found that LLMs also demonstrate the ability to detect false claims and enhance users' resistance to misinformation. The technology that creates the problem is also part of the solution. That is an uncomfortable but important reality.
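To make the detection side concrete, here is a minimal sketch of what LLM-based claim checking can look like. It assumes the OpenAI Python SDK; the model name, prompt wording, and output labels are illustrative choices, not anything the review prescribes, and the model's answer should be treated as a lead to verify, not a verdict.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assess_claim(claim: str) -> str:
    """Ask an LLM to triage a claim. The output is a starting point, not a ruling."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a fact-checking assistant. Label the user's claim "
                    "'likely accurate', 'likely false', or 'unverifiable', then "
                    "briefly say what evidence would settle it."
                ),
            },
            {"role": "user", "content": claim},
        ],
        temperature=0,  # keep the classification as stable as possible
    )
    return response.choices[0].message.content

print(assess_claim("Drinking lemon water cures the flu."))
```

Keep in mind that the model can itself be confidently wrong, which is exactly the tension this section describes.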
How Misinformation Gets Stopped
One strategy that showed promise in the research is personalized corrections. Instead of showing everyone the same generic fact-check, the idea is to tailor the correction to the specific person who encountered the false claim. The review found this approach effective as a mitigation strategy.
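The review describes personalized correction as a strategy, not an implementation, so what follows is only one plausible sketch of how it might work: prompt an LLM with the false claim, the accurate facts, and a short free-text profile of the reader, then ask it to frame the correction accordingly. The function name, prompt, and profile format here are assumptions, again using the OpenAI Python SDK.

```python
from openai import OpenAI

client = OpenAI()

def personalized_correction(false_claim: str, facts: str, reader_profile: str) -> str:
    """Rephrase a generic fact-check for one specific reader.

    reader_profile is a short description of the reader (interests,
    prior beliefs) that the prompt uses to frame the correction.
    """
    prompt = (
        f"A reader described as: {reader_profile}\n"
        f"encountered this false claim: {false_claim}\n"
        f"The accurate information is: {facts}\n"
        "Write a brief, respectful correction framed in terms this reader "
        "is likely to find relevant. Do not condescend or moralize."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The design choice that matters is the reader_profile input: the same underlying fact-check gets reframed per person, which is what separates this approach from a generic correction shown to everyone.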
The catch? The review found that safeguards were applied inconsistently. So the tools exist, but whether they get used properly is a completely different question.
Real-World Impact on Social Media
With the right tools, you can manufacture in hours or even minutes a video that would previously have taken a creative team days. That speed means bad actors can test dozens of variations of a false narrative and see which one gains traction. The capability to flood platforms with convincing fakes is clearly there.
The real challenge is not that AI makes lying possible. People have lied forever. The challenge is that AI makes lying scalable, fast, and personalized in ways that match what any individual viewer is most likely to accept.
So what do you do with this information? The most practical step right now is slowing down. When a piece of content perfectly confirms everything you already believe about a topic, that is exactly the moment to pause and check. Have you ever shared something on social media that you later found out was false, and if so, what made you realize it?