AI can now fake the videos we trust most. How to tell the difference—and how newsrooms can respond
The footage was real, verified, and delightful: a security camera clip of a coyote bouncing on a backyard trampoline in Los Angeles. Days after the video went viral, near-identical clips of kangaroos, bears, and rabbits began circulating too, all generated by AI. Millions shared them, believing they were seeing more animals caught on camera in hilariously candid moments.
It was an amusing mix-up, but it was also a warning.
AI-generated video tools have moved far beyond producing surreal or obviously manipulated clips. They are now convincingly imitating the formats we instinctively trust most: CCTV, dashcams, police bodycams, wildlife cameras, and handheld eyewitness footage. These are the clips that shape public understanding during protests, disasters, violence, and emergencies. And the fake ones are becoming indistinguishable from the real thing.
AI-generated realism has already entered the news cycle
At Storyful, we verify thousands of real-world videos for newsrooms and brands worldwide. This year we ran a test: we fed real breaking-news headlines from our own platform into one of the newest AI video models.
In seconds, we got clips that mimicked the texture and perspective of eyewitness reporting. These were not glossy AI experiments but news-like footage that could plausibly land in a newsroom inbox during a breaking story.
Side by side with the original real clips, even trained journalists needed to slow down and scrutinize the details.
Consider this example, inspired by a verified authentic video posted to social media in the wake of heavy monsoon rains in India:
Firefighters Save Man Clinging to Pole Amid Raging Indian Floods
A man was rescued on Tuesday, September 16, in India’s Uttarakhand state after spending more than four hours clinging to an electricity pole as deadly floodwaters raged around him, local media reported.
Real: [verified eyewitness video embed]
Now compare the fully synthetic version, created by prompting OpenAI’s video generator app Sora with the title of the first video:
Fake: [Sora-generated video embed]
This is no longer a theoretical future. It is happening right now.
Guardrails are already slipping. Tutorials circulate openly on Reddit explaining how to remove the watermark on videos created by one of the most popular AI video generators, OpenAI’s Sora. Restrictions on certain AI prompts, where they exist, can be bypassed, or models can be run locally without curbs on highly realistic content. And because these tools can create fake CCTV or disaster footage on demand, the question isn’t whether AI can generate convincing videos of things that never happened. It’s how far a convincing fake will spread before anyone checks it.
Why AI-generated videos feel believable
The most significant shift in AI-generated video is not just its appearance, but also its behavior.
Real eyewitness footage contains the rough edges that come with real life: a shaky hand, the camera pointed at the ground before the action begins, long stretches of nothing happening, imperfect angles, and missed details.
AI does not yet replicate these moments. It goes straight to the action, framed center-perfect, lit cleanly, and paced like a scene built for maximum impact. It offers the moment we expect to see, without the messy human lead-up that usually surrounds it.
The reason is simple. Most models are still trained heavily on cinematic material rather than chaotic, handheld user-generated content. They understand drama better than they understand reality. That gap is what allows verification teams to spot the difference—for now.
As models evolve and prompt-writing improves, these behavioral tells will fade. Training data for video foundation models increasingly includes shaky bystander videos alongside slick documentaries, and the models will learn to imitate the style, and the sense of realism, of both.
Public confidence is already eroding
The Reuters Digital News Report finds that 58% of global audiences fear they can no longer tell real from fake online. That fear used to apply mainly to politics and propaganda. Now it applies to harmless backyard videos.
This marks a deeper psychological shift. Once a viewer starts doubting everyday videos, they don’t toggle that skepticism on and off. If they question a dog rescue, they will question a protest. If they question a prank, they will question a war zone.
Trust doesn’t collapse in a single dramatic moment. It erodes drip by drip, through thousands of small uncertainties. And as AI-generated video becomes abundant, verified authentic footage becomes the scarce commodity.
How to tell when a video is AI-generated
AI detection tools can be a useful part of your workflow, but they are not a replacement for human verification. According to Storyful’s analysis, current tools achieve 65–75% accuracy under ideal conditions, and that accuracy drops below 50% within weeks of a new AI model’s release. These are the signals Storyful’s verification teams use daily, cues the public can learn to recognize quickly:
- AI starts at the climax. Real footage almost always includes dead time or fumbling before the action.
- Subjects sit perfectly in the center of the frame. Eyewitnesses rarely capture the chaos of breaking news like cinematographers.
- Motion is too smooth. Real user-generated content stutters, shakes, refocuses, and slips. (A rough way to quantify this cue appears in the sketch after this list.)
- Timestamps, signage, and license plates break down under scrutiny. AI often approximates these details instead of rendering them accurately.
- Disaster and wildlife clips look “too composed.” Real life is uncertain. AI often looks staged.
These cues won’t hold forever, but right now they offer critical protection.
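For newsrooms that want to turn the “motion is too smooth” cue into something measurable, here is a minimal sketch, assuming OpenCV and NumPy are installed, of one way to score camera shake with dense optical flow. This is an illustration constructed for this piece, not a Storyful tool, and the thresholds are deliberately absent: a low score is a weak signal, never proof of synthesis.

```python
# Minimal sketch: estimate global camera motion per frame with dense
# optical flow, then score how much that motion jitters frame to frame.
# Handheld eyewitness footage tends to show high-frequency shake; many
# AI-generated clips glide unnaturally. Assumes OpenCV (cv2) and NumPy.
import cv2
import numpy as np

def camera_jitter_score(path: str, max_frames: int = 300) -> float:
    cap = cv2.VideoCapture(path)
    prev_gray, motions = None, []
    while len(motions) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Dense optical flow between consecutive frames.
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            # Median flow approximates the camera's motion for this frame.
            motions.append(np.median(flow.reshape(-1, 2), axis=0))
        prev_gray = gray
    cap.release()
    if len(motions) < 3:
        return 0.0
    # Jitter = frame-to-frame change in camera motion; handheld video
    # scores noticeably higher than tripod or synthetic "glide" shots.
    jitter = np.diff(np.array(motions), axis=0)
    return float(np.linalg.norm(jitter, axis=1).mean())

# A suspiciously low score on footage presented as handheld is one cue
# among many; it should prompt closer scrutiny, not a verdict.
print(f"jitter: {camera_jitter_score('clip.mp4'):.3f}")
```

In practice, a score like this only tells an editor where to look harder; the human checks above, and the provenance work described below, still carry the weight.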
Authenticity is now an asset
Tech platforms can add more guardrails to their video-generation tools, regulators can update frameworks, detection tools can improve, and so can our own critical faculties. And as newsrooms help audiences navigate the morass of fakery, the most powerful way they can rebuild trust is to be transparent.
Audiences no longer trust “sources say.” They want to see how a journalist or a newsroom knows something is real.
More news organizations are adopting verification-forward formats, including BBC Verify and CBS News Confirmed, which integrate open-source and forensic checks into reporting, examining provenance, imagery, metadata patterns, and geolocation when relevant. Storyful Newswire equips all of our partners with these basic but essential details about every video on our platform.
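To make the most basic of those provenance checks concrete, here is a minimal sketch, assuming FFmpeg’s ffprobe is on the system, that dumps a video file’s container metadata for review. Metadata is easily stripped or forged, and fields like creation_time are not guaranteed to be present, so treat the output as a prompt for questions, not a verdict.

```python
# Minimal sketch: inspect a video's container metadata with ffprobe
# (part of FFmpeg). Useful first questions: does the creation time
# match the claimed event, and does the encoder string fit the
# claimed device? Requires ffprobe on the system PATH.
import json
import subprocess

def container_metadata(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

meta = container_metadata("clip.mp4")
tags = meta.get("format", {}).get("tags", {})
# Absent or implausible values are leads, not conclusions: uploads to
# social platforms routinely strip this information from real footage.
print("creation_time:", tags.get("creation_time", "absent"))
print("encoder:", tags.get("encoder", "absent"))
```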
This transparency is becoming the primary differentiator in an environment where AI-generated video is cheap, fast, and everywhere. The more AI-generated footage floods the ecosystem, the more credibility belongs to organizations that make showing their work a key part of the story.
The internet’s most unforgettable videos were never perfect. They were unpredictable, flawed, and human, the kinds of moments AI still struggles to imagine. AI-generated footage can now mimic the visual language of truth. But it cannot yet reproduce the randomness of real life. What’s at stake when it does isn’t simply misinformation. It’s the public’s ability to trust what it sees in the moments that matter most.
James Law is Editor in Chief of Storyful, a news agency specializing in the verification of breaking news and viral video that is used by 70 percent of the top 20 global newsrooms.