We’re entering the era of ‘AI unless proven otherwise’
On the way to work, you see a TikTok video of the president admitting to a crime. In the elevator, you hear your favorite band, but the song is completely unfamiliar. At your desk, you open an email from an executive in another department. It contains valid sales information and discusses a relevant legal issue, but the wording sounds oddly wooden. After lunch, the CEO sends all managers a link to a new app she had casually proposed just a few days earlier. Later, you interview a job candidate via Zoom, but the person looks different from his LinkedIn picture.
Any or all of these things—the video, the song, the email, the CEO’s app, the candidate—could have been generated by AI tools or agents. But our epistemic defaults, I’d argue, are still set to assume these things are human-created unless available information proves otherwise. We have not yet entered a “zero-trust” paradigm where content is “generated unless proven authentic.”
Instead, we find ourselves in an anxious middle ground. The question now arises whenever we encounter a new image, video, or piece of information: Is this AI-generated? Increasingly, the answer will be yes. We are close enough to that zero-trust reality that we can see it approaching on the horizon.
Beyond deepfakes
Deepfakes were just the beginning. AI-generated video designed to mislead or incite was, not so long ago, seen as a novelty. Now it’s common in everything from revenge porn to politics. AI-generated music has gone mainstream. Last year, a fully generated country song called “Walk My Walk” by Breaking Rust reached No. 1 on the Billboard Country Digital Song Sales chart in the U.S. An AI-generated TV ad, made with Google’s Veo 3, Gemini, and ChatGPT, ran during Game 3 of the NBA Finals last year.
According to a Gallup Q3 2025 report, 45% of U.S. employees now use AI at work. In a similar vein, the email deliverability firm ZeroBounce found in a September 2025 survey that one in four workers use AI daily to draft emails, and that number has likely increased. The same survey found that a quarter of workers suspect their performance review was written using AI.
By most accounts, the use of AI agents in corporate workflows is still in the early innings. But AI companies say we’re moving toward a future in which agents from different departments collaborate to complete back-office tasks, such as paying suppliers, or to compile decision-support materials, like a business case for entering a new market or making an acquisition.
It’s already likely that AI agents, including deep research or business intelligence tools, play some role in assembling reports managers receive at work. Amazon’s AWS says its customers have used AI agents to save more than 1 million hours of manual effort. McKinsey predicts that by 2030 the use of agents and robots could create about $2.9 trillion in value in the U.S. if organizations redesign their workflows for “people, agents, and robots working together.” (Of course, McKinsey wants to help them do that.)
Depending on her technical savvy, the CEO mentioned above may have mocked up a new app using Replit or Bolt. These so-called vibe-coding tools can generate a credible proof of concept in a weekend. She may then have handed it off to software engineering, whose developers might use Claude Code, Codex, or Cursor to turn the idea into a production-ready app that connects to company databases and third-party tools. A late-2025 Stack Overflow study claims that about 84% of developers now use, or plan to use, AI coding tools, with roughly half already using them daily.
When applying for remote jobs, more candidates are trying to improve their odds with AI tools that enhance their face or voice or generate answers in real time during interviews. The voice authentication firm Pindrop says that in its own video interviews it regularly encounters applicants using deepfake software and other generative AI tools to try to land a job. Gartner predicts that by 2028 a quarter of all remote applicants will be AI-generated.
Deepfakes once threatened to distort reality; now the distortion is structural, embedded in the systems that produce culture, manage companies, and decide who gets hired.
AI, weaponized
But some of these applicants have a different goal in mind, which points to scenarios where generative AI tools aren’t just used as timesavers but as weapons. AI can help conceal the real identities of job candidates who are trying to extract sensitive company information or, worse, secure a role in order to install ransomware.
Scammers are also increasingly using advanced face- and voice-swapping tools for outright fraud. In 2024, a team of scammers posed as top executives of the engineering firm Arup during a video call using sophisticated AI tools. They tricked a finance employee into sending them $25 million.
We sense that our epistemic defaults—our AI slop detectors, if you will—may lag behind what technology can already do. And that suspicion is correct. The holy-shit moments accompanying new AI breakthroughs now arrive with striking regularity.
Recently, some users and journalists concluded that the OpenClaw agent platform had become sentient after watching agents complete tasks independently, deploy humans to finish assignments, and then gather in their own online forum to discuss it. At the same time, many ChatGPT users are grieving the forthcoming loss of GPT-4o because they developed a personal attachment to the model. New Chinese video generation systems such as ByteDance’s Seedance 2.0 and Kuaishou’s Kling 3.0 are producing highly controllable video that’s increasingly difficult to distinguish from footage captured by a camera.
The next tech wave
Social networks, in many ways, act as intermediaries—providing a wide-angle lens through which a person sees the world. To increase engagement and ad views, Facebook distorted that lens, to the detriment of both democracy and children. This week, Facebook-parent Meta is defending itself in a Los Angeles courtroom after years of deploying design features, including endless scroll, that critics say proved harmfully addictive for younger users.
That was the last tech revolution, and it depended on user-made content. But with AI, the web can generate its own content on demand. This may concentrate an immense amount of power in the hands of a few AI companies, perhaps even more than social media companies ever held.
With so much money and influence at stake, the question is whether AI companies will do what firms like Meta did not and draw a clear line between human-created and machine-generated content. I seriously doubt it, especially with a billionaire class and a Trump administration doing everything possible to stifle legislation that might protect AI consumers.
If that’s the case, then maybe taking a zero-trust approach to everything that appears on our screens is the only rational path forward.