Reddit Wants You to Prove You Are Human
Reddit on Thursday (March 26) began rolling out human verification requirements for accounts that display suspicious behavior, marking a significant shift in how major platforms respond to the accelerating spread of artificial intelligence (AI)-generated content.
According to a Thursday report by TechCrunch, accounts flagged by automated signals, including posting speed and activity patterns, must now confirm they are operated by a real person.
The announcement followed the collapse of Digg, a Reddit competitor that shut down earlier this month after failing to contain a bot infestation.
From Content Moderation to Identity Layer
Reddit’s rollout is not a blanket verification requirement. As reported by TechCrunch, verification triggers only when account-level signals suggest automated behavior, and the company says it is taking a privacy-first approach that confirms a person exists behind the account without revealing who that person is.
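Reddit has not published its detection logic, and the thresholds below are purely hypothetical. As a rough illustration only, a rate-based trigger of the kind TechCrunch describes, where an account is flagged when its posting speed or activity pattern looks automated, might resemble:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    posts_per_hour: float      # recent posting rate
    median_gap_seconds: float  # typical time between consecutive posts
    account_age_days: int      # how long the account has existed

def needs_human_verification(s: AccountSignals,
                             max_posts_per_hour: float = 30.0,
                             min_gap_seconds: float = 5.0,
                             min_age_days: int = 7) -> bool:
    """Flag an account for human verification when its activity
    pattern looks automated. All thresholds are illustrative
    assumptions, not Reddit's actual values."""
    # Sustained high-volume posting is a classic bot signal.
    if s.posts_per_hour > max_posts_per_hour:
        return True
    # Posts spaced only seconds apart suggest scripted activity.
    if s.median_gap_seconds < min_gap_seconds:
        return True
    # Brand-new accounts posting heavily are treated as suspect.
    if s.account_age_days < min_age_days and s.posts_per_hour > 10:
        return True
    return False
```

An account clearing every check would proceed normally; a flagged one would be routed to a verification step such as a passkey challenge, consistent with the privacy-first framing above.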
To verify, Reddit said it will support authentication methods such as passkeys and other verification tools designed to confirm human users while protecting privacy. In some regions, verification may intersect with local age and identity requirements.
Platforms have long prized anonymity as a design principle, and as reported by Mashable, mandatory ID verification has historically drawn resistance from communities that depend on pseudonymous participation.
Reddit CEO Steve Huffman addressed the trade-off directly in his announcement post, stating that the goal is to confirm a person exists, not to unmask them. Verification friction, however, cuts both ways: it deters bots and bad actors, but it also deters the real users platforms need for growth.
Bot activity on Reddit has intensified beyond simple spam. As Wired reported in February, AI bots now drive significant web traffic, functioning not as a nuisance but as a structural feature of the post-ChatGPT web. On Reddit specifically, bots are deployed to manipulate narratives, astroturf on behalf of companies, repost links at scale and, in documented cases, generate questions designed to extract AI training data from the platform’s organic user responses.
The Cost of Getting Identity Wrong
The operational and financial costs of weak identity systems extend well beyond social media. According to PYMNTS Intelligence data compiled in collaboration with Trulioo, companies lose approximately 3.1% of annual revenue to identity gaps, totaling roughly $95 billion in losses. The report found that 59% of firms face bot-driven fraud as an active threat, while 90% contend with harmful bot traffic in some form. For enterprises, the downstream damage compounds quickly: inflated engagement metrics mislead product and marketing decisions, fraudulent accounts corrupt behavioral data, and synthetic sentiment distorts the signals executives and investors use to gauge platform health.
Proof of Personhood as Internet Infrastructure
The pressure Reddit is responding to reflects a structural shift in how the internet functions. According to CNBC, the line between bot behavior and human behavior online is blurring, with AI agents now mimicking conversational patterns, posting cadences and engagement habits that previously served as exclusive markers of human activity. Engagement metrics, long the currency of platform value, are losing reliability as a measure of real human interest.
The timeline for majority-bot internet traffic is tighter than most anticipated. As reported by TechCrunch, Cloudflare CEO Matthew Prince told SXSW this month that bot traffic will exceed human traffic online by 2027, driven by AI agents that visit thousands of websites to fulfill a single user query. Before the generative AI era, bots accounted for roughly 20% of internet traffic, Prince said. That share is now growing at a rate with no precedent in prior platform cycles.
For platforms built on anonymity, like Reddit, this creates a delicate balance. The internet has long operated on the assumption that users can participate without revealing their real-world identities, but as AI agents scale, that model is beginning to fracture. The tension is fueling growing demand for proof of personhood, a shift that comes as platforms also tighten control over how external AI agents access their systems.
The post Reddit Wants You to Prove You Are Human appeared first on PYMNTS.com.