Got personal financial, medical data you’d like to keep private? Good luck.
AI and society expert warns new agentic releases will increase the odds that cybercriminals and hackers can breach secure systems
Got debt or perhaps medical history you’d prefer colleagues, potential employers, neighbors, and friends not know about? An embarrassing email? According to Tyler Cowen, the odds will rise in the next year that more formerly secure digital systems could be breached.
The George Mason University economist, who has written and spoken extensively on AI, said new agentic AI models may help cybercriminals and amateur coders alike bypass online security — exposing millions of people’s personal data.
“If you have things you’ve said or done that are somewhere hidden but available that you’ll regret, get ready to deal with it,” Cowen, Ph.D. ’87, said at a recent campus event hosted by the Berkman Klein Center for Internet and Society. You can hope your information isn’t part of a targeted cache, but, “It’s possible that in the medium term, just everything comes out.”
According to Cowen, the Holbert L. Harris Chair of Economics at George Mason and chairman of the university’s Mercatus Center, AI companies like Anthropic and OpenAI are on the brink of releasing models with previously unseen coding abilities and more independent, agentic capabilities that will overwhelm the older security software on which many companies rely.
“If you have things you’ve said or done that are somewhere hidden but available that you’ll regret, get ready to deal with it.”
Tyler Cowen
“I believe that it does give the person that controls it the ability to hack into virtually all human systems, no matter how safe or protected we might have thought they were. And you can do this at not too great an investment of time, energy, and money,” Cowen said.
Earlier this month, a preview of Anthropic’s model known as Claude Mythos was released to tech partners, while OpenAI has made a similar move unveiling its GPT-5.4.
Because of their advanced capabilities, Cowen said, giving partner companies the opportunity to test the tech will allow those with access to better prepare their cyber defenses.
“It may accelerate the elevation of these, say, 50 institutions that are quite protected,” he said. “What you’ve done on Amazon and Facebook is the safest, because they know to invest in protection ex ante, and they have the resources to do so.”
But, he said, that doesn’t mean these firms will be able to anticipate every vulnerability. Even the AI companies themselves could be at risk of unforeseen breaches.
“Anthropic and OpenAI will protect themselves against, say, external hacks, but internally, any institution is vulnerable because you hire employees. There are not security clearances of the sort you would have at the Pentagon, including at top AI firms,” Cowen said.
Moreover, Cowen warns, government agencies will likely be targets — especially at lower levels.
“I think our national security establishment has been pretty clued in on this for a while. It doesn’t mean they’ll have perfect defenses, but they will be relatively prepared,” he said. “What will be embarrassing is all the smaller parts of our government … all their deliberations, emails to each other, whatever they have will all come out, and it will just be very embarrassing, and those parts of government will lose their credibility.”
To prepare for the new models, Cowen advises that government implement regulations creating a system of laws and penalties governing AI agents. Those would include registration, the ability to turn agents off, and a mandate that they be connected to cloud computing to increase transparency.
“Ideally, I would like to see AI agents capitalized, and the kind of minimum capitalization required as we do for banks and many other financial institutions,” Cowen said. “But we are going to have what you might call anonymous AI agents, which are not owned or traceable to anyone or any institution. And how those will be governed is a big challenge.”
And in the most ideal world, Cowen said, progress toward addressing the myriad challenges heading our way would mean creating new state capacity for AI.
“Our government is very far from being able to do that … Let’s get the best from it we can,” he said. “We will only get it right by trial and error and making mistakes along the way.”