Anthropic Launches Institute to Examine AI’s Impact on Jobs, Security, and Society
As AI companies race to build the future, Anthropic has assembled a team to examine whether that future is what we actually want.
On March 11, Anthropic introduced “The Anthropic Institute”, a new research arm focused on confronting the societal challenges AI poses or will pose. Co-founder Jack Clark will lead the institute and has taken on a new role as Head of Public Benefit.
“It has an interdisciplinary staff of machine learning engineers, economists, and social scientists, bringing together and expanding three of Anthropic’s research teams,” Anthropic explained in its blog.
According to the announcement, the institute brings together three existing teams under one roof:
- Frontier Red Team: Stress-tests AI systems to understand the limits of their capabilities.
- Societal Impacts: Studies how AI is actually used in the real world.
- Economic Research: Tracks AI’s impact on jobs and the broader economy.
In addition to this launch, Anthropic announced the expansion of its Public Policy team. Sarah Heck, previously Head of External Affairs, has been appointed Head of Public Policy. The company also revealed that it will open its first Washington, D.C., office this spring.
Why now?
Anthropic’s announcement is unusually transparent about the urgency driving this move.
“In the five years since Anthropic began, AI progress has moved incredibly quickly,” the company wrote in its official blog post.
“It took us two years to release our first commercial model, and just three more to develop models that can discover severe cybersecurity vulnerabilities, take on a wide range of real work, and even begin to accelerate the pace of AI development itself.”
The company says it expects “far more dramatic progress over the next two years” and believes the window to prepare society is narrowing fast.
They further elaborated that the Institute intends to tackle questions ranging from “How will powerful AI reshape jobs and economies?” and “What threats will it magnify or introduce?” to “If the recursive self-improvement of AI systems does begin to occur, who in the world should be made aware, and how should these systems be governed?”
Notably, Anthropic had previously warned the White House that the window to act on AI security is closing, projecting that systems with Nobel Prize-level intellect could emerge as soon as 2026. And just weeks ago, Mrinank Sharma, a former senior AI safety leader at Anthropic, resigned, warning that “the world is in peril”.
The institute appears to be a response to those warnings, assembling a team that asks the uncomfortable questions out loud rather than keeping them buried in internal memos.
Who’s running the Institute?
So far, the company has made a few impressive hires to the founding team. They include:
Matt Botvinick, a Resident Fellow at Yale Law School, is a former Senior Director of Research at Google DeepMind and a former professor of neural computation at Princeton. He leads the Institute’s work on AI and the rule of law.
Anton Korinek is joining the company while on leave from the University of Virginia. He joins the Economic Research team and will lead the study of “how transformative AI could reshape the very nature of economic activity”.
Zoë Hitzig, who previously studied AI’s social and economic impacts at OpenAI, is joining to connect Anthropic’s economics work to model training and development.

This is clearly not a conventional safety team; it is closer to a policy intelligence unit.
What makes it different?
Anthropic says the Institute is different from standard corporate research. In the company’s words, it is “a two-way street.”
“The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess,” Anthropic wrote in its blog.
The Institute will not keep its findings for internal use only. It will also engage directly with workers facing job loss, industries being disrupted, and other communities that feel the future is advancing without them and lack a clear strategy to respond.
“What we learn will inform what the Institute studies, and how our company as a whole chooses to act,” Anthropic writes.
The Institute is incubating new teams with work already underway. It is also hiring a small team of analysts to disseminate its research to the world.
This level of transparency is exactly what critics of the AI industry have long demanded. Cross-company safety testing between Anthropic and OpenAI last year showed how rare genuine accountability efforts still are. Whether the Institute delivers on its mandate will depend on one thing: whether its findings are allowed to change what Anthropic actually does.
Also read: Microsoft has stepped in to support Anthropic after the Pentagon labeled the AI firm a “supply chain risk,” escalating a dispute over whether AI should be used for mass surveillance or autonomous weapons.
The post Anthropic Launches Institute to Examine AI’s Impact on Jobs, Security, and Society appeared first on eWEEK.