AI Summitry — Growth Beats Safety
Back in 2023, AI spooked governments. The technology was moving fast. Private companies built it, and countries became determined to regain control.
The UK launched an AI summit prioritizing safety. At Bletchley Park, leaders talked about testing, evaluation, and shared research. They encouraged the creation of AI safety institutes.
Today, that world has vanished. At this year’s New Delhi AI Impact Summit, governments treated AI less as a hazard to contain and more as an economic asset to win. Speaker after speaker emphasized affordable access to computing, investment in skills, and AI’s potential benefits.
Where the Bletchley conclusions repeat “risk” and “safety” and spell out threats tied to frontier systems, the Delhi declaration does not use “risk” or “safety” at all. It mentions that “potential security aspects remain important,” then steers the response toward voluntary measures, technical tools, and shared guidance. The core promise is growth and diffusion — not restraint.
Delhi represents the culmination of a gradual shift. After the UK summit, Seoul hosted an AI summit in 2024, which moved beyond frontier labs and into how countries use AI across the economy. Then came a Paris summit, where the branding switched from “safety” to “action,” and the main stage filled with spending plans and rollout targets.
Europe arrived in Paris with fresh announcements — the Commission’s InvestAI plan and France’s €109 billion commitment — and the summit became a contest over who could fund compute and push AI into industry. The US came with a message: scale fast, spend big, and avoid constraints, later putting a headline number on that approach through Project Stargate and its $500 billion ambition. China, meanwhile, raced ahead on industrial deployment with heavy state backing.
Paris showed the limits of a more prescriptive approach: the US and the UK refused to sign the summit statement, arguing it would slow innovation and did not do enough on security. US Vice President JD Vance criticized Europe’s “excessive regulation” of technology and warned against cooperating with China.
In New Delhi, some 91 countries endorsed the concluding declaration, including the US, UK, EU, Russia, and China. This breadth is by design. The Delhi declaration is built around “commons,” playbooks, and guiding principles that do not create hard expectations around testing, disclosure, or liability. The bar is low at a moment when AI is being deployed at breakneck speed.
Leaders frame AI as a productivity engine and as proof that regulation should stay light. The evidence remains disputed. Some forecasters have found that AI spending hasn’t been as big a driver of the US economy as was previously assumed. Other analysts argue that economic data doesn’t neatly capture the effects of huge AI spending.
For Europe, the new emphasis on growth over safety represents a challenge. Unlike the Brussels Effect that spread previous European digital regulations around the globe, few countries are rushing to copy the continent’s AI Act. Europe is behind on AI adoption and capacity. Only 20% of EU enterprises used AI in 2025; in the US, that number is closer to 80%. Europe’s share of global AI compute is below 5%, far behind the United States and China.
Europe’s response has three parts: simplify rules, fund AI Factories/Gigafactories, and push adoption in industry and the public sector. The intent is to reduce compliance drag, expand compute access, and speed deployment across the single market. But it still doesn’t fully shift the cost-and-scale equation. Power remains expensive, leading chips are imported, and access can fragment across sites and tools. Without stronger demand pull — especially procurement and reference customers — more compute won’t automatically produce European winners.
Switzerland will host the 2027 AI Impact Summit in Geneva, with the UAE set as partner for 2028. The summit circuit is now organized around growth and competition. The agenda will be shaped by the countries that can show up with concrete offers — access to compute, training programs, financing, and deployments that other governments can replicate.
Dr. Anda Bologa is a senior researcher in the Tech Policy Program at the Center for European Policy Analysis (CEPA).
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.