You can’t recall AI like a defective drug

At a recent AI summit in New Delhi, Sam Altman warned that early versions of superintelligence could arrive by 2028, that AI could be weaponized to create novel pathogens, and that democratic societies need to act before they are overtaken by the technology they have built. These concerns are widely shared across the industry. Geoffrey Hinton, the Nobel laureate known as “the godfather of AI,” has warned that creating digital beings more intelligent than ourselves poses a genuine existential threat. Mustafa Suleyman, CEO of Microsoft AI, devoted much of his book The Coming Wave to the argument that AI’s fusion with synthetic biology could put the tools to engineer a deadly pandemic within reach of a single individual. These are not warnings about a distant future. Just last week, a clash over who controls AI, and on what terms, led to the complete collapse of one leading AI lab’s relationship with the Pentagon.

When politicians and business leaders try to make sense of issues like these, they are often tempted to look to the pharmaceutical industry for a regulatory model. Senator Richard Blumenthal—one of the few legislators actively pushing for meaningful AI regulation—has proposed that the way the U.S. government regulates the pharmaceutical industry can serve as a model for AI oversight. The analogy makes intuitive sense. The pharma model shows that strict licensing and oversight of potentially dangerous emerging technologies can limit threats without placing undue restrictions on innovation.

The instinctive attraction of this approach isn’t confined to legislators. Many companies are applying the same logic internally—whether consciously or not—managing AI risk through stage-gate reviews, pre-deployment testing, and post-launch monitoring. The pharma model, in other words, is already the de facto governance framework for much of the industry. The problem is that it’s the wrong framework—and the differences are not just technical but existential.

Three disanalogies that matter

Pharmaceutical regulation works because the barriers to entry are high, the product is physical and controllable, and the development cycle is slow enough for oversight to keep pace. None of these conditions hold for AI.

First, barriers to entry are very different. Bringing a new drug to market costs an average of $1.1 billion, according to a 2020 study published in the Journal of the American Medical Association. The infrastructure alone—laboratories, clinical trial networks, manufacturing facilities—limits production to a relatively small number of identifiable companies that regulators can monitor. AI has no equivalent friction. Capable models can be built for a fraction of that cost, fine-tuned on consumer hardware, and deployed globally from a laptop. The universe of actors a regulator would need to track is not a handful of identifiable companies—it is potentially anyone, anywhere.

Second, a pharmaceutical product is physical. Manufacturing it requires raw materials, specialized equipment, and distribution logistics. All of this creates friction that regulators can exploit by imposing oversight checkpoints. But code has no such friction. Once released, an AI model’s weights can be copied bit-for-bit and shared across borders far more quickly than any physical weapon or industrial system. Its marginal cost of replication is effectively zero. And you cannot recall software the way you recall a contaminated drug. Once it is in the wild, it stays in the wild.

Even capabilities delivered purely through cloud APIs are vulnerable to replication, and thus to the circumvention of corporate or regulatory guardrails. In just the last month, Anthropic disclosed that three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) had used 24,000 accounts to generate over 16 million exchanges with Claude, extracting its most advanced capabilities through a technique called distillation. The Chinese labs did not need to infiltrate a supply chain or build expensive factories. They only needed API access and carefully crafted prompts, routed through proxy networks designed to evade detection. There is no pharmaceutical equivalent of this replicability.
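
To make the mechanics concrete, here is a minimal sketch of what API-based distillation looks like in principle. The endpoint, model names, and response format below are hypothetical placeholders, not the actual setup described in Anthropic’s disclosure; the point is that nothing in it requires more than ordinary customer access.

    # Minimal sketch of API-based distillation (all endpoints and names hypothetical).
    # A capable "teacher" model is queried through its public API, and the
    # prompt/response pairs are harvested as training data for a "student" model.
    import json
    import requests

    TEACHER_URL = "https://api.example.com/v1/chat"  # hypothetical endpoint
    API_KEY = "sk-..."                               # an ordinary customer key

    def query_teacher(prompt: str) -> str:
        """Send one prompt to the teacher model and return its reply."""
        resp = requests.post(
            TEACHER_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": "teacher-large",
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["content"]  # hypothetical response schema

    # Harvest teacher outputs at scale; each pair becomes one training example.
    prompts = ["Explain, step by step, how to ...", "Write code that ..."]
    with open("distill_data.jsonl", "w") as f:
        for p in prompts:
            f.write(json.dumps({"prompt": p, "completion": query_teacher(p)}) + "\n")

    # The resulting file feeds standard supervised fine-tuning of the student,
    # transferring much of the teacher's capability without touching its weights.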

The final crucial disanalogy is speed. The pharma approval pipeline assumes that a product will go through years of controlled testing before it reaches the public. But AI models evolve on software timelines. Capabilities improve not only through hardware gains but through software updates, new training methods, and frequent model releases that can produce meaningful jumps in weeks rather than years. Anthropic, for instance, shipped two major Claude releases within ten weeks. The iteration cycle is so fast that by the time any pharma-style approval process could hope to evaluate a model, that model would already be obsolete, replaced by something far more powerful for which the evaluation process had not even begun.

Why “test, deploy, monitor” doesn’t work

The problem isn’t confined to government. The same pharma-shaped thinking that distorts regulatory frameworks has taken root inside organizations—and it leaves them exposed for the same reasons.

Pharma-type risks are familiar: a product might have harmful side effects, so you test it before deployment, monitor it afterward, and pull it back if something goes wrong. Even without an external regulator, many companies apply this same logic to AI internally through stage-gate reviews, pre-deployment testing, and post-launch monitoring. It feels responsible. It feels sufficient.

This is precisely the danger.

Of course, stage-gate reviews and pre-deployment testing are not worthless. They catch real errors, enforce discipline, and create a paper trail that demonstrates due diligence to boards and regulators. Any organization that has implemented them is better off than one that has done nothing. But these frameworks create a false sense of coverage. The risk they manage is the risk they were designed for—product defects, adverse effects, quality-control failures. AI’s risk profile has a different shape entirely. It is defined by the potential for irreversibility, rapid proliferation, and misuse. Not every AI-driven outcome will trigger these risks. But unlike a defective product, you cannot issue a recall once the damage is done.

This combination of potential threats means that the familiar toolkit of managed risk simply doesn’t fit—and organizations that believe it does are accepting exposures they haven’t mapped. It is precisely to meet these challenges that we developed the OPEN and CARE frameworks for managing AI innovation and risk. The CARE framework, in particular, provides a structured methodology for governing AI risk and is the foundation for the recommendations that follow.

Build governance for AI risk

The CARE framework works through four stages: Catastrophize, identifying what could go wrong; Assess, prioritizing those risks; Regulate, implementing controls; and Exit, planning for when those controls fail. Applied to your organization’s AI exposure, the framework points toward five immediate actions.

1. Surface your shadow AI exposure. Ask your direct reports one question: what AI tools are you using that weren’t provided by the company? The answers will tell you how large the gap is between the AI your organization officially uses and the AI your people are actually relying on.

2. Map your irreversibility points and your fallbacks. Identify the AI-dependent processes where a failure would be irreversible or highly damaging, such as automated customer communications, AI-assisted code pushed to production, and algorithmic hiring screens. Ask whether your current safeguards assume you can catch and correct errors before they reach the outside world. If they do, redesign them, and build explicit fallback procedures for when they fail anyway.

3. Lock down your data exposure. Every AI tool your organization touches is a data pipeline running in both directions. Classify your data into tiers (public, internal, confidential, restricted) and map which AI tools are authorized for each tier; a minimal sketch of such a mapping follows this list. Audit your vendor agreements for training-data clauses. The moment proprietary data enters a third-party system, your ability to recall it is gone.

4. Red team for misuse, not just malfunction. Red teaming for malfunction asks “What if this breaks?” Red teaming for misuse asks “What if this works exactly as intended and someone uses it for the wrong purpose?” As the CARE framework’s Catastrophize phase emphasizes, you need both.

5. Assign clear executive ownership. None of the above matters if accountability is diffused across committees. Designate a single executive who owns AI risk the way your CFO owns financial risk. That person needs authority, budget, and a direct line to the board.
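
As an illustration of the tier mapping in step 3, here is a minimal sketch of a data-tier authorization map. The tier names, tool names, and helper function are hypothetical; a real implementation would be driven by your own classification policy and vendor contracts.

    # Minimal sketch of a data-tier / AI-tool authorization map (all names hypothetical).
    from enum import IntEnum

    class Tier(IntEnum):
        PUBLIC = 0
        INTERNAL = 1
        CONFIDENTIAL = 2
        RESTRICTED = 3

    # The highest data tier each tool is cleared to receive.
    TOOL_CLEARANCE = {
        "public-chatbot": Tier.PUBLIC,         # consumer tool; may train on inputs
        "vendor-llm-api": Tier.INTERNAL,       # contract forbids training on our data
        "self-hosted-model": Tier.RESTRICTED,  # weights and data stay on our infrastructure
    }

    def is_authorized(tool: str, data_tier: Tier) -> bool:
        """A tool may only receive data at or below its clearance tier.
        Unknown tools (shadow AI) are denied by default."""
        clearance = TOOL_CLEARANCE.get(tool)
        return clearance is not None and data_tier <= clearance

    assert is_authorized("self-hosted-model", Tier.CONFIDENTIAL)
    assert not is_authorized("public-chatbot", Tier.INTERNAL)
    assert not is_authorized("unapproved-plugin", Tier.PUBLIC)  # shadow AI: deny

The deny-by-default rule for unknown tools ties step 3 back to step 1: a tool you have not surfaced is a tool you cannot authorize.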

The real stakes

For decades, the pharmaceutical model has been one of regulation’s great success stories: a framework that protects the public without strangling the industry. But that model is insufficient for AI.

At the governmental level, serious people are reaching for serious solutions. Sam Altman’s call at the New Delhi summit for an international regulatory body modeled on the International Atomic Energy Agency reflects a clearer-eyed view of what kind of technology this is—one that demands oversight frameworks commensurate with its actual risk profile, not models borrowed from industries that don’t share its characteristics.

Business leaders should follow the same path. The category of problem that governments are grappling with at the international level is the same category of problem you are grappling with inside your organization. Design your governance accordingly—for the technology you actually have, not the one you wish you were dealing with.
