AI is rewriting the rules of biological experiments, but safety regulations aren’t keeping up

Artificial intelligence is rapidly learning to autonomously design and run biological experiments, but the systems intended to govern those capabilities are struggling to keep pace.

AI company OpenAI and biotech company Ginkgo Bioworks announced in February 2026 that OpenAI’s flagship model GPT-5 had autonomously designed and run 36,000 biological experiments. It did this through a robotic cloud laboratory, a facility where automated equipment controlled remotely by computers carries out experiments. The AI model proposed study designs, and robots carried them out and fed the data back to the model for the next round. Humans set the goal, and the machines did much of the work in the lab, cutting the cost of producing a desired protein by 40%.

This is programmable biology: designing biological components on a computer and building them in the physical world, with AI closing the loop.

For decades, biology mostly moved from observation toward understanding. Scientists sequenced the genomes of organisms to catalog all of their DNA, learning how genes encode the proteins that carry out life’s functions. The invention of tools like CRISPR then allowed scientists to edit that DNA for specific purposes, such as disabling a gene linked to disease. AI is now accelerating a third phase, where computers can both design biological systems and rapidly test them.

The process looks less like traditional benchwork in a lab and more like engineering: design, build, test, learn, and repeat. Where a traditional experiment might test a single hypothesis, AI-driven programmable biology explores thousands of design variations in parallel, iterating the way an engineer refines a prototype.
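
To make that loop concrete, here is a minimal sketch of a design-build-test-learn driver in Python. Everything in it is hypothetical: `propose_designs` and `run_in_cloud_lab` are stand-ins for a trained AI model and a robotic cloud laboratory, not any real OpenAI or Ginkgo Bioworks interface, and the random outputs exist only to show the control flow.

```python
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def propose_designs(history, batch_size=96):
    """Stand-in for an AI model proposing protein variants.

    A real system would condition on `history` (past designs and their
    measurements); this placeholder emits random sequences so the loop runs.
    """
    return ["".join(random.choices(ALPHABET, k=50)) for _ in range(batch_size)]

def run_in_cloud_lab(designs):
    """Stand-in for dispatching designs to a robotic cloud laboratory.

    Pretends each design comes back with a measured yield between 0 and 1.
    """
    return {design: random.random() for design in designs}

history = []                    # accumulating (design, measurement) pairs
for round_num in range(10):     # design -> build/test -> learn, repeated
    designs = propose_designs(history)
    results = run_in_cloud_lab(designs)
    history.extend(results.items())
    print(f"round {round_num}: best yield so far "
          f"{max(measurement for _, measurement in history):.3f}")
```

The interesting engineering lives inside `propose_designs`: the better the model learns from `history`, the fewer rounds the loop needs, which is plausibly where savings like the 40% cost reduction reported above come from.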

As a data scientist who studies genomics and biosecurity, I research how AI is reshaping biological research and what safeguards that demands. Current safety measures and regulations have not kept pace with these capabilities, and the gap between what AI can do in biology and what governance systems are prepared to handle is growing.

What AI makes possible

The clearest example of how researchers are using AI to automate research is AI-accelerated protein design.

Proteins are the molecular machines that carry out most functions in living cells. Designing new ones has traditionally required years of trial and error because even small changes to a protein’s sequence can alter its shape and function in unpredictable ways.

Protein language models, which are AI systems trained on millions of natural protein sequences, can quickly predict how mutations will change a protein’s behavior or design new proteins. These AI models are designing potential new drugs and speeding vaccine development.
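
One common way such models score a candidate mutation is a log-likelihood ratio: mask a position, ask the model for the probability of each amino acid there, and compare the mutant against the wild type. The sketch below fakes the model call with a toy distribution; `masked_log_probs` is a placeholder, not a real protein-language-model API, and the wild-type sequence is invented.

```python
import math

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"
WT_SEQ = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # invented toy wild-type sequence

def masked_log_probs(sequence, position):
    """Placeholder for a protein language model.

    A real model would mask `position`, condition on the rest of `sequence`,
    and return log P(amino acid) for each residue. This toy version just
    mildly favors the wild-type residue so the example runs end to end.
    """
    weights = {aa: 1.0 for aa in ALPHABET}
    weights[sequence[position]] = 2.0
    total = sum(weights.values())
    return {aa: math.log(w / total) for aa, w in weights.items()}

def mutation_score(sequence, position, mutant_aa):
    """Log-likelihood ratio of mutant vs. wild-type residue at a position.

    Positive means the model prefers the mutation; negative, the wild type.
    """
    log_probs = masked_log_probs(sequence, position)
    return log_probs[mutant_aa] - log_probs[sequence[position]]

print(mutation_score(WT_SEQ, 10, "W"))  # negative here: toy model favors Q
```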

Paired with automated labs, these models create tight loops of experimentation and revision, testing thousands of variations in days rather than the months or years a human team would need.

Faster protein engineering could mean faster responses to emerging infections and cheaper drugs.

The dual-use problem

Researchers have raised concerns that these same AI tools could be misused, a challenge known as the dual-use problem: Technologies developed for beneficial purposes can also be repurposed to cause harm.

For example, researchers have found that AI models integrated with automated labs can optimize how well a virus spreads, even without specialized training. Scientists have developed a risk-scoring tool to evaluate how AI could modify a virus’s capabilities, such as altering which species it infects or helping it evade the immune system.

Current AI models are able to walk users through the technical steps of recovering live viruses from synthetic DNA. Researchers have determined that AI could lower barriers at multiple stages in the process of developing a bioweapon, and that current oversight does not adequately address this risk.

Risk from bio AI

Experienced scientists are already using AI to plan and design biological experiments. The question of whether AI can help people with limited biology training carry out dangerous lab work is the subject of active research.

Two recent studies have reached different conclusions.

A study by AI company Scale AI and biosecurity nonprofit SecureBio found that when people with limited biology experience were given access to large language models, the type of AI behind tools like ChatGPT, they completed biosecurity-related tasks, such as troubleshooting complex virology lab protocols, with four times greater accuracy. In some areas, these novices outperformed trained experts. Around 90% of them reported little difficulty getting the models to provide risky biological information, such as detailed instructions for working with dangerous pathogens, despite built-in safety filters meant to block such outputs.

In contrast, a study led by Active Site, a research nonprofit that studies the use of AI in synthetic biology, found that AI help did not lead to significant differences in the ability of novices to complete the complex workflow to produce a virus in a biosafety laboratory. However, the AI-assisted group succeeded more often on most tasks and finished some steps faster, most notably on growing cells in the lab.

Hands-on work in the lab has traditionally been the bottleneck in translating designs into results. Even a brilliant study plan still depends on skilled human hands to carry it out. That may not last: as cloud laboratories and robotic automation become cheaper and more accessible, researchers can send AI-generated experimental designs to remote facilities for execution.

Responding to AI-driven biological risks

AI systems are now able to run experiments autonomously and at scale, but existing regulations were not designed for this. Rules governing biological research do not account for AI-driven automation, and rules governing AI do not specifically address its use in biology.

In the U.S., the Biden administration had issued a 2023 executive order on AI security that included biosecurity provisions, but the Trump administration revoked it. Screening the synthetic DNA that commercial providers make to ensure it cannot be misused to make pathogens or toxins remains mostly voluntary. A bipartisan bill introduced in 2026 to mandate DNA screening does not yet address AI-designed sequences that evade current detection methods.
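
To see why AI-designed sequences pose a screening problem, it helps to know that current screening is largely similarity-based: an order is flagged when it closely matches a known sequence of concern. The toy sketch below, using fabricated sequences and a crude shared-k-mer test in place of real screening pipelines, shows how a sequence rewritten to preserve function while changing its letters can slip through.

```python
# Fabricated example sequences; crude k-mer overlap stands in for the far more
# sophisticated matching that real DNA synthesis screening pipelines use.
WATCHLIST = {"ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"}

def kmers(seq, k=12):
    """All length-k substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flag_order(order_seq, threshold=0.5):
    """Flag an order that shares enough k-mers with a watchlist sequence."""
    for concern in WATCHLIST:
        overlap = len(kmers(order_seq) & kmers(concern))
        if overlap / max(1, len(kmers(concern))) >= threshold:
            return True
    return False

assert flag_order("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA")       # exact copy: caught
assert not flag_order("ATGGCAATCGTCATGGGACGTTGTAAAGGCTGTCCAA")  # rewrite: missed
```

A model that optimizes for function rather than sequence identity produces exactly the second kind of order, which is the gap the proposed legislation has yet to address.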

The 1975 Biological Weapons Convention, an international treaty prohibiting the production and use of bioweapons, contains no provisions for AI. The U.K. AI Security Institute and the U.S. National Security Commission on Emerging Biotechnology have both called for coordinated government action.

The safety evaluations that AI labs run before releasing new models are often opaque and ill-suited to capturing real-world risk. Researchers have estimated that even modest improvements in an AI model’s ability to help plan pathogen-related experiments could translate to thousands of additional deaths from bioterrorism per year. Timelines for when these capabilities cross critical thresholds remain unclear.

The Nuclear Threat Initiative has proposed a managed access framework for biological AI tools, matching who can use a given tool to the risk level of the model rather than blanket restrictions. The RAND Center on AI, Security and Technology outlined a set of actions researchers could take to improve biosecurity, including improved DNA synthesis screening and model evaluations before release. Researchers have also argued that biological data itself needs governance, especially genomic data that could train models with dangerous capabilities.
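
The core of the managed-access idea can be stated in a few lines of code: authorization is a comparison between a user's verified tier and the model's risk tier, not a single global switch. The sketch below is illustrative only; the tier names, numbers, and registries are invented, not taken from the NTI proposal.

```python
MODEL_RISK_TIER = {              # hypothetical catalog: tool -> required tier
    "protein-yield-optimizer": 1,
    "viral-protein-designer": 3,
}

USER_CLEARANCE = {               # hypothetical registry: user -> verified tier
    "vetted.researcher@university.edu": 3,
    "anonymous": 0,
}

def authorize(user, model):
    """Permit use only when verified clearance meets the model's risk tier."""
    required = MODEL_RISK_TIER.get(model, 0)
    return USER_CLEARANCE.get(user, 0) >= required

assert authorize("vetted.researcher@university.edu", "viral-protein-designer")
assert not authorize("anonymous", "viral-protein-designer")
```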

Some AI companies have started voluntarily imposing their own safety measures. Anthropic activated its highest safety tier when it released its most advanced model in mid-2025. Around the same time, OpenAI updated its Preparedness Framework, revising the thresholds for how much biological risk a model can pose before additional safeguards are required. But these are voluntary, company-specific steps. Anthropic’s CEO, Dario Amodei, wrote that the pace of AI development may soon outrun any single company’s ability to assess the risk of a given model.

When used in a well-controlled setting, AI can help scientists quickly reach their research goals. What happens when the same capabilities operate outside those controls is a question that policy has not yet answered. Overreact, and talent and investment may move elsewhere while the technology continues advancing anyway. Underreact, and the risks of that technology could be exploited to cause real harm.

Stephen D. Turner is an associate professor of data science at the University of Virginia.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
