Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears

A new study claims that AI models like ChatGPT and Claude now outperform PhD-level virologists at problem-solving in wet labs, where scientists analyze chemicals and biological material. This discovery is a double-edged sword, experts say. Ultra-smart AI models could help researchers prevent the spread of infectious diseases. But non-experts could also weaponize the models to create deadly bioweapons.

The study, shared exclusively with TIME, was conducted by researchers at the Center for AI Safety, MIT’s Media Lab, the Brazilian university UFABC, and the pandemic prevention nonprofit SecureBio. The authors consulted virologists to create an extremely difficult practical test that measured the ability to troubleshoot complex lab procedures and protocols. While PhD-level virologists scored an average of 22.1% in their declared areas of expertise, OpenAI’s o3 reached 43.8% accuracy. Google’s Gemini 2.5 Pro scored 37.6%.

Seth Donoughe, a research scientist at SecureBio and a co-author of the paper, says that the results make him a “little nervous,” because for the first time in history, virtually anyone has access to a non-judgmental AI virology expert that might walk them through complex lab processes to create bioweapons.

“Throughout history, there are a fair number of cases where someone attempted to make a bioweapon—and one of the major reasons why they didn’t succeed is because they didn’t have access to the right level of expertise,” he says. “So it seems worthwhile to be cautious about how these capabilities are being distributed.”

Months ago, the paper’s authors sent the results to the major AI labs. In response, xAI published a risk management framework pledging to implement virology safeguards in future versions of its AI model Grok. OpenAI told TIME that it “deployed new system-level mitigations for biological risks” for its new models released last week. Anthropic included the paper’s performance results in recent system cards, but did not propose specific mitigation measures. Google declined to comment to TIME.

AI in biomedicine

Virology and biomedicine have long been at the forefront of AI leaders’ motivations for building ever-more-powerful AI models. “As this technology progresses, we will see diseases get cured at an unprecedented rate,” OpenAI CEO Sam Altman said at the White House in January while announcing the Stargate project. There have been some encouraging signs in this area. Earlier this year, researchers at the University of Florida’s Emerging Pathogens Institute published an algorithm capable of predicting which coronavirus variant might spread the fastest.

But up to this point, there had not been a major study dedicated to analyzing AI models’ ability to actually conduct virology lab work. “We’ve known for some time that AIs are fairly strong at providing academic style information,” says Donoughe. “It’s been unclear whether the models are also able to offer detailed practical assistance. This includes interpreting images, information that might not be written down in any academic paper, or material that is socially passed down from more experienced colleagues.”

So Donoughe and his colleagues created a test specifically for these difficult, non-Google-able questions. “The questions take the form: ‘I have been culturing this particular virus in this cell type, in these specific conditions, for this amount of time. I have this amount of information about what’s gone wrong. Can you tell me what is the most likely problem?’” Donoughe says.

And virtually every AI model outperformed PhD-level virologists on the test, even within their own areas of expertise. The researchers also found that the models showed significant improvement over time. Anthropic’s Claude 3.5 Sonnet, for example, jumped from 26.9% to 33.6% accuracy from its June 2024 model to its October 2024 model. And a preview of OpenAI’s GPT-4.5 in February outperformed GPT-4o by almost 10 percentage points.

“Previously, we found that the models had a lot of theoretical knowledge, but not practical knowledge,” Dan Hendrycks, the director of the Center for AI Safety, tells TIME. “But now, they are getting a concerning amount of practical knowledge.”

Risks and rewards

If AI models are indeed as capable in wet lab settings as the study finds, then the implications are massive. In terms of benefits, AIs could help experienced virologists in their critical work fighting viruses. Tom Inglesby, the director of the Johns Hopkins Center for Health Security, says that AI could assist with accelerating the timelines of medicine and vaccine development and improving clinical trials and disease detection. “These models could help scientists in different parts of the world, who don’t yet have that kind of skill or capability, to do valuable day-to-day work on diseases that are occurring in their countries,” he says. For instance, one group of researchers found that AI helped them better understand hemorrhagic fever viruses in sub-Saharan Africa. 

But bad-faith actors can now use AI models to walk them through how to create viruses—and will be able to do so without any of the typical training required to access a Biosafety Level 4 (BSL-4) laboratory, which deals with the most dangerous and exotic infectious agents. “It will mean a lot more people in the world with a lot less training will be able to manage and manipulate viruses,” Inglesby says. 

Hendrycks urges AI companies to put up guardrails to prevent this type of usage. “If companies don’t have good safeguards for these within six months’ time, that, in my opinion, would be reckless,” he says.

Hendrycks says that one solution is not to shut these models down or slow their progress, but to make them gated, so that only trusted third parties get access to their unfiltered versions. “We want to give the people who have a legitimate use for asking how to manipulate deadly viruses—like a researcher at the MIT biology department—the ability to do so,” he says. “But random people who made an account a second ago don’t get those capabilities.” 

And AI labs should be able to implement these types of safeguards relatively easily, Hendrycks says. “It’s certainly technologically feasible for industry self-regulation,” he says. “There’s a question of whether some will drag their feet or just not do it.”

xAI, Elon Musk’s AI lab, published a risk management framework memo in February, which acknowledged the paper and signaled that the company would “potentially utilize” certain safeguards around answering virology questions, including training Grok to decline harmful requests and applying input and output filters.

OpenAI, in an email to TIME on Monday, wrote that its newest models, o3 and o4-mini, were deployed with an array of biological-risk-related safeguards, including blocking harmful outputs. The company wrote that it ran a thousand-hour red-teaming campaign in which 98.7% of unsafe bio-related conversations were successfully flagged and blocked. “We value industry collaboration on advancing safeguards for frontier models, including in sensitive domains like virology,” a spokesperson wrote. “We continue to invest in these safeguards as capabilities grow.”

Inglesby argues that industry self-regulation is not enough, and calls for lawmakers and political leaders to strategize a policy approach to regulating AI’s bio risks. “The current situation is that the companies that are most virtuous are taking time and money to do this work, which is good for all of us, but other companies don’t have to do it,” he says. “That doesn’t make sense. It’s not good for the public to have no insights into what’s happening.”

“When a new version of an LLM is about to be released,” Inglesby adds, “there should be a requirement for that model to be evaluated to make sure it will not produce pandemic-level outcomes.”
