
How AI Is Reshaping Science’s Most Trusted Tool

When scientists and regulators need clear answers to health risks—such as whether Tylenol causes autism (it doesn’t)—they typically turn to systematic reviews, widely regarded as medicine’s gold standard of evidence. Many aspects of our lives are governed by findings from these reviews: from the drugs prescribed by physicians, to vaccine mandates, to environmental policy. The problem is that conducting reviews is notoriously slow and labor-intensive. New artificial intelligence (AI) tools are poised to substantially accelerate this process, transforming how quickly scientific evidence filters into society. Handled responsibly, this could improve care and save lives—but only if it preserves the standards that make these reviews trustworthy.

Systematic reviews are a method of answering a scientific question by collecting and evaluating all the relevant studies. They’re called “systematic” because (in contrast to the single-author, state-of-the-field summaries they were designed to replace) there are strict standards for how to conduct them and how to report what the team finds. In principle, this makes the process transparent and reproducible: every step—from how studies are searched for and selected, to how their quality is assessed and their results synthesized—is clearly documented. This makes it harder to cherry-pick studies to support a preferred conclusion, and easier to identify bias.

In practice, this is an arduous task. Once the research team has settled on a precise question, a typical review requires two or three researchers to screen tens of thousands of papers by reading their titles and abstracts. Finding the relevant studies requires special expertise: reviewers must know where and how to search in order not to miss important findings. Papers that look promising are then set aside for reviewers to read in full and decide whether to include. Data from the selected papers are methodically extracted according to a plan determined beforehand, and the evidence is then synthesized.

In medicine, this process typically takes between ten and fourteen months. But reviews sometimes take much longer: one reviewer told me she was helping with one that had been four years in the making. These delays come with costs: in the early stages of the Covid-19 pandemic, even accelerated reviews were frequently outdated by the time they were published, meaning that clinicians and policymakers were often making decisions based on evidence syntheses that lagged behind the rapidly changing science.

The problem is getting worse as the number of published studies increases; there are more studies to screen, assess, and analyze. Modern science has thus produced a kind of paradox: the more research exists on a topic, the harder it becomes to say what the research as a whole implies.

“How we do systematic reviews needs to change,” Ella Flemyng, Head of Editorial Policy and Research Integrity at Cochrane—one of the world’s leading evidence-synthesis organizations—told me. “It’s not sustainable going forward.” That’s where AI comes in.


The most time-consuming part of a review—screening many thousands of abstracts—is a prime candidate for automation. It is also the sort of task that AI models are, at least in principle, good for.

A fairly narrow kind of AI is already baked into software reviewers use. Some tools, for instance, help reviewers by showing them the most likely relevant titles and abstracts first, and pushing the rest further down the stack. But the system doesn’t decide which studies are potentially relevant; it only sets the order of what the reviewer sees. The decision—and the reasoning behind it—still rests with the reviewer.

A new wave of tools, based on generative AI, aims to go beyond sorting papers and to automate various stages of the reviewing process. Some products, such as Elicit and SciSpace, feel like the chatbots we are so accustomed to: users can type a question, and the system returns a summary of the research (with sources). Effectively, these tools are trying to handle all aspects of the review—the search, inclusion, and synthesis. Others, like Nested Knowledge, are more constrained, and look more like the specialized software reviewers already trust, just with AI features layered in. In both cases, the promise is that work that currently takes months could soon be done in minutes or hours.

Now, a process typically filled with red tape feels like a scientific wild west. Generative AI-based tools are being heavily marketed, while strict guidelines for how to integrate them into the review pipeline have lagged behind. “Everything is moving very, very fast,” said Kristen Scotti, STEM Librarian at Carnegie Mellon. “A lot of the recommendations are not out yet, so people are just kind of flopping around.”

An increasing number of reviews are being conducted with these new tools. So far, these haven’t been published in the most prestigious journals, where they are likely to make the most impact, partly because there have been no widely accepted standards for what responsible AI use looks like.

This has begun to change. In November 2025, the world’s four major evidence synthesis organizations—Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence—published a position statement called the Responsible Use of AI In Evidence Synthesis (RAISE) to help guide the use of AI in reviews. The tone is cautious, reminding reviewers that they remain responsible for the output and that any tool has to be rigorously tested before it becomes part of a review. Flemyng, who is also an author of the statement, told me that “a lot of these [AI] uses in reviews are still exploratory. Or if you’re using them, you need to validate their use in the specific review before you actually use them. We don’t have the evidence base for a blanket roll-out for any of these tools.”

The RAISE recommendations fall short of the kinds of procedural detail that reviewers are used to. “That guidance is very general: it tells people what to do, without telling them how to do it,” Farhad Shokraneh, a researcher specializing in systematic reviews at the universities of Oxford and Bristol, told me.

Still, this statement offers a green light for integrating AI in reviewing workflows. With the backing of the leading synthesis organizations, we are likely to soon see many more—and better—reviews being published that use these tools.

The benefits will not just be (much) faster reviews. Evidence could also be updated more quickly as new studies come in, turning today’s slow, static reviews into something closer to living, continually refreshed summaries. Already, some funders, like the London-based health research charity Wellcome Trust, are dreaming of “real-time aggregation of scientific data.”

Researchers may also be able to include studies in more languages. This is a hurdle for contemporary reviews, said Margaret Foster, Director of Evidence Synthesis Services at the Medical Sciences Library at Texas A&M University. “I’ll have students who want to look at acupuncture for nausea caused by chemotherapy,” she said. “And we know there’s a lot of research coming out of China. It would be great if we could access that research.”

In an optimistic scenario, anyone could run reliable systematic reviews on their computer by simply asking the model a question and letting it filter through the literature and summarize it succinctly. But the veteran reviewers I interviewed unanimously agreed: we’re far from that point. Many are skeptical we’ll ever get there.


For starters, many of the AI-based tools are not reproducible, lacking one of the fundamental characteristics of science. Multiple studies have found that prompting these models with the same query—say, “what is the effect of acupuncture on chemotherapy-induced nausea?”—at different times may result in the model selecting different studies and producing different results. Even small changes in the phrasing of the prompt can likewise yield dramatically different outputs.

These systems are black boxes: the procedure that generated an output can be difficult (and sometimes practically impossible) to trace. Together, a lack of reproducibility and transparency is a problem for evidence-based policy. It could mean recommending a cancer treatment based on a review that no one can independently verify or fully understand.

Another fundamental problem is that “AI gives you [a] false sense that everything has been searched,” Foster told me. “But these AI tools don’t have access to all the databases.” They are often trained only on freely available scientific papers, which represent only a slice of the scientific literature. Having access to the best databases is essential for having an accurate picture of the evidence. But it is also prohibitively expensive.

These concerns are even more pressing when public information itself is in flux. The Trump administration has been removing datasets and studies from the national libraries. If AI-based tools are trained on or search uneven scientific records, then we lose the original motivation for systematic reviews: synthesizing all the relevant evidence on a question.

There are also equity issues. Some tools are only available in certain countries, or behind paywalls, raising the prospect of a widening gap between high-income countries that can afford the newest systems and manage the largest databases, and low- and middle-income countries that cannot.

Implementing AI tools in reviews without addressing these risks could be disastrous, mixing an illusion of scientific credibility with poor-quality outputs. In turn, this could further erode public trust in scientists and scientific inquiry—including confidence in the safety of vaccine recommendations—where it’s already disturbingly low.

But there are ways (at least in theory) of mitigating the risks, even if doing so will require substantial work: the AI tools could be built to support a high level of reproducibility and transparency, and broader pushes for open databases could help ensure that everyone has access to the full evidential record.


I asked Shokraneh whether he thinks these tools will eventually replace him as a reviewer. “I hope so,” he said. “I may end up in the street because I will have no job. But that’s okay if that means my mom will live ten years longer.”

I hope for a future where good evidence is produced faster and more people have access to it—where I can ask a detailed question about health or economics to an app on my phone and trust that it returns a scientifically sound synthesis of the relevant available research. But we are not there yet. And until scientists have rigorously evaluated current tools, placing too much trust in their output isn’t a good idea.

Still, when I heard that Robert F. Kennedy Jr.—the nation’s Health and Human Services Secretary—claimed that the statement “Vaccines do not cause autism” is “not supported by science,” I decided to check with Elicit. In a long, scientific-looking report, it told me that studies “consistently found no association between vaccination and autism spectrum disorders.” I rephrased the question multiple times, and the answer remained the same. Days later, on January 5, 2026, the federal childhood immunization schedule was revised, narrowing routine recommendations for several vaccines.

AI evidence-synthesis tools still have many limitations. But maybe some policymakers could already benefit from using them.
