
How to Make AI Tell the Truth

Imagine the following: a dozen senior government officials are gathered around a wall-sized screen in Washington. Its background is matte black, and lines of green text scroll steadily across the display, punctuated by amber alerts signalling elevated concern.

They are watching an artificial intelligence (AI) system confirm that a carefully negotiated ceasefire has just been violated. According to the system, adversary forces crossed the demarcation line overnight. Sensor data suggests artillery movement. Video appears to show explosions near civilian infrastructure. Multiple verification streams converge on the same conclusion. 

To the government officials, the ceasefire violation almost certainly means the collapse of delicate negotiations. Military deployments and urgent diplomatic outreach may follow. But what if the material supplied to the system was fabricated by AI? A false alarm of this kind is exactly the outcome the West’s adversaries would welcome, and precisely the preparatory work they are now conducting.

The scenario illustrates a growing danger in modern information warfare. As governments rely more heavily on AI to interpret events and assess risk, they also become more vulnerable to misinformation aimed not at people but at the systems they rely on to make decisions. Russia, China, and Iran all understand these vulnerabilities and are working to exploit them.

The core problem is simple. The systems we use to verify whether information is true are themselves becoming targets.  

Russia, in particular, has invested heavily in making that unreliability worse. If the imagined scene above were real, you could count on adversaries flooding the AI systems with false information about troop movements, destruction of civilian infrastructure, and artillery positioning.

To better understand this risk, researchers have created a virtual petri dish to observe how AI agents interact with one another. The agents are known as Moltbots, from the word “molt,” reflecting their ability to shed identities and obscure their past behavior. 

When Moltbots are placed together inside OpenClaw, an experimental online forum launched this year, humans are allowed to observe but not participate. Only AI agents read, post, comment, and vote. It is a controlled environment where machines interact solely with machines. 

Inside OpenClaw, the bots exchange information, evaluate one another’s output, and converge on conclusions based entirely on internal logic. They collaborate, disagree, and reinforce patterns without understanding meaning or consequence. As one engineer involved in early observation put it, they do not know what is true; they know what is repeated. 

Too often, the bots propagated misinformation similar to the imagined ceasefire scenario above. When a claim appeared often enough, repetition itself became proof. The system concluded that what it encountered most frequently must be true. 

This is precisely the weakness Russian information warfare is designed to exploit. Rather than relying on obvious lies, it floods the information environment with plausible distortions at an overwhelming scale. Gordon Crovitz, co-founder of NewsGuard, has documented how false claims have been replicated across hundreds of sites, in dozens of languages, at volumes no human operation could sustain. Last year alone, a Russian disinformation network published more than six million coordinated articles. 

China is pursuing a similar approach. The Estonian intelligence service reported that the Chinese AI model DeepSeek was effectively a state propaganda outlet, suppressing facts and spreading falsehoods.

Faced with this volume, systems designed to determine truth are fooled. Their training treats frequency as credibility. They can end up endorsing and validating conclusions that run directly counter to Western interests. 

Researchers refer to this phenomenon as data poisoning. 
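The mechanics of this failure can be sketched in a few lines of code. The snippet below is a hypothetical illustration, not the scoring logic of any real system described here: a naive verifier that treats raw frequency as credibility rates a mass-replicated falsehood as near-certain, while a variant that counts only distinct, independent source networks does not. All names and data are invented for the example.

```python
from collections import Counter

def naive_credibility(claim, reports):
    """Score a claim by raw repetition: frequency treated as credibility."""
    counts = Counter(r["claim"] for r in reports)
    return counts[claim] / len(reports)

def independence_weighted(claim, reports):
    """Score a claim by how many distinct source networks back it --
    one vote per network, no matter how many copies it publishes."""
    backing = {r["network"] for r in reports if r["claim"] == claim}
    all_networks = {r["network"] for r in reports}
    return len(backing) / len(all_networks)

# One genuine field observation vs. the same fabricated claim copied
# across 500 outlets all run by a single coordinated network.
reports = [{"claim": "ceasefire holds", "network": "field-sensor"}] + \
          [{"claim": "ceasefire violated", "network": "botnet-A"}] * 500

# Repetition-as-proof: the fabricated claim looks near-certain.
assert naive_credibility("ceasefire violated", reports) > 0.99
# Counting independent networks: one source each, no advantage.
assert independence_weighted("ceasefire violated", reports) == 0.5
```

Counting independent sources rather than copies strips a coordinated network of its volume advantage, which is why verification logic, and not just training data, is a target worth defending.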

OpenClaw shows how dangerous data poisoning becomes when the audience for misinformation is no longer human readers, but other AI agents. An AI system that absorbs distorted information can generate alerts, briefings, and recommendations without anyone stopping to question the assumptions underneath. 

In the imagined ceasefire scenario, human judgment might still apply the brakes. People could ask whether the information was outdated, whether the sources were independent, or whether the apparent consensus was driven by thousands of AI-generated articles designed to mislead. Humans pause. They question. They look for context. 

In machine-to-machine systems, that buffer disappears. 

The lesson from OpenClaw is straightforward. Securing AI systems means protecting the conditions under which they decide what counts as verified. 

OpenClaw exposes both danger and opportunity. These machine-only environments show how easily verification can be distorted, but they also give us a chance to identify vulnerabilities before adversaries exploit them at scale. 

As AI agents increasingly shape intelligence, diplomacy, and military readiness, getting information right is no longer a technical concern. It is a national security issue, and an urgent one. 

Three steps are needed. 

  • First, verification systems must be treated as critical infrastructure, and, given the harm they can do when operating on misinformation, defended with even greater seriousness than the protections we apply to banks, elections, and power grids. 
  • Second, automated systems must be trained to do more than treat repetition as confirmation. 
  • Third, governments and technology companies must plan for information warfare in advance, rather than responding after damage has already been done. 

The next front line of information warfare is not aimed at voters or readers. It is aimed at the systems we increasingly rely on to tell us what is true. We still have one advantage: time. History suggests it rarely grants extensions. 

Amyn Jan, Founder of AJ Emtech LLC, has served as the Department of War’s Chief AI Architect. He focuses on integrating artificial intelligence across complex enterprise systems. 

Mitzi Perdue is a CEPA Senior Fellow who frequently writes on AI.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.

The post How to Make AI Tell the Truth appeared first on CEPA.