
Eurasia Review Interviews: AI, Agentic Influence, And Cognitive Security In The Age Of Synthetic Consensus

An interview with Dr Benjamin Delhomme, AI policy and cognitive warfare expert, and former Senior Expert on AI at NATO StratCom COE

Artificial intelligence (AI) is frequently discussed in terms of deepfakes, synthetic media, viral deception, and broader misinformation, disinformation and malinformation (MDM) campaigns. Yet emerging research suggests that the more consequential shift may lie elsewhere: not in individual pieces of misleading content, but in the engineering of social environments themselves.

In this conversation with EurAsia Review, Dr Benjamin Delhomme examines how large language models (LLMs) and multi-agent systems are reshaping the operational logic of influence. He argues that detection-led approaches are structurally inadequate in adversarial environments, and that the core challenge is not distinguishing true from false content, but preserving the integrity of public discourse under conditions of synthetic amplification.

Drawing on his experience within the NATO ecosystem and his analysis of LLM governance, Dr Delhomme explores the limits of human oversight, the risks of identity-based lockdowns, the emerging practice of large language model grooming, and the broader implications for democratic accountability.

Rather than focusing on isolated incidents, this interview approaches influence as an environmental phenomenon in which engineered perceptions of consensus, narrative anchoring, and data governance increasingly shape the boundaries of cognitive security.

Q. Public debate around AI and democracy still tends to focus on fake content — deepfakes, misleading posts, or deceptive media. Recent research on malicious multi-agent “swarms” instead frames the threat as environmental, rooted in synthetic social dynamics that fabricate consensus and distort perceived norms. From your perspective, what fundamentally changes when the unit of manipulation is no longer a message, but the social environment itself?

Dr. Benjamin Delhomme

A: This question hits the mark. Most people are still fixating on the obvious: the message that everyone can see, often amplified by bot networks.

Since LLMs entered the public space, I have focused on the impact at the bottom of the social network where human attention is highest. People face a growing threat of manipulation by AI agents within their trusted inner circles. Because this happens in a trusted environment, these agents can fabricate a consensus that looks organic.

When I was working at the NATO StratCom COE, I referred to this as the 'AI Latent Threat'. Whether we call it that or 'malicious AI swarms', as I read recently in a paper, the fundamental change remains the same: we are moving from a battle over the truth of a message to a battle over the perception of reality. You can fact-check a message, but you cannot fact-check a social environment that has been engineered to look like everyone around you has changed their minds. The result is a constant loss of control over our narratives, and it is part of what we now call Cognitive Warfare.

Q. You have consistently emphasised the importance of keeping human oversight central to how AI systems are designed and governed. In practice, what does meaningful human oversight look like when influence operations can be conducted by autonomous, coordinating agents that adapt in real time across platforms? Where do current institutional assumptions about “human-in-the-loop” models begin to break down?

A: First, we have to be careful with the word 'AI.' It has become a marketing term, effectively meaning nothing, as there is still no generally accepted definition of intelligence.

If we talk specifically about LLMs, most people still struggle with the fact that these models do not reason; they predict outcomes based on the reasoning paths they were trained on. So, where does the 'human-in-the-loop' break down? It breaks down the moment we insert these probabilistic systems into decision-making roles. LLMs are opaque black boxes: you get an input and an output, but you cannot access the 'in-between' justification at scale.

This creates a direct contradiction with policies like the EU AI Act or NATO’s Principles of Responsible Use. You cannot claim to build a 'responsible' system if the core engine is inherently non-transparent.

My approach is not to discard these capabilities, but to use them strictly for what they are: semantic engines and reframers, not reasoning or decision-making systems. Meaningful oversight means ensuring that any autonomous decision is anchored in a transparent, traceable process that can be read by both the machine and the human. If a machine decides for us without a proper, accessible justification, we have to ask: are we still living in a democratic society?

Q. A growing body of evidence suggests that humans are not reliably able to detect AI-generated content, and that scepticism without verification can erode trust without improving discernment. If “spotting the fake” is no longer a viable civic defence at scale, what replaces it? Is the objective shifting from truth verification to protecting the resilience of the information environment itself?

A: I am going to be blunt here: the detection of AI-generated content is a lost cause, and hundreds of millions of dollars have been wasted on it because people mistake a design problem for a technical one.

You don't detect 'AI'; you detect patterns. But patterns are also learned by humans. We are seeing a dynamic where humans adopt the rhetorical style of LLMs, and LLMs adopt the rhetorical styles of Reddit or research papers. If a detector flags content as 95% AI, can you punish someone? No. During my time as a Senior Expert in AI within the NATO ecosystem, not a single system could reliably detect my prompting. These tools often just measure confirmation bias: people want to believe something is fake, so they accept the number they are given.

We need to broaden our paradigm beyond just 'the fake.' Influence operations span a full spectrum: disinformation, misinformation, and malinformation. In this framework, the distinction between true and false is only a subset of the problem. Malinformation, for example, weaponizes pure facts to deceive. The veracity of the information matters less than its ability to anchor a narrative, especially since human information processing is rarely entirely rational.

Therefore, the objective must shift to narrative resilience. We need to ensure that audiences adhere to an overarching narrative anchored in their own values and culture. If that narrative is legitimate and established, the audience becomes resilient to manipulation, whether it is AI-generated or not. The next battle is not about policing lies (people are allowed to lie) but about countering the framing that makes a lie look like the truth.

Q. Detection technologies — whether for bots, deepfakes, or AI-generated text — are often presented as the first line of defence. Yet your work highlights the practical limits of detection, especially under adversarial conditions. In swarm-based influence operations, where agents can vary persona, timing, and behaviour to evade thresholds, what structural risks arise from over-reliance on detection-led approaches?

A: The immediate structural risk is that over-reliance on detection forces us into a binary choice. Since detection inevitably fails against adaptive swarms, the only technical 'fix' left is a complete identity lockdown of social networks, linking every account to a verified government ID.

That is the 'easy' solution, but it fundamentally breaks the dynamic of the open internet. It eliminates the pseudonymity that currently allows for open debate and protects dissenting voices.

There is no single perfect solution, but unless such measures are chosen by the people themselves, we must avoid steps that inadvertently lead to authoritarian control. Instead of trying to detect who is speaking (AI agent vs. human), we could use LLMs to assess what is being done: specifically, to detect the intent to influence.

If we train models to flag malicious framing rather than just fake content, we empower users to filter what they consume based on rhetoric. I’ll allow myself one theory here: this could create a new, secondary structural risk in the form of assisted Logos—logic before emotion and authority. In the persuasion realm, humans haven't changed. We are just as irrational as we were in Aristotle’s time. But if the tooling makes the population immune to rhetorical tricks, we effectively strip away the emotional appeals (Pathos) essential to domestic political discourse. We have to be ready for a society that is harder to persuade, even by its own leaders.
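
What 'assessing what is being done rather than who is speaking' could look like in practice is sketched below in Python. This is an illustration added for readers, not a system described in the interview; the framing taxonomy and the `ask_llm` callable are placeholders for whatever model and categories a platform would actually choose. The design point it tries to capture is that the output is context shown to the reader, never a deletion or identity decision.

```python
# Hypothetical sketch: audit the rhetorical framing of a post instead of
# guessing whether its author is a human or an AI agent.
# `ask_llm` is a caller-supplied function (an assumption, not a real API):
# it takes a prompt string and returns the model's raw text response.
import json
from typing import Callable

# Illustrative, non-exhaustive framing taxonomy (assumption).
FRAMING_LABELS = [
    "appeal_to_false_consensus",  # "everyone already agrees that..."
    "urgency_pressure",           # "act now, before it is too late"
    "outgroup_blame",             # one group framed as the sole cause
    "neutral_or_unclear",
]

PROMPT_TEMPLATE = (
    "You are auditing rhetoric, not truth.\n"
    "Label the framing of the post below with exactly one of: {labels}.\n"
    'Answer in JSON: {{"label": "...", "evidence": "short quote from the post"}}\n\n'
    "POST:\n{post}\n"
)

def flag_framing(post: str, ask_llm: Callable[[str], str]) -> dict:
    """Return a framing label plus quoted evidence, for display to the user.

    The result is contextual information shown alongside the post; it is not
    used to delete content or to identify the author.
    """
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(FRAMING_LABELS), post=post)
    try:
        result = json.loads(ask_llm(prompt))
    except ValueError:
        result = {"label": "neutral_or_unclear", "evidence": ""}
    if result.get("label") not in FRAMING_LABELS:
        result["label"] = "neutral_or_unclear"
    return result
```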

Q. Policy frameworks frequently assume that transparency and explainability can anchor accountability. But in multi-agent systems where influence emerges from coordination rather than intent, attribution becomes diffuse and contested. In this context, who is explainability actually for — regulators, developers, or the public — and does it meaningfully prevent harm, or mainly help assign responsibility after the fact?

A: It depends on the society you want to build. If we aim for a system where key choices are made by the people, explainability acts as a safeguard against foreign control.

If your processes are built around transparent and explainable approaches, you can always designate a responsible party. That is impossible with current LLMs because no engineer can be held accountable for the trillions of probabilistic outcomes a system might generate.

This is why I advocate for a dynamic system that digitizes knowledge at the national level. We need a repository of verified knowledge and narratives accessible to both humans and machines. By forcing companies to anchor their systems in this curated source, we would drastically reduce foreign interference and counter the narrative poisoning that erodes the morale of the user base.

Furthermore, many decision processes do not even require the generative capabilities of LLMs. They often require highly structured datasets with expert oversight. Instead, we have settled for the illusion of national knowledge through black-box systems. These are merely snapshots of unstructured data from a past moment, meaning we have effectively ceded control of the narrative.
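
A minimal sketch of that anchoring idea, added here as illustration rather than anything described by Dr Delhomme, might look like the following. The `Entry` structure, the toy retrieval, and the `reframe` callable are all assumptions; the point is only that the generative model is confined to rewording curated, identifiable entries, so every answer stays traceable to its sources and the system refuses when the repository is silent.

```python
# Hypothetical sketch of "anchoring" generation in a curated, citable
# knowledge repository: the model may only reword retrieved entries, and
# every answer carries the identifiers of the sources it drew on.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Entry:
    entry_id: str  # stable identifier inside the curated repository
    text: str      # statement readable by both humans and machines

def retrieve(query: str, repository: list[Entry], k: int = 3) -> list[Entry]:
    """Toy keyword-overlap retrieval; a real system would use proper search."""
    q = set(query.lower().split())
    scored = [(len(q & set(e.text.lower().split())), e) for e in repository]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for score, e in scored[:k] if score > 0]

def anchored_answer(query: str, repository: list[Entry],
                    reframe: Callable[[str], str]) -> dict:
    """`reframe` is a placeholder LLM call used strictly as a semantic reframer."""
    sources = retrieve(query, repository)
    if not sources:
        # Refusing is the traceable behaviour when the curated source is silent.
        return {"answer": "No curated entry covers this question.", "sources": []}
    context = "\n".join(f"[{e.entry_id}] {e.text}" for e in sources)
    prompt = f"Reword only what these entries state, citing their IDs:\n{context}\n\nQuestion: {query}"
    return {"answer": reframe(prompt), "sources": [e.entry_id for e in sources]}
```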

Q. The idea of “cognitive security” is often interpreted narrowly as countering false narratives or improving fact-checking. The swarm model suggests a different centre of gravity: manufactured social proof, norm distortion, redundancy, and information overload. How do you define cognitive security in this broader sense, and what is the most persistent misunderstanding policymakers have about how influence now operates online?

A: As I mentioned earlier, the most persistent misunderstanding among policymakers is the belief that AI-generated content can be reliably detected. Once they realize that the tools themselves are undetectable, they tend to pivot toward restrictive lockdowns. This happens because they fail to separate the tool from the method.

I define cognitive security as the act of revealing the intent and framing behind a message rather than policing the source. In a swarm model, the problem is not one fake story. The real issue is fake popularity: swarms use constant repetition to make a tiny, fringe idea look like it is what everyone believes.

Building cognitive security means giving the population tools to see through this environmental manipulation. If we can show the public that the social agreement they see is an engineered illusion, we restore their ability to think independently.

There are certainly other ways to address this issue. Some experts will focus on media literacy or platform regulations, and those are valuable perspectives. However, I believe the true defense lies in revealing the maneuver behind the swarm so it loses its power to distort our social norms.
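
To picture what 'revealing the maneuver' could mean technically, here is a deliberately simple sketch (an illustration, not a tool discussed in the interview) that scores how much of a topic's apparent agreement comes from near-identical wording repeated across accounts. Real coordination analysis would also weigh timing, account age, and network structure; the shingle size and threshold below are assumptions.

```python
# Hypothetical sketch: estimate how much of a topic's apparent support comes
# from near-identical wording repeated across different accounts.
from itertools import combinations

def shingles(text: str, n: int = 3) -> set[str]:
    """Overlapping n-word chunks used as a cheap similarity signature."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

def jaccard(a: set[str], b: set[str]) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def synthetic_amplification_share(posts: list[tuple[str, str]],
                                  threshold: float = 0.6) -> float:
    """posts is a list of (account_id, text) pairs on one topic.

    Returns the share of cross-account post pairs that are near-duplicates.
    A high share suggests the 'popularity' is repetition, not many minds.
    """
    signatures = [(account, shingles(text)) for account, text in posts]
    cross_pairs = [(a, b) for a, b in combinations(signatures, 2) if a[0] != b[0]]
    if not cross_pairs:
        return 0.0
    near_dupes = sum(1 for a, b in cross_pairs if jaccard(a[1], b[1]) >= threshold)
    return near_dupes / len(cross_pairs)

# Example: three accounts pushing almost the same sentence would score high,
# while genuinely independent phrasings would score near zero.
```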

Q. Recent research points to early forms of what has been described as “LLM grooming” — seeding large volumes of duplicative, crawler-targeted content to contaminate future training corpora and harden fabricated narratives into subsequent model updates. How should democratic societies think about this machine-facing pathway of influence, and does it require a different governance response from traditional disinformation?

A: The biggest flaw in current AI development is the reliance on massive, uncurated data from across the internet. I mentioned this earlier when I spoke about our loss of control over the narrative. Because these models need a huge quantity of data, they are inherently vulnerable to being poisoned by the very content they crawl.

Democratic societies need to rethink the foundation of their digital systems. A nation that builds a digitized knowledge base will have the best foundation to address these future challenges. I am not talking about simply scanning documents. I mean representing information in a way that can be consumed and verified by both machines and humans.

This requires a different governance response. Traditional disinformation policies focus on the output of the machine. We need to focus on the input. By creating a curated, national-level source of truth that companies are encouraged or forced to use, we protect our future AI models from being groomed by foreign interests. Without this, we are effectively allowing our own technology to be used as a weapon against our own values.
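
As one concrete reading of 'focus on the input', the sketch below shows a single hygiene step a training-data pipeline could apply before crawled text reaches a model. It is an added illustration, not a described system; the allow-list domain, the record format, and the cross-domain threshold are assumptions. The idea is simply that content seeded in bulk across many sites to reach crawlers is treated with suspicion, while curated sources pass through.

```python
# Hypothetical sketch of an input-side filter against "LLM grooming":
# before crawled documents enter a training corpus, drop texts whose
# (near-)identical body appears across many distinct domains, unless the
# domain belongs to a curated allow-list.
import hashlib
from collections import defaultdict

CURATED_DOMAINS = {"curated-repository.example"}  # placeholder allow-list

def fingerprint(text: str) -> str:
    """Exact-duplicate fingerprint; real pipelines would add near-duplicate
    detection (e.g. MinHash) on top of this."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def filter_crawl(docs: list[dict], max_domains_per_text: int = 3) -> list[dict]:
    """docs is a list of {"domain": str, "text": str} records."""
    domains_by_fp: dict[str, set[str]] = defaultdict(set)
    for doc in docs:
        domains_by_fp[fingerprint(doc["text"])].add(doc["domain"])
    kept = []
    for doc in docs:
        widely_reposted = len(domains_by_fp[fingerprint(doc["text"])]) > max_domains_per_text
        if doc["domain"] in CURATED_DOMAINS or not widely_reposted:
            kept.append(doc)
    return kept
```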

Q. Proposed responses to AI-enabled influence range from provenance and watermarking to platform ranking reforms, decentralised identity, and more ambitious ideas around prosocial or resilience-oriented architectures. If you had to prioritise a small number of pragmatic interventions over the next few years, what would matter most at the model layer, the platform layer, and the institutional layer — and where do commercial incentives pose the biggest obstacle?

A: At the institutional layer, we must build digital environments where information is accessed with complete trust. This requires transparent processes that allow for verification if needed, effectively offloading the cognitive load from the citizen.

At the platform layer, one priority could be a move away from 'identity policing' and toward transparency of intent. Platforms could provide users with tools that flag the rhetorical framing of content. Instead of just trying to delete bots, they could empower users to recognize when a narrative is being artificially pushed or synthetically amplified. This approach would offer a way to manage the information environment without resorting to total surveillance.

The biggest obstacle is found at the model and commercial layers. After massive investments in development, companies feel pressured to justify the cost by attributing reasoning capabilities to these models that they simply do not possess. This has led to a concerning push to integrate these systems into education. We are giving children tools that are not based on curated data, but on a 'reasoning' narrative designed to sell a product.

I am not suggesting we discard these new capabilities. I am optimistic about the future of AI, and we may eventually see true forms of machine reasoning. However, even if we do, we must maintain accountability. If we integrate unaccountable systems into decision-making, we will be forced to redefine the very concept of democracy. Our priority must be ensuring that any tool used to shape our society is anchored in a traceable and curated foundation.
