
Trump’s AI-Regulation Ban Is a Threat to National Security

On Monday, Donald Trump announced on Truth Social that he would soon sign an executive order prohibiting states from regulating AI. “You can’t expect a company to get 50 Approvals every time they want to do something,” the president wrote. “THAT WILL NEVER WORK!” This followed an ultimately unsuccessful attempt to slip sweeping preemption language into the National Defense Authorization Act, which would have nullified existing state laws regulating the sector.

Proponents of AI preemption equate competitiveness with deregulation, arguing that state-level guardrails hamper innovation and weaken the United States in its technological competition with China. The reality is the opposite. Today’s most serious national-security vulnerabilities involving AI stem not from too much oversight, but from the absence of it. AI systems already underpin essential functions across our economy and national-security apparatus, including airport routing, energy-grid forecasting, fraud-detection systems, real-time battlefield data integration, and an expanding range of defense-industrial-base operations. These systems create extraordinary operational advantages, but they also present concentrated, high-impact failure points.

Every one of these points is an attractive target. Adversaries know that when crucial infrastructure depends on opaque, unregulated algorithms, a single manipulated output can shut down power in an entire region, destabilize financial markets, or degrade military readiness in ways that are extremely difficult to detect in real time. The Pentagon has repeatedly warned that state-of-the-art models remain acutely vulnerable to manipulation through tactics such as data poisoning, in which hostile actors corrupt the information used to train a system, and adversarial prompting, in which carefully crafted inputs bypass safeguards and force models into dangerous behavior. According to U.S. intelligence reporting, China, Russia, Iran, and North Korea are investing heavily in model theft, insider recruitment, and targeted penetration of AI-development pipelines precisely because the United States has left this terrain largely undefended.

The same actors are already conducting AI-enabled disinformation and cognitive-warfare campaigns designed to distort elections, fracture alliances, and erode civic trust. In 2024 alone, foreign adversaries pushed more than 160 distinct false narratives to Americans across websites and social-media platforms, many reinforced with convincing synthetic video and audio. These campaigns thrive on gaps created by inconsistent testing and the absence of enforceable security standards.

[Matteo Wong: Chatbots are becoming really, really good criminals]

The threat is now moving from influence operations into active cyber conflict. In just the past several weeks, Google disclosed that hackers had used AI-powered malware in an active cyberattack, and Anthropic reported that its models had been used by Chinese state-backed actors to orchestrate a large-scale espionage operation with minimal human intervention. The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency.

Yet instead of confronting these harms, major technology companies are spending unprecedented sums on a coordinated lobbying campaign to avoid or overturn the very safeguards that would prevent foreseeable harms. Their strategy is straightforward: secure broad federal preemption that immobilizes the states, then delay and weaken meaningful regulation at the federal level.

This is a tragically myopic approach. Contrary to the narrative promoted by a small number of dominant firms, regulation does not have to slow innovation. Clear rules would foster growth by hardening systems against attack, reducing misuse, and ensuring that the models integrated into defense systems and public-facing platforms are robust and secure before deployment at scale.

Critics of oversight are correct that a patchwork of poorly designed laws can impede that mission. But they miss two essential points. First, competitive AI policy cannot be cordoned off from the broader systems that shape U.S. stability and resilience. The sorts of issues that state legislators are trying to tackle—scams, deepfake impersonation of public officials and candidates, AI-driven cyberattacks, whistleblower protections—are not “social issues” separate from national defense; they are integral components of it. Weaknesses in any of these areas create soft targets that foreign actors can use to disrupt essential services and destabilize institutions. These pressures accumulate over time, degrading the shared national identity and operational readiness that underpin American power. Treating these domains as disconnected from a national-security-oriented AI strategy reflects a fundamental misunderstanding of how modern competition works.

[Matteo Wong: Donald Trump is fairy-godmothering AI]

Second, states remain the country’s most effective laboratories for developing and refining policy on complex, fast-moving technologies, especially in the persistent vacuum of federal action. Congress has held scores of hearings, launched a task force, and introduced more than a hundred AI-related bills, yet has failed to pass anything approaching a comprehensive framework.

In the meantime, states are filling the void: testing approaches, debating policies, and producing real-world evidence far more quickly than Congress can. This iterative, decentralized process is exactly how the United States has historically advanced both innovation and security. Companies can choose to collaborate constructively—or, if they prefer, decide not to operate in a given state. That tension is productive. What is not productive is a top-down preemption regime written to freeze state experimentation before any federal standards exist. Federal preemption without federal action is not strategy; it is self-inflicted paralysis.

The solution to AI’s risks is not to dismantle oversight but to design the right oversight. American leadership in artificial intelligence will not be secured by weakening the few guardrails that exist. It will be secured the same way we have protected every crucial technology touching the safety, stability, and credibility of the nation: with serious rules built to withstand real adversaries operating in the real world. The United States should not be lobbied out of protecting its own future.