The AI Policy Questions in the Underlying Issues of the Anthropic-Pentagon Case

Jennifer Huddleston

The Department of War’s decision to label Anthropic a supply chain risk raises significant concerns related to government power over private business and the First Amendment. The litigation involving the designation is rightly focused on questions of due process and the First Amendment. As Judge Lin notes in the preliminary injunction against the government, disputes over potential government surveillance, autonomous lethal weapons, and the extent of all lawful uses are “not for this Court to answer in this litigation.”

The court is correct. Such questions are clearly within Congress’s authority under Article I of the Constitution. The facts underlying this dispute underscore the need for an appropriate policy framework to support innovation and provide clear guidance to innovators, citizens, and the government for protecting civil rights and civil liberties.

Civil Liberties, Domestic Surveillance, and AI

AI has fantastic potential to analyze large amounts of data better than any human reviewer could, giving it beneficial use cases across both the public and private sectors. It can help detect cancer earlier, predict and respond to natural disasters, and help Americans start new businesses. But as Uncle Ben says in the 2002 version of Spider-Man, “With great power comes great responsibility.” 

Given the potential power of this technology, there are scenarios where its impact on civil liberties must be considered. When it comes to government use of these powerful tools, we must ensure that appropriate guardrails are in place to protect civil liberties while minimizing the impact on beneficial uses and the overall development of the technology.

I would suggest four common-sense steps that could be taken in this regard:

  1. Provide clarity and guardrails around what government data can be used for and how that data can be used.
  2. Close existing loopholes that allow the government to buy bulk data via data brokers.
  3. Engage in policy reforms for existing programs, like FISA Section 702, that could be abused or expanded to result in domestic surveillance, particularly in the AI era.
  4. Establish appropriate approval processes, such as subpoenas and warrants for use of technology by law enforcement or other agencies, in line with existing Fourth Amendment doctrine and other civil liberties.

None of these restricts the development of AI itself; rather, they restrain the government’s deployment of it or clarify existing law in light of new technology to prevent potential abuse. Additionally, these data practices would be wise beyond AI alone, as they are technology-neutral for both current and future technologies. 

The case for FISA reform and other surveillance-related protections has been made well elsewhere by my colleague Patrick Eddington. While this issue is particularly important given the debate around reauthorization, rather than engage in a full discussion here, I would recommend those interested in the topic consider his robust body of work on the topic.

Improving Data Governance and Protection

While an individual can decide whether or not they want to give their data to a particular private company, there are many situations where that is not the case with the government. As a result, particularly given the rise of AI, many are concerned about the way data could be abused by bad actors or the government itself. 

Existing laws on census and statistical data provide examples of what guardrails for the government’s own data could look like. Under the Confidential Information Protection and Statistical Efficiency Act (CIPSEA), enacted in 2002, federal government data collected for statistical purposes may not be disclosed in a way that would identify an individual without that individual’s consent. There are also protections around various data sets, like Census data and other statistics, that limit their use by the government to the purposes for which they were collected.

Now, government data sets can have positive use cases for technological development, the public, and researchers. The above should not be seen to limit such availability. Instead, such guardrails could help ensure that data collected by one agency for a legitimate purpose cannot be abused by another agency for an unintended purpose. At the same time, this can still allow the use of such data in the public interest, including by those developing AI technologies without access to the resources of large companies.

Clarifying Fourth Amendment Protections in the AI Era

Technology can be incredibly useful, including in law enforcement. However, the use of certain technologies by government actors without appropriate restrictions can raise civil liberty concerns. These concerns are not unique to AI, but they have amplified the need for clear guardrails in the digital age.

Government attempts to circumvent existing restrictions in order to obtain data they would not otherwise have access to predate AI. But the potential impact of AI makes it more important to ensure the loopholes that enable such circumvention are closed. One notable example of this is the “data broker loophole,” which allows large amounts of data not otherwise obtainable by the government to be bought from data brokers. Such acquisition of data goes beyond the scope of a single investigation to include the data of those not at all connected to the underlying law enforcement concern. 

These checks should also be in place for the use of certain technologies that can be helpful but also raise additional civil liberties concerns. For example, existing laws require court-approved warrants supported by sufficient probable cause for monitoring emails or conducting wiretaps. Such guardrails lessen the risk of abuse while ensuring law enforcement can access these technologies in appropriate cases. Applying this sort of procedural requirement to the data broker loophole would protect civil liberties and help ensure critical evidence is properly obtained and admissible. Several states have already taken steps to strike such a balance regarding law enforcement’s use of facial recognition. In light of ongoing policy concerns, appropriate limitations on federal law enforcement should be considered and codified to protect civil liberties from technological uses that would not be possible in an analog era.

Autonomous Lethal Weapons

For many, AI raises concerns about the potential of killer robots akin to those in the movie The Terminator. As early as 2018, thousands of leading AI scientists called for a pre-emptive ban on the use of the technology in autonomous lethal weapons or the deployment of weapons of mass destruction. Policy on this topic must balance a range of concerns while being realistic about where the technology currently stands. 

As Anthropic indicated in its statement on February 26, 2026, the reliability of the technology is such that there are still significant concerns over whether it would be able to power autonomous lethal weapons without putting civilians or even America’s own warfighters at risk. Humans are also fallible, of course. But for such a significant decision, such as the taking of a human life, it is reasonable to expect that an accountable human is ultimately involved in the decision. 

Policymakers have considered this in some defense decisions. The 2025 National Defense Authorization Act (NDAA) included language to preserve “meaningful human control” in nuclear launch decisions.

Policy must not, however, be so broad as to ban the use of AI in defense situations more generally. For example, AI could play roles in alerting personnel to potential security breaches, ensuring safe handling and manufacturing through automation, and analyzing potential combat needs. Many technologies, like drones and autonomous vehicles, rely on AI applications and may be used in defense or combat in the future, just as they are in civilian settings. With this in mind, a policy framework addressing the legitimate concerns around potential autonomous lethal weapons should focus on the specific harm of the decision to use lethal force and the accompanying risk to civilians or others. An overly broad policy could prevent appropriate and even already common uses of automation and AI that are important to maintaining readiness and success in defense.

Conclusion

There are significant policy questions raised by the situation between Anthropic and the Pentagon. These policy questions, however, should be appropriately decided not by bureaucrats or the courts, but by Congress. While we are still in the early days of AI, these questions are becoming increasingly pressing for innovators, consumers, and even the government itself.
