The one question everyone should be asking after OpenAI’s deal with the Pentagon

Vox
Sam Altman, CEO of OpenAI, swooped in to sign a deal with the Pentagon right after Anthropic was blacklisted. | Al Drago/Bloomberg via Getty Images

American AI companies love to say that the US must win the AI arms race, or China will. 

Anthropic, OpenAI, Google, Microsoft, and Meta have all invoked the threat of a Chinese victory to justify speeding ahead on AI development, seemingly no matter what. The argument is simple: Whoever pulls ahead in building the most powerful AI could be the global superpower for a long, long time. China’s authoritarian government suppresses dissent, surveils its citizens, and answers to no one. We cannot let that model win.

And to be clear — we shouldn’t. The Chinese Communist Party’s human rights abuses are real and horrific, and AI technologies like facial recognition have made them worse. We should be scared of a scenario where that becomes the norm. 

But what if authoritarian rule that uses tech to surveil people in alarming ways is already becoming the norm in the US? If America is shape-shifting into the bogeyman it critiques, what happens to the case for racing ahead on AI?

This is the question everyone should be asking now that the Pentagon has blacklisted Anthropic — and embraced its rival, ChatGPT-maker OpenAI, which was more willing to accede to its demands. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent. Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. They do not have any editorial input into our content.)

The US Department of Defense is already using AI powered by private companies for everything from logistics to intelligence analysis. That has included a $200 million contract with Anthropic, which makes the chatbot Claude. But after the US used Claude in its January raid in Venezuela, a dispute erupted between Anthropic and the Pentagon.

The two red lines Anthropic insisted on in its contract with the Defense Department — that its AI shouldn’t be used for mass domestic surveillance or fully autonomous weapons — represent such basic protections that they should have been uncontroversial. And yet the Pentagon threatened that it would either force Anthropic to submit to full and unfettered use of its tech, or else name Anthropic a supply chain risk, which would mean that any external company that also works with the US military would have to swear off using Anthropic’s AI for related work.

When Anthropic didn’t back down on its requirements, Defense Secretary Pete Hegseth followed through on the latter threat — an unprecedented move, given that the designation has previously been reserved for foreign adversaries like China’s Huawei, not American companies.

As a journalist who’s spent years reporting on China’s use of AI to surveil and repress Uyghur Muslims, I found that the Pentagon’s threats reminded me of nothing so much as China’s own policy of “military-civil fusion.” That policy involves compelling private tech companies to make their innovations available to the military, whether they want to or not. Wittingly or unwittingly, Hegseth seemed to be borrowing directly from Beijing’s playbook.

“The Pentagon’s threats against Anthropic copy the worst aspects of China’s military-civil fusion strategy,” Jeffrey Ding, who teaches political science at George Washington University and specializes in China’s AI ecosystem, told me. “China’s actions to force high-tech private companies into military obligations may lead to short-term technology transfer, but it undermines the trust necessary for long-term partnerships between the commercial and defense sectors.”

To be clear, America is not the same as China. After all, Anthropic was able to freely voice its opposition to the Pentagon’s demands, and the company says it’ll sue the US government over the blacklisting, which would be unthinkable for a Chinese firm in the same situation. But the US government’s embrace of authoritarian conduct is undeniable. 

“Racing” to build the most powerful AI was always a dangerous game; even AI experts building these systems don’t understand how they work, and the systems often don’t behave as intended. But it’s even more dangerous to try building that powerful AI under the Trump administration, which is increasingly proving itself happy to bully American companies in order to preserve the option of using AI for mass surveillance and weapons that kill people with no human oversight.

Those who are still bought in on the idea that the US must win the AI race at all costs should now be asking: What’s the point of the US winning if the government is going to create a China-like surveillance state anyway?

At least one of the major AI companies is not taking this question seriously.  

What’s really in OpenAI’s deal with the Pentagon — and why many are now boycotting ChatGPT

OpenAI announced that it had struck a deal to deploy its AI models in the Pentagon’s classified network — just hours after the Pentagon blacklisted Anthropic. 

This was extremely confusing.

Sam Altman, the CEO of OpenAI, had claimed that he shares Anthropic’s red lines: no mass surveillance of Americans and no fully autonomous weapons. Yet somehow Altman managed to cut a deal that, by his account, didn’t compromise either of them. Apparently, the Pentagon had no problem with that.

How is that possible? Why would the Pentagon agree to OpenAI’s terms if they’re really the same as Anthropic’s?

The answer is that they’re not the same. Unlike Anthropic, OpenAI acceded to a key demand of the Pentagon’s — that its AI systems can be used for “all lawful purposes.” On the face of it, that sounds innocuous: If some type of surveillance is legal, then it can’t be that bad, right?

Wrong. What many Americans don’t know is that the law just has not come close to catching up to new AI technology and what it makes possible. Currently, the law does not forbid the government from buying up your data that’s been collected by private firms. Before advanced AI, the government couldn’t do all that much with this glut of information because it was just too difficult to analyze all of it. Now, AI makes it possible to analyze data en masse — think geolocation, web browsing data, or credit card information — which could enable the government to create predictive portraits of everyone’s life. The average citizen would intuitively categorize this as “mass surveillance,” yet it technically complies with existing laws.

For Anthropic, the collection and analysis of this sort of data on Americans was a bridge too far. This was reportedly the main sticking point in its negotiations with the Pentagon.

Meanwhile, take a look at an excerpt of OpenAI’s contract with the Pentagon, and you can see in the first sentence that it is allowing the Pentagon to use its AI for “all lawful purposes”: 

You might be wondering: What are all those other clauses that appear after the first sentence? Do they mean your fundamental rights will be protected?

Altman and his colleagues certainly tried to give that impression. But many experts have pointed out that they don’t guarantee that at all. As one University of Minnesota law professor wrote: 

In fact, as several observers noted, the contract clauses call to mind what an Anthropic spokeswoman said about updated wording the company had received from the Department of Defense at a late stage in their negotiations: “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” she said.

OpenAI did get some assurances into the contract; the company’s blog post says it’ll have the ability to build in technical guardrails to try to ensure its own red lines are respected, and it’ll have “OpenAI engineers helping the government, with cleared safety and alignment researchers in the loop.” But it’s unclear how much good that’ll do, given that the impact of technical safeguards is limited and the language doesn’t guarantee a human in the loop when it comes to autonomous weapons.

“In terms of safety guardrails for ‘high-stake decisions’ or surveillance, the existing guardrails for generative AI are deeply lacking, and it has been shown how easily compromised they are, intentionally or inadvertently,” Heidy Khlaaf, the chief AI scientist at the nonprofit AI Now Institute, told me. “It’s highly doubtful that if they cannot guard their systems against benign cases, they’d be able to do so for complex military and surveillance operations.”

What’s more, “Nothing in the contractual language released up to this point seems to provide enforceable red lines beyond having a ‘lawful purpose,’” said Samir Jain, the vice president of policy at the Center for Democracy & Technology. “Embedding OpenAI engineers does not solve the problem. Even if they are able to identify and flag a concern, at most, they might alert the company, but absent a contractual prohibition, the company could not have any right to require the Pentagon to halt the activity at issue.” 

OpenAI and Anthropic did not respond to requests for comment. OpenAI later said it was amending the contract to add more protections around surveillance.

Perhaps if Altman did not already have a reputation for misleading people with vague or ambiguous language, AI watchers would be less alarmed. But he does have that reputation. When the OpenAI board tried to fire Altman in 2023, it famously said he was “not consistently candid in his communications,” which sounds like board-speak for “lying.” Others with inside knowledge of the company have likewise described duplicity. 

Even Leo Gao, a research scientist employed by OpenAI, posted:

For now, only a minuscule portion of OpenAI’s contract with the Pentagon has been made public, so we can’t say for certain what guarantees it does or doesn’t contain. And some aspects of this story remain murky. How much of the Pentagon’s decision to replace Anthropic with OpenAI came down to the fact that OpenAI’s leaders have donated millions of dollars to support President Donald Trump, while Anthropic CEO Dario Amodei has refused to bankroll him or give the Pentagon carte blanche with the company’s AI, refusals that earned him Hegseth’s dislike and Trump’s insistence that he leads “A RADICAL LEFT, WOKE COMPANY”?

While these uncertainties linger, the public mood has turned against OpenAI with nearly the speed of the tech itself. A public campaign called QuitGPT launched last month and has gained immense traction since the Pentagon clash, urging those who feel betrayed by OpenAI to boycott ChatGPT. By the group’s count, over 1.5 million people have already taken action as part of the boycott.

It’s no coincidence that Anthropic’s chatbot, Claude, became the No. 1 most downloaded app in the App Store over the weekend, with users seeing it as a better alternative to ChatGPT.

Historian and bestselling author Rutger Bregman, who has studied the boycott movements of the past, was one of those who felt fired up upon seeing the QuitGPT campaign. He has since become its informal spokesperson. 

“What effective boycotts have in common, in my view, is that they’re narrow, they’re targeted, and they’re easy,” Bregman told me. “I looked at the ChatGPT boycott and was like: This is exactly it! This is the first opportunity to start a massive consumer boycott in the AI era, and to send an incredibly powerful signal to the whole ecosystem, saying, ‘Behave, or you could be next.’” He suggests switching over to the chatbot of any other AI company, except Elon Musk’s Grok.

It’s worth noting that Anthropic itself is no dove. After all, the company has a deal with the AI software and data analytics company Palantir, which is infamous for powering operations of Immigration and Customs Enforcement (ICE). Anthropic is not opposed to all forms of mass surveillance, nor does it seem to be categorically opposed to using its AI to power autonomous weapons (its current refusal is based on the fact that its AI systems can’t yet be trusted to do that reliably). What’s more, it recently dropped its key promise not to release AI models above certain capability thresholds unless it can guarantee robust safety measures for them in advance. And as an employee of Anthropic (or Ant, as it’s sometimes known) pointed out, the company was happy to sign a contract with the Department of Defense in the first place:

Still, many believe that if you’re going to use a chatbot, Anthropic’s Claude is morally preferable to OpenAI’s ChatGPT — especially in light of the recent clash at the Pentagon.

What else can be done to ensure AI isn’t used for mass surveillance or fully autonomous weapons?

There was a time when some AI experts urged an alternative to a US-China AI arms race: What if Americans who care about AI safety tried to coordinate with their Chinese counterparts, engaging in diplomacy that could ensure a safer future for everybody?

But that was a couple of years ago — eons, in the world of AI development. It’s rarer to hear that option floated these days.

Some experts have been calling for an international treaty. A dozen Nobel laureates backed the Global Call for AI Red Lines, which was presented at the UN General Assembly last September. But so far, a multilateral agreement hasn’t materialized.

In the meantime, another option is gaining prominence: solidarity among the tech workers at the major AI companies.

An open letter titled “We Will Not Be Divided” has garnered more than 900 signatures from employees at OpenAI and Google over the past few days. Referring to the Pentagon, the letter says, “They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure.” Specifically, the letter urges OpenAI and Google leadership to “stand together” in continuing to refuse to let their AI systems be used for domestic mass surveillance or fully autonomous weapons.

Another open letter — which has over 175 signatories from across the US tech industry, among them founders, executives, engineers, investors, and OpenAI employees — urges the Department of Defense to withdraw the supply chain risk designation against Anthropic and stop retaliating against American companies. It also urges Congress “to examine whether the use of these extraordinary authorities against an American technology company is appropriate” — a tactful way of suggesting, perhaps, that the Pentagon’s moves were an abuse of power.

Federal regulations and global treaties would be a much stronger defense against unsafe and unethical AI use than relying on the goodwill of individual technologists. But for the moment, cross-company coordination is at least a start — a way to push back against Pentagon pressure that would lead, if left unchecked, to something America keeps insisting it’s nothing like.
