
OpenAI CEO Sam Altman defends decision to strike Pentagon deal after Anthropic blacklisting, admits ‘optics don’t look good’

OpenAI CEO Sam Altman and other senior executives took to social media over the weekend to defend their decision, announced on Friday, to strike a deal with the Department of War (DOW) that allows the company’s models to be used in classified military networks. The deal came hours after archrival Anthropic turned down a similar agreement with the Pentagon, and the Trump administration said it was labeling Anthropic a “supply-chain risk.”

OpenAI faced vocal backlash for agreeing to the Pentagon deal. Earlier in the week, Altman had voiced support for Anthropic’s position that it would not accept a Pentagon contract that did not contain explicit prohibitions on its AI technology being used for mass surveillance of U.S. citizens or being incorporated into autonomous weapons that can decide to strike targets without human oversight.

Some of these critics have even started a campaign to persuade ChatGPT users to abandon the chatbot and switch to Anthropic’s Claude. There was some evidence the campaign was having an effect, too: Claude surged past ChatGPT to become the most downloaded free app in Apple’s App Store. The sidewalk outside OpenAI’s offices in San Francisco was also covered with chalk graffiti attacking its decision to cut a deal with the Pentagon, while graffiti outside Anthropic’s offices largely praised its refusal of a contract that did not prohibit the use of its AI models for mass surveillance and autonomous weapons.

Some of Altman’s and OpenAI’s social media push over the weekend seemed aimed at quelling concerns among the company’s own employees over the Pentagon contract. Many rank-and-file OpenAI employees had signed an open letter last week supporting Anthropic’s refusal to accede to the Pentagon’s demands and opposing its decision to designate Anthropic a supply-chain risk. (Altman also said over the weekend that he disagreed with the supply-chain risk designation.)

And at least one OpenAI employee publicly questioned whether the company’s contract with the Pentagon provided robust safeguards. Leo Gao, an OpenAI employee who works on making sure increasingly powerful AI models stay aligned with user intentions and human values, criticized his employer on X for agreeing to let the DOW use its technology for “all lawful purposes” and then engaging in what Gao called “window dressing” to make it seem like there were further restrictions on what the Pentagon could do with OpenAI’s GPT models.

Altman admitted in an “Ask Me Anything” session on social media platform X on Saturday night that the company’s deal with the Pentagon “was definitely rushed, and the optics don’t look good.”

But he insisted that OpenAI moved quickly to make the deal because it wanted to de-escalate the increasingly heated situation between the U.S. military and Anthropic. The fight potentially threatened to damage the AI industry as a whole, in part by raising the prospect of the U.S. government nationalizing an AI lab or at least using its power to coerce a private company to deliver technology on its preferred terms.

“If we are right and this does lead to a de-escalation between the DOW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry,” Altman said. “If not, we will continue to be characterized as rushed and uncareful.”

He added that “a good relationship between the government and the companies developing this technology is critical over the next couple of years.”

And he said he was opposed to Anthropic being labeled a supply-chain risk. “Enforcing the [supply-chain risk] designation on Anthropic would be very bad for our industry and our country,” Altman said. “To say it very clearly: I think this is a very bad decision from the DOW, and I hope they reverse it. If we take heat for strongly criticizing it, so be it.”

OpenAI said that it had found a compromise approach that preserved the same limitations on surveillance and autonomous weapons while also acceding to the military’s wish not to be bound by contractual constraints on how it uses the AI tech it purchases. The company said that limits on how its AI can be used rest both on references to existing law that it has put in the DOW contract and on technical limitations on what its AI models will be able to do.

It said the DOW agreed to let it build these technical limitations, which will include classifier systems that screen the prompts DOW users feed to OpenAI’s models and refuse any the classifier deems likely to violate OpenAI’s redlines. They may also include fine-tuning of OpenAI’s models so that they won’t easily comply with instructions that violate the two redlines.
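The prompt-screening layer described above can be illustrated with a minimal sketch. Everything here is a hypothetical illustration, not OpenAI’s actual system: the redline categories, the keyword-based classifier, and the function names are all placeholders standing in for a real safety classifier and model call.

```python
# Hypothetical sketch of a prompt-screening gate of the kind described
# in the article. The redline categories, keyword classifier, and model
# call are illustrative placeholders, not OpenAI's implementation.

REDLINES = ("mass_surveillance", "autonomous_targeting")

def classify_prompt(prompt: str):
    """Toy stand-in for a safety classifier: return a redline category
    if the prompt contains an obviously related phrase, else None."""
    keywords = {
        "mass_surveillance": ["track all citizens", "bulk location data"],
        "autonomous_targeting": ["strike without human approval"],
    }
    text = prompt.lower()
    for category, phrases in keywords.items():
        if any(p in text for p in phrases):
            return category
    return None

def answer_with_model(prompt: str) -> str:
    """Placeholder for the actual model call."""
    return f"[model response to: {prompt}]"

def gated_model_call(prompt: str) -> str:
    """Refuse prompts the classifier flags; otherwise pass through."""
    flagged = classify_prompt(prompt)
    if flagged in REDLINES:
        return f"REFUSED: request appears to violate the '{flagged}' redline"
    return answer_with_model(prompt)
```

A production system would use a trained classifier rather than keyword matching, but the control flow, which is to classify first and only forward prompts that clear the redlines, is the pattern the article describes.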

OpenAI says its contract attempts to bind Pentagon to current law

OpenAI published a portion of its contract with the DOW in which it said it agreed that its technology could be used “for all lawful purposes” but which also included specific references to existing U.S. laws and Department of War policy documents that establish limitations on the surveillance of U.S. citizens and on how autonomous weapons can be deployed.

Katrina Mulligan, OpenAI’s head of national security partnerships and a former chief of staff to the secretary of the Army, said during the Ask Me Anything on X that referencing these existing laws and policies provided more assurance that the Pentagon would not later violate the company’s redlines than some critics suggested. “We accepted the ‘all lawful uses’ language proposed by the department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract,” she said. “And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.”

Some legal experts pushed back on Mulligan’s position, at least as far as DOW policies on autonomous weapons are concerned. Charles Bullock, a senior fellow at the Institute for Law & AI, said on X that “DOW can, of course, change its own policies whenever it wants,” and that the contract language OpenAI released does not require the DOW to follow the existing policy in perpetuity. But he said that the contract did seem to bind DOW to following existing interpretations of existing laws governing mass surveillance of U.S. citizens.

Bullock also said it was impossible to know how ironclad the limitations contained in OpenAI’s contract are without assessing the entire contract, not just the small section OpenAI made public. OpenAI has said government rules bar it from publishing the entire contract because it is for a classified system.

A debate over the definition of ‘mass surveillance’

Many of those skeptical of OpenAI’s agreement with the Pentagon noted that the term “mass surveillance” is not well-defined, and they questioned OpenAI executives on what would happen if military intelligence agencies attempted to use its AI models to analyze commercially available data (such as cell phone location data or data from fitness apps) that could be assembled at scale to conduct surveillance of U.S. citizens. The Defense Intelligence Agency is believed to have purchased such data, and its use remains a legal gray area. According to a story in The Atlantic, Anthropic was particularly concerned about the Pentagon using its technology for this kind of analysis, and its insistence on curtailing that use case was one of the major stumbling blocks in its negotiations with the DOW.

“We can’t protect against a government agency buying commercially available datasets, but our contract incorporates a prohibition on mass domestic surveillance as a binding condition of use,” Mulligan said during the AMA.

She also said that OpenAI’s decision to rely on a multipronged approach that included technical systems to limit what the Pentagon could do provided a more robust solution than simply relying on contractual language, which she said seemed to be Anthropic’s primary approach. She said Anthropic had not been able to lean on this technical solution because it was already providing versions of its AI models to the military that had some of the usual safeguards removed.

“Anthropic has primarily been concerned with usage policies, which is because their existing classified deployments involve reduced or removed safety guardrails (making usage policies the primary safeguards in national security deployments),” she said. “Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. That’s what we pursued in our negotiations, and that’s why we think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”

Another OpenAI executive, Boaz Barak, who works on AI alignment and safety, also represented the company in the AMA and criticized Anthropic for fixating so heavily on contractual language and not other kinds of safeguards. “I get the impression that folks at Anthropic had unrealistic expectations for the contract stuff,” he said in response to a question from former OpenAI policy chief Miles Brundage, noting that tech companies were always going to be somewhat at the mercy of how the DOW interpreted terms in the contract.

Who should decide how AI is used?

Altman said that many of the questions in the AMA session touched on the issue of whether AI efforts should be nationalized. The OpenAI CEO noted, “It has seemed to me for a long time it might be better if building AGI [artificial general intelligence] were a government project.” But he added, “It doesn’t seem super likely on current trajectory.”

Altman also said he was surprised by how many of OpenAI’s critics seemed to have more faith in unelected tech executives making decisions about the appropriate use of AI rather than government officials who were, at least in theory, accountable to Congress and ultimately voters. 

“I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the Constitution. I am terrified of a world where AI companies act like they have more power than the government,” Altman said on X. “I would also be terrified of a world where our government decided mass domestic surveillance was okay.”

This story was originally featured on Fortune.com
