
Grok’s usage is so low that Elon Musk can sell compute to Anthropic

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m focusing on Elon Musk’s decision to lease the computing capacity at SpaceX’s Colossus 1 data center to Anthropic. I also look at what a new Atlantic exposé on David Sacks says about Silicon Valley’s alliance with Trump, and a benchmark that’s stumping top AI coding agents.

Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

Why Grok is selling compute to Anthropic

While everybody else in the AI space scrambles to lock down computing power, xAI’s Grok models are apparently being used so little relative to peers that the company can sell off the capacity of entire data centers, “colossal” ones at that.

Anthropic said Tuesday it had signed an agreement with SpaceX to use all the computing capacity in SpaceX’s Colossus 1 data center in Memphis. (SpaceX owns xAI.) The deal will give Anthropic access to more than 300 megawatts of computing capacity, or more than 220,000 NVIDIA GPUs. Anthropic says the additional capacity will be used to serve its Claude Pro ($20 per month) and Claude Max ($100 to $200 per month) subscribers.

SpaceX CEO Elon Musk says he gave his much-sought moral stamp of approval to Anthropic. “By way of background for those who care, I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed,” Musk said in an X post. “Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector.”

Musk says xAI had already shifted its training workloads to Colossus 2, freeing up Colossus 1 for Anthropic’s use. Anthropic says it will use the facility primarily for inference, or the processing required to respond to user prompts in real time.

The partnership could eventually extend beyond Earth. Anthropic says it has also been discussing plans with Musk and SpaceX to develop multiple gigawatts of orbital AI compute capacity. Space-based AI data centers hold obvious appeal because the cost of cooling servers would essentially disappear. But major technical hurdles remain, especially around reliably transmitting massive amounts of data between orbiting infrastructure and Earth.

Musk’s willingness to arm Anthropic with vital computing power may also have something to do with his hatred of Anthropic rival OpenAI, and his dislike of OpenAI founder Sam Altman. Musk sued OpenAI, claiming the company’s leadership betrayed its original nonprofit mission to develop AGI for the benefit of humanity rather than for profit.

Trump’s bargain with Silicon Valley on AI may be weakening

The Atlantic’s George Packer, in a new article about former White House “crypto and AI czar” David Sacks, sheds more light on how and why Sacks and other Valley elites went full MAGA before the 2024 election. Now there are signs that the main thing Silicon Valley wanted in exchange for its support may be in jeopardy.

Silicon Valley’s preferred version of its MAGA conversion story is that influential VC Marc Andreessen met with representatives of the Biden administration and was told the administration intended to heavily regulate AI so that only a few big AI labs, and no startups, would be able to comply and survive. Andreessen said Biden wanted to “nationalize or destroy” Silicon Valley. He said Biden wanted to kill the entire cryptocurrency industry. He said he and his partner Ben Horowitz decided to support MAGA right after that meeting.

Biden officials dispute Andreessen’s account of what was said. But Andreessen’s version was enough to set a broader shift in motion among tech elites. Sacks held a fundraiser for Donald Trump in June 2024 in San Francisco’s wealthy Pacific Heights neighborhood. After talking with Trump at the event and on the All-In podcast, Sacks said: “All of his instincts are Let’s empower the private sector; let’s cut regulations; let’s make taxes reasonable; let’s get the smartest people in the country; let’s have peace deals; let’s have growth.”

What Sacks and others were really after was a promise of AI deregulation and more tax cuts. They got the tax cuts, and so far the Trump administration has worked hard to stifle government investigations or regulations targeting the tech industry. Some states have passed laws requiring government oversight, but the administration has been trying to preempt such laws or challenge them in court.

Packer suggests that Sacks, Andreessen, Horowitz, and other Valley elites may also share something in common with much of MAGA: They are white men witnessing a loss of status in society. “Andreessen was willing to pay high taxes and support liberal causes and candidates as long as he was regarded as a hero,” Packer writes.

But Silicon Valley’s fall from grace is not the fault of Democrats, Biden, or “wokism”; it’s the result of government and society slowly realizing that many Silicon Valley elites are not actually driven by idealistic notions of “making the world better.” Instead, they’ve repeatedly shown a willingness to unleash technologies they know may be harmful. The clearest example is Meta, which the government largely allowed to regulate itself while shielding it from many user lawsuits through Section 230, only to watch social media platforms contribute to disinformation, political polarization, and harms to children.

But nothing is permanent with Trump, as so many others have found out, and agreements that no longer provide immediate value can be quickly abandoned.

The White House announced this week that it's considering a requirement that government officials "vet" new AI models before they can be released. Team Trump was apparently spooked by two things. First, Anthropic, a company the administration recently declared a supply-chain risk, developed a model called Mythos that can identify software vulnerabilities at scale and devise ways to exploit them. Second, the tech industry's massive data center buildout is growing increasingly unpopular with parts of the MAGA base and could become a major GOP liability in the midterms.

Maybe tech elites and MAGA don’t mix quite as well as either side once thought.

Meet the new benchmark that’s soundly defeating coding agents

Perhaps the most consequential application of generative AI models so far has been software engineering, where agents generate code and increasingly make high-level architectural decisions. But how do we tell how good an AI software engineer really is? Until now, the industry has largely relied on benchmark tests such as SWE-Bench, which evaluate models on relatively well-defined tasks like fixing bugs or implementing a single feature. Now the developers behind SWE-Bench have introduced a much harder test called ProgramBench.

The benchmark is difficult because the AI agent has to reason strategically about the optimal architecture and programming language needed to reproduce the performance of each of the 200 test programs. Once an agent finishes building a codebase, the benchmark runs roughly 248,000 tests to measure how closely the recreated software matches the original behavior.

So far, all of the major models tested on ProgramBench, including Anthropic's Claude Opus 4.7, Google's Gemini 3 Pro, and OpenAI's GPT-5.4, have scored zero. In other words, none was able to fully complete the test builds, though several managed to complete portions of them.

The results suggest that current AI coding tools still are not advanced enough to make the kinds of architectural and systems-level decisions human software engineers routinely make when turning an idea into working software. The findings may also indicate that AI agents still struggle to apply abstract principles learned during training to entirely novel problems.

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
