AI’s biggest problem isn’t intelligence. It’s implementation

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

The AI ‘arms race’ may be more of an ‘arm-twist’

The big AI companies tell us that AI will soon remake every aspect of business in every industry. Many of us are left wondering when that will actually happen in the real world, when the so-called “AI takeoff” will arrive. But because there are so many variables, so many different kinds of organizations, jobs, and workers, there’s no satisfying answer. In the absence of hard evidence, we rely on anecdotes: success stories from founders, influencers, and early adopters posting on X or TikTok.

Economists and investors are just as eager to answer the “when” question. They want to know how quickly AI’s effects will materialize, and how much cost savings and productivity growth it will generate. Policymakers are focused on the risks: How many jobs will be lost, and which ones? What will the downstream effects be on the social safety net?

Business schools and consulting firms have turned to research to answer that question. One of the most consequential recent efforts was a 2025 MIT study, which found that despite spending between $30 billion and $40 billion on generative AI, 95% of large companies had seen “no measurable P&L [profit and loss] impact.”

More recent research paints a somewhat rosier picture. A study from the Wharton School found that three out of four enterprise leaders “reported positive returns on AI investments, and 88% plan to increase spending in the next year.”

My sense is that the timing of AI takeoff is hard to grasp because adoption is so uneven and depends heavily on the application. Software developers, for example, are seeing clear efficiency gains from AI coding agents, and retailers are benefiting from smarter customer-service chatbots that can resolve more issues automatically.

It also depends on the culture of the organization. Companies with clear strategies, good data, some PhDs, and internal AI enthusiasts are making real progress. I suspect that many older, less tech-oriented companies remain stuck in pilot mode, struggling to prove ROI.

Other studies have shown that in the initial phases of deployment, human workers must invest a lot of time correcting or training AI tools, which severely limits net productivity gains. Others show that in AI-forward organizations, workers do see substantial productivity improvements, but because of that, they become more ambitious and end up working more, not less.

The MIT researchers included an interesting disclaimer on their research results. Their sobering findings, they noted, did not reflect the limitations of the AI tools themselves, but rather the fact that organizations often need years to adapt their people and processes to the new technology.

So while AI companies constantly hype the ever-growing intelligence of their models, what ultimately matters is how quickly large organizations can integrate those tools into everyday work. The AI revolution is, in this sense, more of an arm-twist than an arms race. The road to ROI runs through people and culture. And that human bottleneck may ultimately determine when the AI industry, and its backers, begin to see returns on their enormous investments.

New benchmark finds that AI fails to do most digital gig work

AI companies keep releasing smarter models at a rapid pace. But the industry’s primary way of proving that progress—benchmarks—doesn’t fully capture how well AI agents perform on real-world projects. A relatively new benchmark called the Remote Labor Index (RLI) tries to close that gap by testing AI agents on projects similar to those given to remote contractors. These include tasks in game development, product design, and video animation. Some of the assignments, based on actual contract jobs, would take human workers more than 100 hours to complete and cost over $10,000 in labor.

Right now, some of the industry’s best models don’t perform very well on the RLI. In tests conducted late last year, AI agents powered by models from the top AI developers, including OpenAI, Anthropic, Google, and others, could complete barely any of the projects. The top-performing agent, powered by Anthropic’s Opus 4.5 model, completed just 3.5% of the jobs. (Anthropic has since released Opus 4.6, but it hasn’t yet been evaluated on the RLI.)
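
The RLI’s headline figure is a completion rate over a fixed set of contractor-style projects. The benchmark’s exact scoring pipeline isn’t detailed here, so purely as an illustration, here is a minimal sketch, assuming a per-project pass/fail review and a hypothetical ProjectResult schema, of how a number like “3.5% of jobs” (and a dollar-weighted variant) could be computed:

```python
from dataclasses import dataclass

@dataclass
class ProjectResult:
    """One gig-style project attempted by an AI agent (hypothetical schema)."""
    name: str
    human_hours: float       # estimated time a human contractor needed
    human_cost_usd: float    # what the original contract job paid
    completed: bool          # did the agent's deliverable pass review?

def summarize(results: list[ProjectResult]) -> dict:
    """Headline completion rate, plus the share of human labor value
    represented by the projects the agent actually finished."""
    total = len(results)
    finished = [r for r in results if r.completed]
    completion_rate = len(finished) / total if total else 0.0
    value_total = sum(r.human_cost_usd for r in results)
    value_captured = sum(r.human_cost_usd for r in finished)
    return {
        "completion_rate": completion_rate,   # 0.035 reads as "3.5% of jobs"
        "value_share": value_captured / value_total if value_total else 0.0,
    }

# Toy run: 2 of 57 hypothetical projects pass review (~3.5%).
if __name__ == "__main__":
    demo = [ProjectResult(f"job-{i}", 100.0, 10_000.0, i < 2) for i in range(57)]
    print(summarize(demo))
```

Whether the real benchmark weights projects equally or by their dollar value is a design choice this sketch only gestures at; the value_share line is an assumption for illustration, not the RLI’s actual metric.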

The test puts the current applicability of AI agents in a different light, and may temper some of the most bullish claims about agent effectiveness coming from the AI industry.

Silicon Valley’s pesky ‘principles’ re-emerge, irking the White House and Pentagon

The Pentagon and the White House are big mad at the safety-conscious AI company Anthropic. Why? Because Anthropic doesn’t want its AI used for the targeting of humans by autonomous drones, or for the mass surveillance of U.S. citizens.

Anthropic now has a $200 million contract allowing the use of its Claude chatbot and models by federal agency workers. It was among the first companies to get approval to work with sensitive government data, and the first AI company to build a specialized model for intelligence work. But the company has long had clear rules in its user guidelines that its models aren’t to be used for harm. 

The Pentagon believes that, after paying for the technology, it should be able to use it for any legal application. But acceptable use for AI is different from that for traditional software. AI’s potential for autonomy makes it more dangerous by nature, and its risks increase the closer it is used to the battlefield.

The disagreement, if not resolved, could jeopardize Anthropic’s contract with the government. But it could get worse. Over the weekend, the Pentagon said it was considering classifying Anthropic as a “supply chain risk,” which would mean the government views Anthropic as roughly as trustworthy as Huawei. Government contractors of all kinds would be pushed to stop using Anthropic.

Anthropic’s limits on certain defense-related uses are laid out in its Constitution, a document that describes the values and behaviors it intends its models to follow. Claude, it says, should be a “genuinely good, wise, and virtuous agent.” The document adds: “We want Claude to do what a deeply and skillfully ethical person would do in Claude’s position.” To critics in the Trump administration, that language translates to a mandate for wokeness.

The whole dust-up harkens back to 2018, when Google dropped its Project Maven contract with the government after employees revolted against Google technology being used for targeting humans in battle. Google still works with the government, and has softened its ethical guidelines over the years.

The truth is, tech companies don’t stand on principle like they used to. Many have settled into a kind of patronage relationship with the current regime, a relatively inexpensive way to avoid MAGA backlash while keeping shareholders satisfied. Anthropic, in its way, seems to be taking a different course, and it may suffer financially for it. But, in the longer term, the company could earn some respect, trust, and goodwill from many consumers and regulators. For a company whose product is as powerful and potentially dangerous as consumer AI, that could count for a lot. 

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
