
AI hyperscalers need to restore trust—here’s how

It’s hard to avoid the conclusion that the market for artificial intelligence and its associated industries is overinflated. In 2025, just five hyperscalers—Alphabet, Meta, Microsoft, Amazon, and Oracle—accounted for capital investment of $399 billion, a figure projected to rise to over $600 billion annually in coming years. For the first nine months of last year, the real GDP growth rate in the U.S. was 2.1%, but it would have been 1.5% without the contribution of AI investment.

This dependence is dangerous. A recent note by Deutsche Bank questioned whether this boom might in fact be a bubble, noting the historically unprecedented concentration of the industry, which now accounts for around 35% of total U.S. market capitalization, with the top 10 U.S. companies making up more than 20% of global equity market value. For such an investment to yield no benefit would be a failure of unprecedented proportions.

In their book Power and Progress, Nobel Prize-winning economists Daron Acemoglu and Simon Johnson narrate the calamitous failure of the French Panama Canal project in the late 19th century. Thousands of investors, large and small, lost their fortunes, and 20,000 people who worked on the project died for no benefit. The problem, Acemoglu and Johnson write, was that the vision for progress did not include everyone—and the failure to incorporate feedback from others resulted in poor-quality decision making. As they observe, “what you do with technology depends on the direction of progress you are trying to chart and what you regard as an acceptable cost.”

Fast forward 150 years and a significant chunk of the U.S. economy is similarly dependent on a small coterie of grand visionaries, ambitious investors, and techno-optimists. Their capacity to ignore their critics and sideline those forced to bear the costs of their mission risks catastrophic consequences. Trustworthy AI systems cannot be conjured by marketing magic. We must ensure those building, deploying, and working with these systems can have a say in how we direct the progress of this technology.

Mistrust and a general lack of optimism

The data suggests that there is an urgent need to chart a new course. Even a generous analysis of the market for generative AI products would likely struggle to show how a decent return on the gargantuan capital investment is realistic. A recent report from MIT found that, notwithstanding $30 billion to $40 billion in enterprise investment in GenAI, 95% of organizations are getting zero return. It is difficult to imagine another industry raising so much capital while producing so little to show for it. But this appears to be Sam Altman’s true superpower, as Brian Merchant has documented extensively.

This is coupled with significant levels of mistrust and a general lack of optimism from everyday people about the potential of this technology. In the most comprehensive global survey on the subject, covering 48,000 people across 47 countries, KPMG found that 54% of respondents are wary about trusting AI. They also want more regulation: 70% of respondents said regulation is necessary, but only 43% believe current laws are adequate. The report concludes that the most promising pathway toward improving trust in AI is strengthening safeguards, regulation, and laws to promote safe AI use.

This, most obviously, sits in stark contrast with the position of the Trump administration, which has repeatedly framed regulation of the industry as an impediment to innovation. But the trust deficit cannot simply be hyped out of existence. It represents a significant structural barrier to the uptake and valuable deployment of emerging technologies.

One of the key conclusions of the MIT report is that the small subset of companies that actually saw productivity gains from generative AI products did so because “they build adaptive, embedded systems that learn from feedback.” Highly centralized procurement decisions were more likely to result in employees being required to use off-the-shelf products unsuited to the enterprise environment, generating outputs that employees mistrusted, especially for higher-stakes tasks, and leading to workarounds or dwindling rates of usage. The problem is that these tools fail to learn and adapt. In turn, there are too few opportunities for executives to receive that feedback or incorporate it meaningfully into model development and adaptation.

The narrative spun by politicians and media commentators that the AI industry is full of visionary leaders inadvertently points to a key reason why these products are failing. Trust in AI systems can only be earned if feedback is both sought and acted on—which is a significant challenge for the hyperscalers, because their foundational models are less capable of adapting and responding to unique and varied contexts. Unless we decentralize the development and governance of this technology, the benefits may remain elusive.

The workers’ view

There are useful ideas lying around that could help navigate a different path of technological progress. The Human Technology Institute at the University of Technology Sydney published research on how workers are treated as invisible bystanders in the rollout of AI systems. The researchers conducted deep, qualitative consultations with nurses, retail workers, and public servants to solicit feedback about automated systems and their impact on their work.

Rather than exhibiting backward or unhelpful attitudes to AI, workers offered nuanced and constructive views on its impact on their workplaces. Retail workers, for example, talked about the difficulties of automated systems that disempowered workers and curtailed their discretion: “unlike a production line, retail is an unpredictable environment. You have these things called customers that get in the way of a nice steady flow.”

A nurse noted how “the increasing roll-out of automated systems and alerts causes severe alarm fatigue among nurses. When an (AI system) alarm goes off, we tend to ignore or not take it seriously. Or immediately override to stop the alarm.”

One might think that increased investment in such systems would address the problem of alarm fatigue. But without worker input, it’s easy to miss this as a problem entirely. The upshot is that, as one public servant put it, in workplaces where channels for worker feedback are absent, a necessary quality of employees was “the gift of the workaround.”

Traditionally, this kind of consultation and engagement would happen through worker organizations. But with the rate of unionization slipping below 10% in the U.S., this becomes a problem not just for workers but also for employers, who are left with few methods of meaningfully engaging with their workforce at scale.

Some unions are nonetheless leading on this issue, and in the absence of political leadership, might be the best hope of making change. The AFL-CIO has developed a promising model law aimed at protecting workers from harmful AI systems. The proposal focuses on limiting the use of worker data to train models, as well as introducing friction into the automation of significant decisions, such as hiring and firing. It also emphasizes giving workers the right to refuse to follow directives from AI systems—essentially, building in feedback loops for when automation goes wrong. The right to refuse is an essential failsafe that can also cultivate a culture of critical engagement with technology, and serve as a foundation for trust.

Businesses are welcome to ignore workers’ views, but workers may end up making themselves heard in other ways. Recent surveys indicate that 31% of employees admit to actively sabotaging their company’s AI strategy, and for younger workers, the rates are even higher. Even companies that fail to seek feedback from workers may still end up receiving it all the same.

Our current course of technological progress relies on narrow understandings of expertise and places too much faith in small numbers of very large companies. We need to start listening to the people who are working with this technology on a daily basis to solve real world problems. This decentralization of power is a necessary step if we want technology that is both trustworthy and effective.
