
Does Generative AI “Work”? That’s a Misleading Question.

For a decade and a half now, my work has fallen into two categories: collecting evidence on the threat posed by fossil fuels, and deploying written and spoken words to urge action against it. Recently, generative AI systems have entered both of these spheres at a pace I struggle to process.

In jobs that depend on analytical rigour, as well as on a desire to craft sincere, authentic, and honest human communication, the advent of a ubiquitously available plagiaristic machine that convincingly fabricates facts and feelings seems bad, just on its face. But I don’t think the ethical whiplash is the only reason this moment feels so rotten. There are, in fact, some troubling parallels between how fossil fuels operate and how generative AI operates.

In the ugly process of sense-making around what is a significant change in how we perform analysis and write words, there has been an exhausting debate over whether generative machine learning “works” or “doesn’t work.” You can find a nice example of this in a December 2024 newsletter by tech writer Casey Newton, in which he slots this fight into two camps: “The first camp, which I associate with the external critics, holds that AI is fake and sucks. The second camp, which I associate more with the internal critics, believes that AI is real and dangerous.”

Many reasonable responses to Newton’s piece highlighted the false dichotomy. Plenty of critiques of AI deployment highlight the fact that it tends not to “work” well at the functions it’s marketed for. And that could be perceived as a good thing: As researcher Eryk Salvaggio observed, “systems that don’t work would pose no threat to labor; systems nobody uses would pose no threat to the environment, and systems propped up by a failing industry will collapse—all we have to do is wait.”

But here’s the problem: Something can feel like it’s “working” when really the work is subtly worse, and paired with a shocking but invisible array of secondary harms. Fossil fuels have themselves been persistently marketed by lobby groups for decades as not only effective carriers of energy but also valuable humanitarian pathways for the alleviation of poverty. In fact, fossil fuels “work,” but they also murder their end users, both through air pollution that poisons people and by stimulating the rapid overheating of Earth’s life-support systems. They “work” right up until the moment they don’t, such as the deadly failure of fossil gas during Texas’s 2021 winter freeze, or the crippling global impacts of closing a single 100-mile-long shipping channel in the Middle East.

Fossil fuels “work” only under the narrowest, most severe definition of “work,” and the same is true for generative AI.

User interactions with chatbots as part of day-to-day professional work can indeed produce what some might consider satisfying answers to prompts, thanks to the fundamental nature of what these systems do. Based on their training data and pattern matching capabilities, generative AI tools can produce responses that are convincingly answer-shaped.

If you already have the answer, or deep expertise, you might spot how pattern-matched text output misses nuance, is incorrect, or is misleading in critical ways. But if you had those things, you wouldn’t be asking the chatbot in the first place. The failure mode of generative AI output is subtle and insidious, rather than immediately obvious.

For some time, I’ve been using different generative AI systems to duplicate (shadow, if you will) work I’m already doing in the course of my normal workday, often in response to a function I’ve seen someone demonstrate online. Sometimes a system delivers both accurate information and reasonable references, much like a successful Google search. But go beyond narrow, simple factoids, and things go haywire very quickly. When I prodded Anthropic’s Claude to provide some quotes from me, it initially refused on “copyright” grounds (principled, if a bit rich). Then it suggested readers visit my Substack: a newsletter platform I do not use, and one I have frequently criticized for courting and monetizing viciously racist neo-Nazi groups.

I recently heard from a friend in energy analysis that you can extract tabulated data from a chart using Gemini or Claude. I’ve been doing this with manually assisted tools for years, but drag-and-drop makes this a game changer.

To test this, I created a chart of year-to-year U.S. power sector emissions from data I already had, then asked AI tools to do the reverse: generate the data based on the chart. I compared the resulting table to the original. The data were close-ish, but with slight differences. I then created a chart from the new, reverse-engineered data, ran it through again, and repeated that process four times. After four runs, the emissions totals for some years had drifted by as much as 8 percent, with an average shift of about 2 percent across all the data points. This might not be a big deal if someone uses it once, perhaps for a social media post. But what happens when everybody is processing visual and tabulated data like this? This game of telephone will compound errors in a completely untraceable and unauditable way.
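To see why this compounds, here’s a minimal sketch of the round-trip drift. The per-pass error rate and the emissions series are illustrative assumptions, not values from my test; the point is only that small, independent extraction errors accumulate across generations:

```python
import random

# A minimal sketch of the "game of telephone" described above.
# Assumption (illustrative, not measured): each chart-to-table
# extraction nudges every value by a small random relative error.
PER_PASS_ERROR = 0.02  # ~2 percent of eyeballing error per pass

def extract(values, rng):
    """Simulate one chart -> table extraction pass."""
    return [v * (1 + rng.gauss(0, PER_PASS_ERROR)) for v in values]

rng = random.Random(42)
original = [2400, 2350, 2280, 2200, 2150]  # made-up emissions series
series = original
for generation in range(1, 5):  # four round trips, as in the test above
    series = extract(series, rng)
    worst = max(abs(s - o) / o for s, o in zip(series, original))
    print(f"generation {generation}: worst-case drift {worst:.1%}")
```

Because every pass adds fresh, independent noise, the drift only grows with each generation, and nothing in the final table records that any perturbation ever happened.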

When I challenged Claude over the results, it responded, unnervingly, with fake human characteristics, stating “I was estimating the values by eye,” and “each time I looked at the image fresh, my estimates drifted slightly.” It’s a stunning example of the serious problem posed by the intentionally programmed anthropomorphism baked into modern chatbots, a major part of the illusion of competency.

In another shadow of work I’d done, I asked Claude to determine the emissions intensity of a specific type of fossil gas turbine. It returned a wrong-but-in-the-ballpark number, with a mix of correct and irrelevant references, and then turned into a puddle of apologetic shame when queried about its calculations and sources. The output text said:

“I don’t actually know the specific calculation I used.”

“The figure was generated from pattern-matching in my training data — not from executing a traceable calculation. I presented it with more authority than was warranted, and that was misleading.”

Playing with Claude specifically, as compared to ChatGPT and Gemini, has given me useful insight into why this particular product has been so alluring to those in professional analytical spaces.

In both of the cases above, an analyst feeling lazy, overworked, or overconfident in Claude’s abilities would accept the outputs of the software without checking their provenance. Unlike OpenAI, which wields sycophancy to the extent of inducing deadly parasocial relationships, Claude seems to use the tone of a shamefaced, coy intern, unnervingly well calibrated for the egos of Gen X and millennial professionals.

This approach seems to be working: Anthropic regularly draws breathless headlines about the potential “consciousness” of its pattern-matching software, and this framing has somehow wormed its way into the normally AI-skeptical space of Bluesky. In conjunction with the viral QuitGPT campaign targeting OpenAI while urging increased sign-ups for Claude, it has become extremely common in my professional life to encounter someone who has replaced their own analytical craft and authorial voice with the outputs of chatbot prompts.

Anthropic disclaims its own failures prominently. It states on its website that “Claude can write things that might look correct but are very mistaken,” and that “users should not rely on Claude as a singular source of truth.” I’d bet the designers of these systems know full well that, in most cases, they really are being used as a “singular source of truth.” That is, in fact, their selling point.

This slow and compounding corrosion of my field comes with its own clear environmental cost. When queried by Axios on the climate damage of its products, Anthropic pointed to a post by a pro-AI blogger framing single queries as relatively minor compared to daily energy use. (Unlike other major technology companies, Anthropic does not disclose energy or emissions data of any kind, and has in fact urged the U.S. government to set a “target” for building fossil gas plants.)

The focus on single queries obscures the fact that tasks such as coding, or the operation of “agents” performing tasks autonomously, require significantly more energy than single queries; one estimate by a software engineer put a single day of their own Claude Code usage at just under the daily energy consumption of a household refrigerator. Perhaps more importantly, chatbots replace energy-efficient analysis with inefficient, inaccurate forms of digital bloat. My own estimates suggest that calculating basic numbers using a chatbot interface is several million times more energy intensive than using a calculator.
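As a rough illustration of that last claim, here’s a back-of-envelope sketch. Both input figures are assumptions chosen for the arithmetic, not values from my estimate: roughly 0.3 watt-hours is a commonly cited per-query figure for a large chatbot, and a pocket calculator draws on the order of 100 microwatts:

```python
# Back-of-envelope sketch of the chatbot-vs-calculator comparison above.
# Both inputs are illustrative assumptions, not figures from the article.
CHATBOT_QUERY_WH = 0.3      # commonly cited per-query estimate
CALCULATOR_WATTS = 100e-6   # pocket calculator draws ~100 microwatts
CALC_SECONDS = 1.0          # assume one second to key in a sum

chatbot_joules = CHATBOT_QUERY_WH * 3600  # watt-hours -> joules
calculator_joules = CALCULATOR_WATTS * CALC_SECONDS
print(f"ratio: {chatbot_joules / calculator_joules:,.0f}x")
# prints: ratio: 10,800,000x
```

Even if either assumption is off by an order of magnitude, the ratio stays comfortably in the millions.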

What does it mean for climate and energy advocacy to move its informational infrastructure away from effortful analysis and authentic communication and into reliance on corporate-controlled systems of unprecedented digital bloat? What happens to our understanding of climate, fossil fuels and energy when everything we analyse and say breeds a new generation of subtle but material incorrectness, and what happens ten or twenty generations down the line?

I’ve seen consumer boycotts of chatbots framed, reasonably, in moral and environmental terms. But what I’m seeing play out in professional spaces feels much deeper and more significant than that. This isn’t about whether AI “works.” This is about whether we want to be truthful and honest, and neither of those goals can be reached by relying on a system designed to fabricate the aesthetics of accuracy and sell it through programmed and weaponised faux humanity.
