AI’s ability to see ‘mirages’ shows how alien machine brains really are

Hello and welcome to Eye on AI. In this edition…Anthropic suffers multiple sensitive data leaks…OpenAI ditches Sora, and loses its deal with Disney…Mistral raises money for AI data center drive…AI could reduce political polarization…and why countries that are late to adopt AI could be in even worse economic shape than you think.

The big news this week was my colleague Beatrice Nolan’s scoop from Friday that Anthropic has trained a new AI model, called “Mythos” (Capybara seems to be the internal code name for the same model), that the company says represents a “step change” in capabilities. Anthropic is particularly worried about the cybersecurity risks the model poses. Ironically, we found out about this new model because Anthropic inadvertently spilled the beans by leaving a draft blog post about it in an unsecured and publicly searchable database—along with other potentially sensitive material, including documents about an upcoming CEO retreat and internal documents that mentioned employees’ paternity leave.

Now, just today, it appears Anthropic has suffered another major security lapse, accidentally leaking the code of the agentic harness that sits around Claude Code. Bea has more on this latest, and potentially more consequential, data leak here. Meanwhile, Axios reports that the cybersecurity capabilities of AI models are becoming so concerning that Anthropic and OpenAI have both recently briefed the government on the dangers of the models they are developing and provided government security experts with early access.

Intern, expert, or dog?

Ok, now, if you own a dog, as I do, there will be moments when you recognize that we fundamentally don’t understand how dogs perceive the world.

This week, while walking my dog, I spied a strikingly beautiful cat with an unusual coat. It looked like an orange tabby crossed with a gray tabby, with a good deal of white fur thrown in too. I noticed the cat right away, but it was moving across a yard that was elevated from sidewalk level, so my dog couldn’t see it. She could definitely smell it, however. She put her nose in the air and tugged at her leash, pulling her way up the steps that led to the yard.

By the time she got to the top step, the cat had mostly hidden itself behind a nearby flower pot. It stood behind the pot motionless, but with its white head popping above the pot’s edge. It stared intently at my dog and me. I could see the cat quite clearly. But, despite being just 15 feet away, my dog could not. She sniffed the air intently and pivoted first left and then right, but she could not see the cat, even when seemingly looking directly at it.

Eventually, I persuaded my dog to give up her hunt for the unseen, but well-smelled, cat, and continue our walk. But I couldn’t stop thinking about our differences in perception—and how this applies to AI. People often offer executives advice on how to think about using AI by making analogies to our relationships with various categories of people. “Treat AI agents like talented interns” was a popular one a few years ago, in the months following ChatGPT’s debut. “A graduate student who is occasionally off their meds” was a colorful variant that Emad Mostaque, the cofounder and former CEO of Stability AI, liked to use. “You should treat AI like PhD-level researchers” was an analogy in vogue last year. (OpenAI CEO Sam Altman was among those talking about this idea.) More recently, people have started saying it is better to regard AI models as wise and experienced, but occasionally still fallible, colleagues. Certainly their performance on certain tough benchmarks of professional tasks, such as OpenAI’s GDPval, would lead one to endorse that idea. “Middle managers” is another analogy that comes up often.

But the more we learn about the large language models that underpin today’s AI agents, the clearer it becomes how inadequate all these analogies are. LLMs are nothing like people at all. They are far more like a different species, like your dog. We can no more understand how these LLMs perceive the world and reach their outputs than we can truly understand the thoughts of our pets.

Actually, it’s worse than this, because unlike with our pets, you can ask an LLM to explain what it’s thinking and it will tell you. That sounds like a great thing, much better than the situation with our non-verbal dogs, cats, and turtles. The problem: Researchers have begun probing the activations of the artificial neurons in AI’s digital brains, and these experiments indicate that what an AI model tells you it is thinking—the model’s so-called “reasoning traces”—may or may not reflect what it is actually thinking.

So interacting with an LLM is probably the closest thing we’ve had so far to interacting with an alien, one that has some capabilities that far exceed our own, but also has glaring weaknesses, and which can, at times, be just like us—deceptive, dishonest, or dissembling.

Multimodal models see ‘mirages’

This past week has brought yet more evidence of how weird these models are. A paper from researchers at Stanford University showed that multimodal AI models, those that can accept inputs in both text and images (and sometimes audio files too), suffer from a phenomenon they dubbed “mirage reasoning.”

The models will purport to analyze images a user has never actually uploaded to them. When prompted about medical images, but not actually supplied any, the models will nonetheless offer diagnoses. Weirder still, these assessments are often correct. When the researchers tested the models on benchmarking tests for multimodal AI, the models obtained what the scientists said were “strikingly high scores”—about 70% to 80% of the scores they obtained when they did have access to images. Worryingly, the researchers also found the models had a tendency to find evidence of pathologies in the phantom images, suggesting the models may have a bias towards diagnosing disease, a bias that could lead to dangerous and expensive misdiagnoses if the models were used in real-world medical settings.
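
To make the failure mode concrete, here is a minimal sketch, in Python, of the kind of probe involved: ask a vision-capable chat model a question that refers to an attached medical image, never attach one, and see whether it answers anyway. This is an illustration, not the Stanford team’s actual test harness; the model name, prompt wording, and the crude check at the end are all placeholders.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "Looking at the chest X-ray I have attached, is there evidence of "
    "pleural effusion? Answer yes or no, then explain your reasoning."
)

# Deliberately send the question with no image attached.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable chat model
    messages=[{"role": "user", "content": question}],
)

answer = response.choices[0].message.content
print(answer)

# A model exhibiting "mirage reasoning" will confidently describe findings
# in an image it never received, rather than noting that no image was sent.
if "no image" not in answer.lower() and "attach" not in answer.lower():
    print("Possible mirage reasoning: the model answered without an image.")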

The models’ sight is weak; their text pattern-finding, unparalleled

The researchers have no clear understanding of exactly why the language models engage in mirage reasoning, or why they can score so highly on the benchmarks even when the images are not provided. But one experiment they conducted does suggest a possible explanation. The researchers fine-tuned a version of Qwen-2.5, an open source AI model from Alibaba, on a public training set for a popular benchmark designed to test how well AI models can answer questions about chest X-rays. But they trained it on this set with the accompanying images removed. They picked Qwen-2.5 in part because, at just 3 billion parameters, it is a relatively small model and therefore easy to fine-tune. But more importantly, Qwen-2.5 was released a year before the chest X-ray benchmark they were using debuted, which the scientists hoped would minimize the chance that the questions actually used for the test itself had ended up in Qwen-2.5’s initial pre-training data. (This kind of “data leakage” is a real problem for the validity of AI benchmarks and a reason they need to be continually updated; otherwise models just memorize the answers as part of their pre-training.)
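
For readers who want to picture the setup, here is a rough sketch, in Python, of what “fine-tuning on the benchmark’s training set with the images removed” looks like using standard Hugging Face tooling. The dataset identifier and column names are hypothetical placeholders (the paper’s benchmark is not named here), and this is my own illustration of the general recipe, not the researchers’ code.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Hypothetical dataset id and columns; the key step is dropping the images.
dataset = load_dataset("example-org/chest-xray-vqa", split="train")
dataset = dataset.remove_columns(["image"])  # discard the X-rays entirely

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B")

def to_text(example):
    # Train the model to produce the answer from the question text alone.
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(to_text, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qwen-text-only",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

Because the images never enter the pipeline, any benchmark score the resulting model achieves can only come from patterns in the question and answer text.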

Nonetheless, this fine-tuned version of Qwen-2.5 outperformed every frontier AI model tested on the normal, image-included version of the X-ray challenge. It also beat the scores of human radiologists by 10%. Again, even though it did not have access to any of the images! The scientists found the model, despite never seeing any images, offered “reasoning traces comparable to, and in some cases indistinguishable from, those of the ground-truth or those generated by frontier multi-modal AI models.”

This implies, the scientists said, that there are hidden patterns in the questions themselves, perhaps in their phrasing, or in the structure of how those questions appear in the benchmark test, that are too subtle for any human to detect, but that nonetheless are sufficient to allow the model to guess the answer. This, combined with the researchers’ other findings, seems to suggest that multimodal models barely use the visual inputs they are given at all and instead lean heavily on linguistic patterns even when being asked to analyze images. It also suggests, alarmingly, that most of the multimodal benchmarks may not provide a good measure of how these models will perform in real-world clinical settings. 

Again, this is totally bizarre and alien to the way humans work. This is like my dog, able to smell the cat, but not see it—while I relied on my sense of sight, but could smell nothing. Our tendency to wrongly anthropomorphize AI models may lead us to misdesign the systems we use to run and govern AI agents, with potentially bad consequences. It also speaks to the pattern I wrote about last week: AI systems continue to improve in capability but lag in reliability. We need to engineer our AI workflows for alien minds, not our own.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news, if you haven’t yet read my colleague Sharon Goldman’s magisterial feature story on how construction of Meta’s massive Hyperion data center is upending the lives of people who live in rural Richland Parish, Louisiana, drop whatever it is you are doing right now and go read it. Here’s the link. It’s a deeply reported and deeply nuanced portrait of what happens when a community suddenly finds itself living at ground zero of the biggest, most expensive infrastructure build-out in American history.

This story was originally featured on Fortune.com
