
Exclusive: Anthropic acknowledges testing new AI model representing ‘step change’ in capabilities, after accidental data leak reveals its existence

AI company Anthropic is developing, and has begun testing with early-access customers, a new AI model more capable than any it has previously released, the company said, following a data leak that revealed the model's existence.

An Anthropic spokesperson said the new model represented “a step change” in AI performance and was “the most capable we’ve built to date.” The company said the model is currently being trialed by “early access customers.”

Descriptions of the model were inadvertently stored in a publicly accessible data cache and were reviewed by Fortune.

A draft blog post that was available in an unsecured and publicly searchable data store until Thursday evening said the new model is called “Claude Mythos” and that the company believes it poses unprecedented cybersecurity risks.

The same cache of unsecured, publicly discoverable documents revealed details of a planned, invite-only CEO summit in Europe that is part of the company’s drive to sell its AI models to large corporate customers. 

The AI lab left the material, including what appeared to be a draft blog post announcing a new model, in an unsecured, public data lake, according to Roy Paz, a senior AI security researcher at LayerX Security, a computer and network security company, and Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge, who separately located and reviewed the documents.

In total, close to 3,000 assets linked to Anthropic’s blog, none of which had previously been published on the company’s news or research sites, appeared to be publicly accessible in the data cache, according to Pauwels, whom Fortune asked to assess and review the material.

After being informed of the data leak by Fortune on Thursday, Anthropic removed the public’s ability to search the data store and retrieve documents from it.

In a statement provided to Fortune, Anthropic acknowledged that a “human error” in the configuration of its content management system led to the draft blog post being accessible. It described the unpublished material left in the unsecured and publicly searchable data store as “early drafts of content considered for publication.”

In addition to referring to Mythos, the draft blog post discussed a new tier of AI models that it says will be called “Capybara.” In the document, Anthropic says: “’Capybara’ is a new name for a new tier of model: larger and more intelligent than our Opus models—which were, until now, our most powerful.” Capybara and Mythos appear to refer to the same underlying model.

Currently, Anthropic markets each of its models in three sizes: the largest and most capable versions are branded Opus; slightly faster and cheaper, but less capable, versions are branded Sonnet; and the smallest, cheapest, and fastest are called Haiku. However, in the blog post, Anthropic describes Capybara as a new tier of model that is even larger and more capable than Opus, but also more expensive.

“Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others,” the company said in the blog.

The document also said the company had completed training “Claude Mythos,” which the draft blog post described as “by far the most powerful AI model we’ve ever developed.”

In response to questions about the draft blog post, the company acknowledged training and testing a new model. “We’re developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity,” an Anthropic spokesperson said. “Given the strength of its capabilities, we’re being deliberate about how we release it. As is standard practice across the industry, we’re working with a small group of early access customers to test the model. We consider this model a step change and the most capable we’ve built to date.”

The document Fortune and the cybersecurity experts reviewed consists of structured data for a webpage, complete with headings and a publication date, suggesting it forms part of a planned product launch. It outlines a cautious rollout strategy for the model, beginning with a small group of early-access users. The draft blog notes that the model is expensive to run and not yet ready for general release.

Significant new cybersecurity risks

The new AI model poses significant cybersecurity risks, according to the leaked document. 

“In preparing to release Claude Capybara, we want to act with extra caution and understand the risks it poses—even beyond what we learn in our own testing. In particular, we want to understand the model’s potential near-term risks in the realm of cybersecurity—and share the results to help cyber defenders prepare,” the document said.

Anthropic appears to be especially worried about the model’s cybersecurity implications, noting that the system is “currently far ahead of any other AI model in cyber capabilities” and “it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.” In other words, Anthropic is concerned that hackers could use the model to run large-scale cyberattacks.

The company said in the draft blog that because of this risk, its plan for the model’s release would focus on cyber defenders: “We’re releasing it in early access to organizations, giving them a head start in improving the robustness of their codebases against the impending wave of AI-driven exploits.”

The latest generation of frontier models from both Anthropic and OpenAI has crossed a threshold that the companies say poses new cybersecurity risks. In February, when OpenAI released GPT-5.3-Codex, the company said it was the first model it had classified as “high capability” for cybersecurity-related tasks under its Preparedness Framework—and the first it had directly trained to identify software vulnerabilities.

Anthropic, meanwhile, navigated similar risks with its Opus 4.6, released the same week. The model demonstrated an ability to surface previously unknown vulnerabilities in production codebases, a capability the company acknowledged was dual-use, meaning it could help hackers as well as help cybersecurity defenders find and close vulnerabilities in code.

The company has also reported that hacking groups, including those linked to the Chinese government, have attempted to exploit Claude in real-world cyberattacks. In one documented case, Anthropic discovered that a Chinese state-sponsored group had already been running a coordinated campaign using Claude Code to infiltrate roughly 30 organizations—including tech companies, financial institutions, and government agencies—before the company detected it. Over the following ten days, Anthropic investigated the full scope of the operation, banned the accounts involved, and notified affected organizations.

An exclusive executive retreat

The leak of not-yet-public information appears to stem from an error on the part of users of the company’s content management system (CMS), which is the software used to publish the company’s public blog, according to cybersecurity professionals. 

Digital assets created using the content management system are set to public by default and typically assigned a publicly accessible URL when uploaded—unless the user explicitly changes a setting so that the assets are kept private. As a result, a large cache of images, PDF files, and audio files seems to have been published erroneously to an unsecured and publicly accessible URL via the off-the-shelf content management system.
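As a purely illustrative sketch of that failure mode (the CMS involved is not named in the documents, and every class, method, and URL below is hypothetical), the hazard resembles an upload API whose visibility flag defaults to public:

```python
# Hypothetical sketch of the "public by default" pitfall described above.
# No real CMS API is implied; UploadClient, Asset, and all names here
# are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    public: bool
    url: str


class UploadClient:
    BASE = "https://cdn.example.com/assets"  # placeholder host

    def upload(self, name: str, public: bool = True) -> Asset:
        # The hazard: a caller that never considers visibility gets a
        # world-readable URL, because the default is public.
        url = f"{self.BASE}/{name}" if public else f"{self.BASE}/{name}?signed"
        return Asset(name=name, public=public, url=url)


client = UploadClient()

draft = client.upload("draft-announcement.pdf")
print(draft.public)  # True: public unless the caller explicitly opts out

private = client.upload("draft-announcement.pdf", public=False)
print(private.public)  # False: keeping a draft private takes a deliberate step
```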

Anthropic acknowledged in a statement to Fortune that “an issue with one of our external CMS tools led to draft content being accessible.” It attributed this issue to “human error.” 

Many of the documents appeared to be discarded or unused assets for past blog posts, such as images, banners, and logos. However, several appeared to be documents meant to be private or internal. For example, one asset had a title describing an employee’s “parental leave.”

The documents also included a PDF containing information about an upcoming, invite-only retreat in the U.K. for the CEOs of European companies, which Anthropic CEO Dario Amodei will attend. The other attendees are not named, but they are described as Europe’s most influential business leaders.

The two-day retreat is described as an “intimate gathering” to engage in “thoughtful conversation” at an 18th-century manor-turned-hotel-and-spa in the English countryside. The document says that attendees will hear from lawmakers and policymakers about how businesses are adopting AI and experience unreleased Claude capabilities.

An Anthropic spokesperson told Fortune the event “is part of an ongoing series of events we’ve hosted over the past year. We look forward to hosting European business leaders to discuss the future of AI.”

This story was originally featured on Fortune.com
