Anthropic's potent new AI model is a "wake-up call," security experts say


Anthropic's latest AI technology, called Mythos, is so powerful at revealing software vulnerabilities that the company is afraid to release the model publicly lest it fall into the hands of bad actors.

The company, the developer behind the Claude AI chatbot, said in a post on its website this week that the new tool has already uncovered thousands of weak points in "every major operating system and web browser." That is stirring concern that hackers could exploit Mythos to attack banks, hospitals, government systems and other critical infrastructure.

Preparing for the "storm"

Rather than releasing Mythos to the public, Anthropic is sharing the tech with a select group of major companies, including Amazon, Apple, Cisco, JPMorgan Chase and Nvidia, to test the model and to harden their own systems against cyberattacks. Called Project Glasswing, the effort aims to develop strong defenses before hackers gain access to Mythos or similar AI models.

"What we need to do is look at this as a wake-up call to say, the storm isn't coming — the storm is here," Alissa Valentina Knight, CEO of cybersecurity AI company Assail, told CBS News. "We need to prepare ourselves because we couldn't keep up with the bad guys when it was humans hacking into our networks. We certainly can't keep up now if they're using AI because it's so much devastatingly faster and more capable."

Mythos' capabilities are also sparking concern among federal officials. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with top bank CEOs in a closed-door meeting on Tuesday to discuss Mythos and other emerging cybersecurity risks stemming from AI. Anthropic also briefed senior U.S. government officials and key industry stakeholders on Mythos's capabilities, CBS News has learned.

Anthropic didn't return a request for comment. In its post, however, the company underscored the risks of misusing tools like Mythos. "The fallout — for economies, public safety, and national security — could be severe," the company said.

The weakest link 

Such stark warnings, however, disguise another troubling reality: Cybersecurity experts say that hackers already have access to advanced AI models.

Other AI tools, while not yet as effective as Mythos at exposing the soft underbelly of software, are already amplifying the risks to consumers, businesses and governments. For instance, hackers are already tapping AI to sharpen so-called phishing attacks aimed at prying loose confidential information, said Zach Lewis, the chief information officer at the University of Health Sciences and Pharmacy in St. Louis.

"It's been used to really write those dialogues, those conversations, those phishing emails, to specific people — and really customize them to make them a lot more difficult to detect and identify if these are fake or not," he told CBS News.

AI is also driving more ransomware attacks, with a recent PwC report finding that posts on ransomware leak sites — public disclosures of stolen data when a company does not pay a ransom — surged 58% in 2025 from the prior year.

"Once [Mythos] drops, we're going to see a lot more vulnerabilities, probably a lot more attacks," Lewis said. "Cyberattacks are definitely going to increase until we get to a point where we're patching up all those vulnerabilities almost in real time."

AI is more effective than humans at finding software bugs because it can quickly scan thousands of lines of code and detect problems, something people are not necessarily good at, Knight explained.

"Humans are the weakest link in security," Knight noted. "Humans have the ability to make mistakes when we're writing code. It's possible for vulnerabilities in source code to have never been found by humans."

On brand for Anthropic?

Some security experts questioned the motives behind Anthropic's incremental approach to rolling out Mythos, speculating that the limited release could be aimed at stirring interest from other prospective customers.

Meanwhile, both Anthropic and rival OpenAI are expected to launch initial public offerings by the end of the year, according to the Wall Street Journal — a possible incentive to drum up headlines, said Peter Garraghan, founder and chief science officer at Mindgard, an AI security platform.

"I suspect Anthropic may be using this as a marketing ploy, possibly toward an IPO," he said.

Anthropic has sought to distinguish its brand from OpenAI and other rivals by publicly emphasizing AI safety, highlighting its guardrails for keeping the technology in line. Anthropic's decision to hold off on releasing Mythos and launching Project Glasswing aligns with that image, noted Columbia Business School marketing lecturer Malek Ben Sliman.

"When facing tough decisions, Anthropic has actually been true to its values," he said. Curating the release of Mythos "does allow them to appear to be the protectors of this responsible AI, but it also is a great marketing and advertising tool."

Edited by Alain Sherter
