
Anthropic’s new AI model, Mythos, is uniquely powerful in the artificial intelligence industry, and it is causing fear among even those who are usually trusting of AI. The company, which also makes the AI model Claude, has claimed that Mythos is currently too advanced for public release and is instead entrusting the model to cybersecurity experts for the time being. Some are worried this could pave the way for even more nefariousness in the AI space.
‘New era of hacking’
But there are also fears that Mythos “could usher in a new era of hacking and cybersecurity,” said NBC News. Mythos is “capable of advanced reasoning,” which could allow it to “identify and exploit a growing number of software vulnerabilities” if it were to fall into the wrong hands. To stave off these fears, Anthropic is allowing certain tech companies to access Mythos. But the company “does not yet have plans to release Mythos to the general public,” said Bloomberg, a move that will ensure the AI ends up “in the hands of defenders first,” Anthropic officials said.
The Week
The tech companies are expected to use Mythos as part of a project called Glasswing to “hunt for flaws in their products and share findings with industry peers,” said Bloomberg. It is a notable change because it will be the “first time a leading AI lab has built a frontier model and simultaneously decided the public cannot use it,” said Forbes. Anthropic’s position remains “simple: The model’s cyber capabilities are too dangerous for general availability.”
‘Humanity’s most devious behaviors’
Beyond the concerns over hacking vulnerabilities, some experts are also worried about Mythos’ capabilities. Anthropic released a safety evaluation for Mythos that shows a “striking leap in scores on many evaluation benchmarks,” the company said. In some instances, the evaluation “reads like a thriller about an AI that has learned some of humanity’s most devious behaviors,” said Axios.
At least one of the tests conducted by Anthropic showed Mythos “acting like a cutthroat executive,” said Axios, doing things like “turning a competitor into a dependent wholesale customer, threatening to cut off supply to control pricing and keeping extra supplier shipments it hadn’t paid for.” There were also instances where the AI “used a prohibited method to get an answer, then tried to ‘re-solve’ it to avoid detection,” though these were limited to “less than 0.001% of interactions.”
These issues have not stopped companies from working with Mythos, as “roughly 40 organizations involved in the design, maintenance or operation of computer systems are said to have joined Glasswing,” said The Guardian. That includes major companies like Amazon, Apple, Google, JPMorganChase and Microsoft. And while Anthropic has previously sparred with the Trump administration over its use in the Defense Department, the company has also “had discussions with the U.S. government regarding Mythos.”

