A follow-up to my previous post on Mythos Preview. The AI Security Institute (AISI) has published a very interesting analysis of Mythos Preview. Very interesting because: AISI is "a mission-driven research organisation in the heart of the UK government". Its reports are clearly far more credible than claims of the form "our latest product is too strong to give you, believe us" made by a private US company that is currently losing a lot of money, that is fiercely battling other companies in the AI arena, and that is extremely good at fuelling hype about its products and capabilities. They consider complete cybersecurity tasks, i.e. CTF (capture the flag) competitions and attacks on a simulated organization. They compare the behavior of different models for a given "token budget". Not surprisingly, Mythos Preview is indeed very good and better than previous models, but it is definitely not the coming Apocalypse. In particular, it is the first tool th...
(updated twice after first posting, see below) On April 7th, 2026, Anthropic issued a technical report titled Assessing Claude Mythos Preview’s cybersecurity capabilities. This report has quickly sparked the all-too-common (and deeply misleading) narrative of an imminent cybersecurity apocalypse due to the (supposedly) immense and groundbreaking capabilities of AI. For example, The New York Times: I’m really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for: "Honey, what did you do after school today?” “Well, Mom, my friends and I took down the power grid. What’s for dinner?” That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do — or your kids. What does Anthropic say? The following paragraphs contain a slightly edited AI-generated summary of the Anthropic report. Anthropic has introduced Claude Mythos Preview, a langu...