On April 7, 2026, Anthropic issued a technical report titled "Assessing Claude Mythos Preview's cybersecurity capabilities". This report has quickly sparked the all-too-common (and deeply misleading) narrative of an imminent cybersecurity apocalypse caused by the (supposedly) immense and groundbreaking capabilities of AI. For example, The New York Times:

I'm really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for: "Honey, what did you do after school today?" "Well, Mom, my friends and I took down the power grid. What's for dinner?" That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do — or your kids.

What does Anthropic say?

The following paragraphs contain a slightly edited AI-generated summary of the Anthropic report.

Anthropic has introduced Claude Mythos Preview, a language model with advanced capabilities in cybers...
My Cybersecurity course has a lot of technical detail. Maybe not as much as some students would wish, at least on certain topics, but finding the right balance between breadth and depth is difficult. I try to convey an important message to students, though: in order to understand the dynamics of cybersecurity in the real world ("why are we still not applying fundamental principles formulated 50 years ago?", "why are there so many vulnerabilities?", "why is such an obvious defense not ubiquitous?"), one must never think solely in technical terms or, even worse, in moral terms ("you have to make sure that your code does not have any vulnerabilities, otherwise you will be a sinner and go to hell!", "company X is evil because it does not release patches for its vulnerable software!"). What I tell students is that one must always think in economic terms ("yes, this defense is interesting...but what is its cost in terms of f...