Last week, Anthropic gathered twelve of the world's largest technology companies to share an uncomfortable finding. Its most powerful AI model had spent several weeks autonomously identifying security flaws in widely used software, including vulnerabilities that had gone undetected for nearly three decades.
That disclosure came alongside the general release of Claude Opus 4.7. Anthropic is using the newer model to test the security controls it needs before it can responsibly release the more capable one. For enterprise buyers, both developments matter.
Research from Gravitee, published in February 2026, found that 81% of enterprise teams have moved past the planning phase for AI agents. Yet only 14.4% have full security or IT approval for the agents they run. That governance gap looks considerably more serious in light of what Anthropic disclosed this week.
What Opus 4.7 changes for enterprise teams
The core problem with running AI agents at scale has always been reliability. Models that drop context between sessions, stall on complex tasks, or need supervising at every step eat up more time than they save.
Opus 4.7 addresses several of those issues. It checks its own outputs before reporting back, retains context across sessions, and follows instructions more precisely than its predecessor. For teams running multi-day workflows, that context retention matters most. Re-establishing background at the start of each session is a real operational cost that most productivity assessments overlook.
Enterprise testers reported measurable gains. Notion saw a 14% improvement on complex multi-step workflows with a third fewer tool errors. They also said it was the first model to pass their implicit-need tests, where the model works out requirements without explicit instruction. Ramp found it needed far less step-by-step guidance across tasks spanning multiple tools and codebases.
Image resolution has increased to more than three times that of previous Claude models. That makes document processing and dense interface work more practical. Those running Claude inside Microsoft 365 will see the improvement across Teams, Outlook, and OneDrive workflows. Pricing remains at $5 per million input tokens and $25 per million output tokens.
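For budgeting purposes, those per-token rates translate directly into per-run costs. A minimal sketch, using only the published rates above; the token counts in the example are hypothetical, not figures from the article:

```python
# Cost estimate from the published Opus 4.7 rates:
# $5 per million input tokens, $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one model call or an aggregated agent run."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical multi-step agent workflow: 400k tokens in, 60k tokens out.
print(f"${run_cost(400_000, 60_000):.2f}")
```

Because output tokens cost five times as much as input tokens, long agent transcripts fed back as context are comparatively cheap; it is verbose generation that dominates the bill.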
The security finding every IT leader needs to read
Using Claude Mythos Preview, Anthropic autonomously found hundreds of critical zero-day vulnerabilities. These spanned every major operating system and web browser. One was a 27-year-old flaw in OpenBSD that let attackers remotely crash machines. Another was a bug in FFmpeg that automated testing tools had run five million times without flagging. Maintainers have now fixed all of them.
As UC Today covered separately this week, the significance is not the individual bugs. It is that a capable AI model can now find serious vulnerabilities at scale, autonomously, and faster than any existing testing process. The average cost of a data breach stands at $4.4 million. Unified communications environments, built on browsers, shared media libraries, APIs, and virtualised infrastructure, sit squarely in scope.
Project Glasswing, Anthropic's response, brings together AWS, Cisco, CrowdStrike, Google, Microsoft, Palo Alto Networks, and others. The group committed $100M in model credits to scanning and hardening critical software infrastructure. They also directed an additional $4M to open-source security organisations. Microsoft, which has been building its own AI security agent infrastructure in parallel, joined as a founding member.
Opus 4.7 is the first Claude model to ship with automated safeguards that block high-risk cybersecurity uses. Anthropic describes it as a test bed for the controls needed before Mythos-class models can reach a wider audience. Security professionals with legitimate requirements can apply through the new Cyber Verification Programme.
Deloitte's 2026 enterprise AI report found that only one in five companies has a mature governance model for autonomous AI agents. For IT and security leads, that figure and this week's news belong in the same conversation.