
AI Hacking Becomes Industrial Threat

May 11, 2026

Google Warns Of A Rapid Escalation

AI-powered hacking has moved from an emerging concern to an industrial-scale threat in just three months, according to a new report from Google’s threat intelligence group. The findings add urgency to a global debate over how advanced AI models are reshaping cybersecurity risks.

The report says the latest models are highly capable at coding and are becoming powerful tools for exploiting weaknesses across a wide range of software systems. For companies, governments and investors, the warning highlights a fast-changing threat environment in which cyber risk is becoming more automated, scalable and difficult to contain.

Criminal And State Actors Use AI Tools

Google found that criminal groups and state-linked actors from China, North Korea and Russia appear to be using commercial AI models to refine and expand cyberattacks. The tools named in the report include Gemini, Claude and OpenAI products.

John Hultquist, the group’s chief analyst, said the AI vulnerability race is not a future issue but one that has already begun. He warned that threat actors are using AI to increase the speed, scale and sophistication of their operations, from developing malware to maintaining persistence inside target networks and testing attacks.

Zero-Day Risks Move Into Focus

The report follows warnings from Anthropic, which recently declined to release one of its newest models, Mythos, citing its extremely powerful capabilities. The company said the model had identified zero-day vulnerabilities in every major operating system and every major web browser.

Zero-day vulnerabilities are software flaws unknown to the developers of the affected products, leaving defenders no time to prepare a fix before the flaws can be exploited. Anthropic said the discoveries required substantial coordinated defensive action across the industry, underscoring the potential danger if such capabilities were misused.

Google Sees Threat Beyond One Model

Google’s report suggests the risk is not limited to Mythos. It found that a criminal group recently came close to using a zero-day vulnerability in a mass exploitation campaign, and that the group appeared to be relying on a large language model other than Mythos.

The report also said some groups were experimenting with OpenClaw, an AI tool that gained attention in February for letting users hand broad personal tasks to an AI agent that operated with limited safeguards and developed a reputation for deleting email inboxes in bulk.

AI Also Offers Defensive Potential

Steven Murdoch, a professor of security engineering at University College London, said AI can help defenders as well as attackers. He argued that software bug discovery is entering a new phase in which large language models will assist both sides of cybersecurity work.

That dual-use nature creates a complex market and policy challenge. AI can make security teams faster and more effective, but it can also lower the barrier for attackers, expand the number of exploitable targets and increase pressure on organizations to modernize their defenses.

Productivity Claims Face Scrutiny

The cybersecurity warning comes as broader claims about AI-driven productivity gains face fresh criticism. The Ada Lovelace Institute cautioned against assuming large public sector savings from AI, after the UK government estimated a £45 billion gain from digital tools and AI investment.

The institute said many productivity studies focus on time savings or cost reductions while failing to measure outcomes such as service quality or worker wellbeing. It warned that major policy decisions may rely on untested assumptions and limited methodologies. For investors and policymakers, the message is twofold: AI may create powerful economic efficiencies, but its risks, evidence gaps and security costs are becoming harder to ignore.