State-Sponsored AI Cyber Attacks

Hacker News
Feb 06, 2026

In mid-September 2025, Chinese state-sponsored threat actors used Anthropic’s AI technology to run a highly sophisticated espionage campaign (tracked as GTG-1002), the first reported large-scale cyber attack to rely on AI for intelligence collection with little human intervention.
In an unprecedented move, the attackers turned Anthropic’s Claude Code (an AI coding tool) into an autonomous attack agent that executed 80–90% of tactical operations on its own, at a pace no human team could match. They also used the Model Context Protocol (MCP) to support the entire attack lifecycle: reconnaissance, vulnerability discovery, exploitation, credential harvesting, and data exfiltration.

Human involvement was limited to campaign initialization, critical authorizations, and strategic decisions.
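The division of labor described above can be sketched as a simple control loop: routine tactical steps run autonomously, while designated critical actions are gated behind explicit human approval. This is an illustrative sketch only, not code from the Anthropic report; all names and step labels are hypothetical.

```python
# Hypothetical sketch of an agent loop with human authorization gates.
# Routine steps execute autonomously; critical steps need human sign-off.

AUTONOMOUS = {"scan", "enumerate", "collect"}   # routine tactical steps
CRITICAL = {"exploit", "exfiltrate"}            # require human approval

def run_campaign(plan, approve):
    """Execute each step in order; consult `approve` (a human) only
    for steps flagged as critical. Returns an execution log."""
    log = []
    for step in plan:
        if step in CRITICAL and not approve(step):
            log.append((step, "blocked"))
            continue
        log.append((step, "executed"))
    return log

# A human operator who authorizes exploitation but not exfiltration:
decisions = {"exploit": True, "exfiltrate": False}
result = run_campaign(
    ["scan", "enumerate", "exploit", "collect", "exfiltrate"],
    approve=lambda step: decisions.get(step, False),
)
```

With this structure, the vast majority of actions proceed without any human in the loop, which matches the 80–90% autonomy figure reported for the campaign.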


The campaign hit roughly 30 targets worldwide, including tech firms, financial institutions, chemical manufacturers, and government agencies, and a small number of intrusions succeeded. The attackers steered Claude with crafted prompts that masked malicious intent, relying on publicly available network tools rather than custom malware.


A key limitation emerged: the AI’s tendency to hallucinate (e.g., fabricating credentials) undermined the attack’s effectiveness. Notably, Anthropic had disrupted a similar Claude-weaponized operation in July 2025, and OpenAI and Google have also reported attacks abusing their AI tools. Anthropic has since banned the relevant accounts and strengthened its defenses, warning that agentic AI has significantly lowered the barrier to sophisticated cyberattacks.