American cybersecurity firm CrowdStrike's 2026 Global Threat Report reveals an 89 per cent jump in AI‑enabled attacks over the past year, with criminals increasingly treating AI systems themselves as prime targets.
The firm says threat actors are no longer just using AI to enhance traditional hacking techniques — they are actively compromising the tools organisations rely on. Investigators found that cyber criminals had injected malicious prompts into legitimate generative‑AI platforms at more than 90 companies, manipulating the systems into producing commands designed to steal credentials and cryptocurrency.
The report also details cases where attackers exploited weaknesses in AI development environments to gain persistence inside corporate networks, later deploying ransomware. In other incidents, criminals set up fake AI servers disguised as trusted services, enabling them to intercept sensitive data flowing through enterprise systems.
CrowdStrike CEO George Kurtz said: “As AI is embedded into development pipelines, SaaS platforms, and operational workflows, AI systems themselves become part of the attack surface.
“Adversaries exploited legitimate AI tools by injecting malicious prompts that generated unauthorised commands. As innovation accelerates, exploitation follows.”
CrowdStrike’s findings highlight a growing concern across the cybersecurity industry: as companies rush to integrate AI into everyday operations, attackers are just as quick to weaponise the same technologies. The report suggests that without stronger safeguards, AI‑driven attacks will continue to escalate in both scale and sophistication.