AI-Powered Cyberattacks: Trends in an Evolving Threat Landscape
CyberStrikeAI is an AI-based exploit tool allegedly first deployed by Chinese threat actors in January to compromise misconfigured Fortinet FortiGate firewall endpoints. Security researchers found that more than 600 FortiGate devices across 55 countries were successfully compromised in little over a month (January 11 to February 18).
What makes it notable: the tool functions as an intelligent orchestration engine, comprises more than 100 distinct modules, and can fully automate the generation of attack plans, command sequences, and exploitation methods. (GBHackers)
CyberStrikeAI illustrates a shift that the CrowdStrike Global Threat Report 2026 confirms: generative AI has already transformed the threat landscape significantly, not only in the speed and scale of attacks but also in their sophistication. CrowdStrike recorded an 89% increase in cyberattacks by threat actors using AI in 2025. (CrowdStrike Global Threat Report 2026)
Trends in the Use of AI for Cyberattacks
1. Social Engineering
The first widespread use of generative AI was identified in phishing campaigns. Language models generate authentic phishing messages free of grammatical errors, tailored to the writing style of the supposed sender, with accurate context and a convincing tone. HR departments in particular are repeatedly targeted. Attackers direct victims to alleged application documents stored in cloud services (e.g. Dropbox), which in reality contain malware. In addition, attackers use deepfake audio and video to impersonate executives (CEO fraud) or pose as job applicants in virtual interviews.
2. Automated Scanning for Persistence and Lateral Movement
Attackers use AI-powered reconnaissance tools like CyberStrikeAI to systematically analyze target infrastructures and prioritize exploitable vulnerabilities with a speed and precision that surpasses manual approaches. AI can also prove highly valuable to attackers in so-called second-stage scenarios, following an already successful initial breach. Google's Threat Intelligence Team identified a threat actor who abused locally installed LLMs during attacks on developers to systematically search target systems for credentials, tokens, and similar access data. Using this method, the attackers gained administrator access to AWS environments within 72 hours. (Google Blog | The Hacker News)
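The credential hunting described above is, at its core, pattern matching at machine speed. The sketch below shows the defensive mirror image: a minimal secret scanner that walks a directory tree and flags files containing credential-like strings, so exposed tokens can be found before an attacker does. The patterns are illustrative only; real scanners ship far larger and more precise rule sets.

```python
import re
from pathlib import Path

# Illustrative patterns only -- production secret scanners use
# hundreds of rules plus entropy checks to cut false positives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_token": re.compile(
        r"(?i)\b(?:api[_-]?key|token|secret)\s*[=:]\s*['\"]?[A-Za-z0-9_\-]{16,}"
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of the secret patterns found in a blob of text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

def scan_tree(root: str) -> dict[str, list[str]]:
    """Walk a directory and map each file path to the patterns it matches."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            findings = scan_text(path.read_text(errors="ignore"))
        except OSError:
            continue  # unreadable file (permissions, special file, ...)
        if findings:
            hits[str(path)] = findings
    return hits
```

Running such a scan regularly over repositories and home directories reduces exactly the attack surface that AI-assisted second-stage reconnaissance exploits.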
3. AI-Generated Malware
Generative AI has become a standard component of software development, and its capabilities have advanced rapidly in recent months. Attackers have taken notice. Generative AI significantly lowers the technical barrier to entry: malicious code, exploits, and polymorphic malware can now be created without in-depth programming knowledge. At the same time, AI noticeably accelerates the malware development cycle, shrinking the window between CVE disclosure and active exploitation. The group PUNK SPIDER allegedly used Gemini-generated scripts to extract credentials from Veeam Backup & Replication (VBR) databases, and likely used DeepSeek-generated scripts to terminate database services and eliminate forensic traces. (CrowdStrike Global Threat Report 2026)
What Can IT Security Managers Do?
We have identified five measures that IT security managers can use to better protect their organizations in this evolving threat landscape.
1. Integrate AI threats into the ISMS
An AI-powered attack changes the risk parameters of speed, scale, and detectability. Accordingly, generative AI scenarios including prompt injection, AI-powered social engineering, and AI-generated malware must be incorporated into threat modeling and factored into risk assessments.
2. Update awareness training
Traditional security awareness training teaches employees to recognize attacks by certain indicators: poor grammar, suspicious sender addresses, impersonal salutations. AI changes these patterns. "Bad language" is no longer a reliable warning sign. Deepfakes, AI-generated phishing, and voice fraud must also be integrated into training modules.
3. Review and expand detection logic
CrowdStrike reports that 82% of attacks identified in 2025 were carried out without malware. SIEM and EDR rules must be adapted to address this and the trends outlined above. Detecting known malicious code is no longer sufficient; what is needed is behavior-based anomaly detection. Systems must be configured and, where necessary, trained to recognize new generative AI attack patterns, malware-free intrusions, and anomalous identity usage.
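At its simplest, anomalous identity usage means an account suddenly behaving far outside its own baseline. The following sketch illustrates the idea with a basic z-score check on per-identity activity counts; the threshold and the feature (hourly login counts) are assumptions for illustration, and real SIEM/EDR detections use far richer feature sets.

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[int], observed: int, threshold: float = 3.0) -> bool:
    """Flag an observation that deviates more than `threshold` standard
    deviations from a per-identity baseline (e.g. hourly login counts)."""
    if len(baseline) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu  # flat baseline: any change stands out
    return abs(observed - mu) / sigma > threshold

# An identity that normally logs in a handful of times per hour...
history = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
print(is_anomalous(history, 40))  # a sudden spike is flagged: True
print(is_anomalous(history, 4))   # normal activity is not: False
```

The point is not this particular statistic but the shift it represents: from matching known-bad signatures to modeling what normal looks like per identity and alerting on deviation.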
4. Introduce an AI usage policy
Shadow AI is the new Shadow IT. In most organizations, employees are already using dozens of AI tools without the knowledge or oversight of IT security. This creates two risks: first, sensitive company data may flow into external AI systems; second, new attack vectors emerge (e.g. prompt injection). An AI policy is needed that defines which tools are permitted, which data classifications may be used as input, and who may use AI-generated outputs for which decisions. Simply blocking AI tools will only drive more Shadow AI.
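A policy of this kind can be made enforceable, for example as a gateway check before data reaches an external AI service. The sketch below is a minimal, hypothetical policy table mapping approved tools to the data classifications they may receive; the tool names and classes are placeholders, not recommendations.

```python
# Hypothetical policy table: which data classifications each approved
# AI tool may receive as input. Tool names and classes are illustrative.
ALLOWED_INPUT = {
    "approved-chatbot": {"public", "internal"},
    "code-assistant": {"public"},
}

def may_submit(tool: str, classification: str) -> bool:
    """Allow a submission only if the tool is approved AND the data
    classification is on that tool's allow-list. Unknown tools (Shadow AI)
    are denied by default."""
    return classification in ALLOWED_INPUT.get(tool, set())
```

Deny-by-default for unlisted tools is the crucial design choice: it turns the policy from a document into a control, while the allow-list keeps sanctioned AI use frictionless enough that employees are not pushed back toward Shadow AI.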
5. Establish threat intelligence reporting
The increasing speed at which attackers can operate using generative AI raises the importance of timely threat intelligence. Through the Cyber Security Competence Center (CSCC), we deliver a weekly report on the current threat landscape and provide a framework for IT security managers to exchange insights. In cases of particularly critical threats, we also inform the community through flash reports.
Conclusion
AI is not only changing the tools available to attackers; it is significantly transforming the speed, precision, and scale of cyberattacks. As a result, traditional protective measures are no longer sufficient and must be fundamentally revised. Recognizing these risks is the first step. The second is to derive the right measures and implement them consistently.
Would you like to discuss these topics or try the CSCC Threat Intelligence Service without obligation? As a trial member, you can participate in the community free of charge (NDA-based).
Feel free to contact us at: mail@complion.de
or reach out directly: jan.philipsen@complion.de