Published on 7 November 2025
How artificial intelligence is redefining both cyberattacks and defence strategies
Introduction: when AI becomes a weapon and a shield
The emergence of artificial intelligence in cybersecurity is not an evolution, it is a disruption. What were once manual processes, static rules, or heuristic models are now autonomous decisions, continuous learning, and the large-scale generation of offensive and defensive actions.
AI does not only empower defenders. It also empowers attackers.
Adversaries have begun using generative models to automate campaigns, create polymorphic malware, and produce increasingly convincing deepfakes. At the same time, security professionals are integrating AI to detect anomalies, anticipate threats, and respond within seconds.
We are now operating on a new digital battlefield, where the most sophisticated algorithm and the most context-aware data determine the winner. This article analyses how AI is deployed on all fronts: the offensive, the defensive, and the introspective (the security of the model itself).
1. AI-powered attacks: from perfect impersonation to malware designed by LLMs
Phishing 5.0: more credible, more personalised and more automated
Phishing is no longer a crude threat. With models such as GPT-4, Claude and Gemini, attackers can generate thousands of personalised, contextually relevant and error-free messages.
What has changed with AI?
- Emails tailored to actual company roles: “Hi Carlos, as you mentioned yesterday in the meeting about quarter-end closing…”
- Use of internal style, tone, and language extracted from leaked emails, LinkedIn profiles, or corporate websites.
- Automated responses within long email threads (thread hijacking), maintaining accurate semantic context.
Real example: In 2024, a campaign targeting financial controllers was uncovered, where emails included real internal meeting details leaked from compromised accounts, generating a click-through rate of 68%.
Real-time deepfakes: when your CEO is not who they say they are
Deepfakes have evolved from political satire to advanced social engineering tools. Voices cloned with open-source tools (such as ElevenLabs, Resemble.ai) and real-time video generation allow attackers to:
- Simulate calls from a CEO authorising urgent transfers.
- Manipulate voice authentication systems.
- Join corporate video calls using synchronised digital faces in real time.
Documented case: In 2023, an Asian company transferred $25 million following a fake video call with its CEO and CFO, both convincingly cloned via deepfakes trained on public videos and leaked internal emails.
AI-generated malware: offensives without coders
While LLMs aren't designed to generate malware, attackers are adapting them or using reverse engineering to:
- Create keyloggers, data exfiltration payloads, evasive scripts
- Automate the generation of new malware variants with every execution (AI-driven polymorphism).
- Chain attack phases: reconnaissance, exploitation, persistence, and exfiltration.
Some actors employ locally fine-tuned LLMs trained on offensive datasets (from GitHub, CTFs, etc.), combining generative AI with dynamic evasion techniques to bypass EDRs.
Offensive AI no longer requires technical expertise. Just data and intent.
2. AI-powered defence: from EDR to autonomous SOC
Behaviour-based detection: from pattern to context
Modern EDR/XDR solutions (such as CrowdStrike, Microsoft Defender for Endpoint, SentinelOne) have integrated AI to analyse:
- Unusual scripting activity.
- Changes in user behaviour (UEBA).
- Subtle patterns of internal reconnaissance before an attack.
Practical example: An employee downloads a .zip file with macros from a legitimate site. The model detects that the process running it opens PowerShell, connects to a new domain and exfiltrates encrypted data: an anomalous combination that triggers an alert.
Autonomous SOCs: from analyst to digital copilot
SOCs are evolving towards SoCless or AI-driven models, with tools such as:
- Autonomous orchestration: automatic device isolation upon detecting certain behaviours.
- Alert summary and triage using LLMs: automatic prioritisation based on impact, user, and context.
- Conversational co-pilots to interact with logs (“Which endpoints connected to the C2 between 3 and 6 AM?”).
Companies like Microsoft, Palo Alto Networks, and IBM are already integrating AI copilots into their SIEM/XDR platforms to accelerate forensic analysis, ticket writing, and report generation.
Emerging real-world cases
- Predictive models in active honeypots, which anticipate attacker movements.
- Global campaign analysis using sample clustering and graph ML.
- Proactive incident simulation based on real internal data (cybersecurity as a digital twin).
AI does not replace the analyst. It empowers them, frees them, and turns them into strategists.
3. Risks in the lifecycle of AI models themselves
Specific threats against LLMs and AI systems
AI systems, especially LLMs, represent a new attack surface. If not properly secured, they can become a source of leakage, manipulation, or entry point for attackers.
Main risks:
- Prompt injection: the model is manipulated via hidden instructions in user inputs that alter its behaviour.
- Data leakage: exposure of confidential information from training data or previous interactions.
- Model inversion & extraction: recovery of training examples or functional cloning of the model.
- Adversarial inputs: inputs specially designed to cause erroneous inferences (such as manipulated images to bypass biometric systems).
Key controls to secure AI models
- Input/output validation: limit dangerous instructions, apply semantic filters.
- Model red teaming: controlled offensive testing to discover weaknesses (as done by OpenAI or Anthropic).
- Logging and traceability: record all interactions and decisions for auditing purposes.
- Encryption and segmentation: protection of the model, embeddings and metadata.
The new approach is Machine Learning Security Operations (MLSecOps): secure the model lifecycle with policies, controls, and abuse detection.
Conclusion: AI is not the future of cybersecurity. It is the present
We have crossed the threshold. In 2025, AI is no longer a promise of innovation but a condition for digital survival.
The organisations that will lead the market won’t be those spending the most on tools, but those who:
- Train their teams in artificial intelligence applied to cybersecurity.
- Align their defensive architectures with distributed, ethical, and auditable AI.
- Secure their own models as critical assets, not as black boxes.
Because the adversary already has AI. The question is: do you?






