The rise of AI as a weapon for cybercriminals
Social engineering is as old as humanity itself. But in 2026, this attack vector has undergone a transformation that concerns even the most seasoned security professionals. Artificial intelligence now enables criminals to craft attacks so sophisticated that traditional detection methods — and human judgement — fall short.
The numbers speak for themselves: recent research from the European Union Agency for Cybersecurity (ENISA) shows that AI-assisted phishing campaigns increased by 340% over the past year. What makes these attacks particularly dangerous is not just their volume, but their quality. The era of phishing emails riddled with spelling mistakes and awkward phrasing is over.
Deepfakes: when your eyes and ears deceive you
One of the most alarming developments is the deployment of deepfake technology in targeted attacks. In January 2026, a British financial firm was defrauded of €23 million after criminals staged a fully convincing video conference. The CFO believed they were speaking with the CEO and two directors — all three turned out to be AI-generated deepfakes.
This is no longer science fiction. The technology for producing convincing video and audio deepfakes is widely available and demands ever less technical expertise. A criminal needs only a few minutes of recorded speech, often harvested from podcasts, webinars, or social media, to produce a convincing voice clone.
The anatomy of a deepfake attack
- Reconnaissance: The attacker gathers public information about the target and their organisation via LinkedIn, corporate websites, and social media
- Material collection: Speech fragments, videos, and photographs of the person to be impersonated are collected
- Synthesis: AI tools generate realistic audio or video of the impersonated individual
- Execution: The target receives a seemingly legitimate phone call or video meeting
- Manipulation: Under time pressure and trusting the 'familiar' conversation partner, the victim is compelled to act
AI-generated phishing: personal, contextual, and convincing
Beyond deepfakes, AI has fundamentally transformed the classic phishing email. Large language models enable attackers to:
- Write personalised emails that reference actual projects, colleagues, and recent events
- Mimic the impersonated sender's writing style precisely, including informal expressions and habitual turns of phrase
- Create contextual urgency that aligns with the organisation's current activities
- Execute multilingual campaigns without language errors, removing a warning sign that defenders have traditionally relied on
A security researcher recently demonstrated how an AI agent could set up a complete spear-phishing campaign within 15 minutes: it analysed LinkedIn profiles, identified recent company events, and generated tailored emails for 50 employees — each unique and contextually relevant.
Why traditional training no longer suffices
Most security awareness programmes are built on pattern recognition: suspicious senders, language errors, unusual requests. But AI-powered attacks eliminate precisely these recognition points.
Research from Stanford University reveals that even trained employees fail to distinguish AI-generated phishing emails from legitimate communication in 68% of cases, a markedly worse result than for traditional phishing, where the detection rate was around 45%.
The problem runs deeper than technology alone. Humans are wired to defer to authority, to respond to urgency, and to trust familiar faces. AI attacks exploit these psychological triggers with surgical precision.
The behavioural science approach: looking beyond the surface
When patterns are no longer reliable, organisations must look deeper. Not at what employees see or hear, but at how they make decisions under pressure.
Behavioural analysis, such as the Q-Method applied by Nexus-7, offers a fundamentally different approach. Rather than giving everyone the same training, it builds individual behavioural profiles that map:
- Risk appetite: How quickly does someone make decisions under pressure?
- Authority sensitivity: To what extent does someone follow instructions from superiors without verification?
- Pattern recognition: How well does someone detect subtle anomalies in communication?
- Social susceptibility: How vulnerable is someone to peer pressure or urgency arguments?
These profiles enable personalised security measures. An employee who scores high on authority sensitivity receives additional verification steps for requests from management. Someone who reacts impulsively to urgency gets mandatory waiting periods for financial transactions.
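To make the idea concrete, here is a minimal Python sketch of how a behavioural profile might drive control selection. The four fields mirror the dimensions above, but the `BehaviouralProfile` class, the thresholds, and the control names are illustrative assumptions, not a description of Nexus-7's actual Q-Method.

```python
from dataclasses import dataclass

@dataclass
class BehaviouralProfile:
    """Illustrative profile; all scores normalised to 0.0-1.0 (hypothetical scale)."""
    risk_appetite: float          # how quickly decisions are made under pressure
    authority_sensitivity: float  # tendency to follow superiors without verification
    pattern_recognition: float    # ability to spot subtle anomalies in communication
    social_susceptibility: float  # vulnerability to peer pressure and urgency

def select_controls(profile: BehaviouralProfile) -> list[str]:
    """Map profile scores to personalised measures. Thresholds are examples only."""
    controls = []
    if profile.authority_sensitivity > 0.7:
        # Extra verification for any request that invokes management authority
        controls.append("callback_verification_for_management_requests")
    if profile.risk_appetite > 0.7 or profile.social_susceptibility > 0.7:
        # Impulsive responders get a cooling-off delay on financial actions
        controls.append("mandatory_waiting_period_for_transactions")
    if profile.pattern_recognition < 0.4:
        # Weak anomaly spotters receive more frequent simulation training
        controls.append("monthly_phishing_simulations")
    return controls

# Example: an employee who defers strongly to authority
profile = BehaviouralProfile(0.5, 0.85, 0.6, 0.4)
print(select_controls(profile))  # ['callback_verification_for_management_requests']
```

In a real deployment, rules like these would feed the training platform and transaction systems rather than print to a console.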
Practical measures for organisations
Technological defence
- Implement verification protocols for all financial transactions and sensitive requests, regardless of communication channel
- Use digital signatures for internal communications on critical matters (a minimal signing sketch follows this list)
- Deploy AI detection tools capable of identifying deepfakes and AI-generated text
- Monitor communication patterns using anomaly detection
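As an illustration of the digital-signature point, the sketch below uses the open-source Python `cryptography` package with Ed25519 keys to sign and verify an internal payment request. Key distribution, rotation, and the message format are assumptions omitted for brevity; treat this as a sketch of the idea, not a production design.

```python
# Minimal signing sketch using the `cryptography` package (pip install cryptography).
# Key management (distribution, rotation, revocation) is deliberately omitted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key lives in an HSM or OS keystore, never in code.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Approve payment of EUR 48,000 to supplier X, ref 2026-0142"
signature = private_key.sign(message)

# The recipient verifies against the sender's published public key before acting.
try:
    public_key.verify(signature, message)
    print("Signature valid: request may proceed to the normal approval flow")
except InvalidSignature:
    print("Signature invalid: treat as a potential impersonation attempt")
```

A signed request defeats the deepfake scenario above: however convincing the voice or face, the attacker cannot produce a valid signature without the sender's private key.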
Human defence
- Conduct behavioural assessments to map individual vulnerabilities
- Tailor training per profile instead of generic awareness programmes
- Create a verification culture where double-checking requests is normal
- Practise with realistic simulations that replicate AI-generated attacks
- Establish clear escalation procedures for unusual requests
Organisational defence
- Limit public information about internal processes, hierarchy, and projects
- Implement the four-eyes principle for all transactions above a threshold amount (sketched after this list)
- Conduct regular red team exercises using AI tools
- Evaluate suppliers on their resilience against AI-powered attacks
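The four-eyes principle blunts deepfake fraud because a cloned voice can pressure one person, but rarely two independent approvers at once. Below is a deliberately simple Python sketch; the threshold value, function name, and data model are hypothetical.

```python
# Illustrative four-eyes check; the threshold and data model are assumptions.
FOUR_EYES_THRESHOLD_EUR = 10_000  # hypothetical policy threshold

def transaction_approved(amount_eur: float, approvers: set[str], initiator: str) -> bool:
    """Require two distinct approvers, neither the initiator, above the threshold."""
    independent = approvers - {initiator}
    if amount_eur > FOUR_EYES_THRESHOLD_EUR:
        return len(independent) >= 2
    return len(independent) >= 1

# A deepfaked 'CEO' call can pressure one person, not two independent approvers.
print(transaction_approved(23_000_000, {"cfo"}, initiator="cfo"))                       # False
print(transaction_approved(23_000_000, {"cfo", "controller", "auditor"}, initiator="cfo"))  # True
```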
The future: an arms race that requires human insight
The battle between attackers and defenders is an arms race, and AI has fundamentally changed the playing field. Technology alone cannot win this fight — the attacks have become too human for that.
Organisations seeking to strengthen their resilience must invest in understanding human behaviour. Not as an afterthought, but as a core component of their security strategy. Behavioural analysis provides the insight needed not only to train employees, but to make them genuinely resilient, even against attacks they cannot recognise as such.
The question is no longer whether your organisation will face AI-powered social engineering, but when. And the answer to that threat begins with understanding the people behind the screens — your own employees.