
AI-Enhanced Cybercrime: A New Era of Threats

AI is fueling hyper-realistic phishing, deepfakes, and self-modifying malware, driving massive fraud losses. Zero-trust and AI-driven defenses are now critical.
Marouane Sabri
Defendis Co-founder

In late 2025, experts sounded the alarm: cybercriminals are supercharging traditional scams with artificial intelligence. Generative AI tools can automate and perfect fraud and phishing like never before. Realistic synthetic voices and videos make deepfakes indistinguishable from reality, and large language models (LLMs) can churn out tailored phishing messages instantly. This fusion of AI and crime is creating a “perfect storm”: scalable, adaptive attacks that exploit trust and technology simultaneously. The result is a dramatic rise in sophisticated fraud, executive impersonation, and even self-modifying malware, all powered by AI.

Attribution

These AI-driven threats are carried out by both organized criminals and nation-state actors. According to Google Threat Intelligence, groups from North Korea, Iran, and elsewhere have already embedded AI (such as Google’s Gemini model) into their operations. However, much of the surge comes from faceless cybercrime syndicates. For example, the Kansas hostage-call scam was traced to overseas fraudsters who used AI voice cloners and caller-ID spoofing to remain anonymous. In one documented case, a Hong Kong finance worker was duped by what appeared to be her company’s CFO on a video call, later found to be AI-generated. In short, attackers of all stripes, from advanced persistent threat (APT) teams to street-level scammers seeking quick gains, are weaponizing AI.

Technical Walk-through
AI in Malware

Traditionally, malware was static: its behavior was fixed at build time. Now a new class of adaptive malware is emerging. Google’s Threat Intelligence Group (GTIG) has identified families like PROMPTFLUX and PROMPTSTEAL that invoke an LLM during execution. For instance, PROMPTFLUX (a VBScript dropper) periodically sends a crafted prompt to Google’s Gemini API requesting “obfuscated VBScript code designed for antivirus evasion.” The AI returns fresh code snippets that PROMPTFLUX then executes; in effect, the malware reprograms itself on the fly. This dynamic obfuscation makes detection much harder: antivirus signatures quickly become outdated as the code continually evolves.
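A defender-side illustration: because families like PROMPTFLUX must reach a generative-AI endpoint at runtime to regenerate themselves, one cheap hunting heuristic is to flag scripts that reference such endpoints at all. Below is a minimal, hypothetical triage scanner; the host list, file extensions, and scan root are assumptions for illustration, not published GTIG indicators.

```python
import re
from pathlib import Path

# Hypothetical indicator list: public generative-AI API hosts that an
# ordinary VBScript/PowerShell/JS dropper has no business contacting.
LLM_API_HOSTS = [
    r"generativelanguage\.googleapis\.com",  # Gemini API
    r"api\.openai\.com",
    r"api\.anthropic\.com",
]
PATTERN = re.compile("|".join(LLM_API_HOSTS), re.IGNORECASE)

def scan_scripts(root: str, extensions=(".vbs", ".ps1", ".js")) -> list[Path]:
    """Return script files that embed a generative-AI API endpoint."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in extensions:
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            if PATTERN.search(text):
                hits.append(path)
    return hits

if __name__ == "__main__":
    for hit in scan_scripts(r"C:\Users"):
        print(f"[!] LLM endpoint referenced in script: {hit}")
```

Since attackers can proxy or obfuscate their API traffic, this is a triage signal to feed an EDR or SIEM pipeline, not a verdict.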

AI in Phishing and Social Engineering

In phishing campaigns, AI enriches every step. Attackers first harvest open-source intelligence: public LinkedIn profiles, social media posts, even voicemail greetings. They feed this data into an LLM to build rich target profiles. Next, the AI generates customized lures: hyper-personalized emails, SMS, or instant messages that mimic the victim’s style and reference current projects or local details. For example, one recent scam hid an AI-generated payload inside an SVG image. Microsoft found that criminals had an LLM write complex code that traditional scanners couldn’t flag. The phishing email mimicked a common file-sharing notice and even hid target addresses in BCC to evade filters.
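To make the SVG trick concrete: an SVG file is XML, so it can legally carry script elements and event-handler attributes that execute when the image is rendered in a browser. A minimal mail-gateway check, assuming inbound business SVGs rarely need scripting, might look like the sketch below; this is illustrative, not Microsoft’s actual detection logic.

```python
import xml.etree.ElementTree as ET

SUSPICIOUS_TAGS = {"script", "foreignObject"}  # executable/embedding elements

def svg_is_suspicious(svg_bytes: bytes) -> bool:
    """Flag SVG content carrying scripts, event handlers, or javascript:
    URLs, a trick abused to smuggle generated payloads past scanners."""
    try:
        root = ET.fromstring(svg_bytes)
    except ET.ParseError:
        return True  # malformed markup is itself worth a closer look
    for elem in root.iter():
        tag = elem.tag.rsplit("}", 1)[-1]  # strip the XML namespace prefix
        if tag in SUSPICIOUS_TAGS:
            return True
        for attr, value in elem.attrib.items():
            if attr.lower().startswith("on") or "javascript:" in value.lower():
                return True
    return False
```

Note that xml.etree offers no protection against maliciously crafted XML (e.g. entity-expansion bombs), so a production filter would pair this logic with a hardened parser.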

Once the victim engages, the AI can also carry on conversations. In some schemes, chatbots maintain adaptive conversation loops: if the target shows hesitation, the AI tweaks its script or tone in real-time to persuade the victim to act. Finally, AI tools help deploy the payload, whether it’s an info-stealer, ransomware, or credential-harvesting site at scale.

AI in Deepfake Fraud

Another dimension is deepfakes. Criminals scrape audio and video snippets of real people (e.g. executives or relatives) from social media or public talks. Advanced voice-cloning models then produce a synthetic but chillingly accurate voice imitation. In one Kansas case, scammers cloned a mother’s voice and called her daughter, falsely claiming the mother was being held hostage (axios.com). They paired the call with caller-ID spoofing and immediate demands for money via gift cards or crypto. The realism of the AI voice caused the victim to panic and even involve police. Similar tactics are used in Business Email Compromise (BEC) scams: AI-generated video calls and synthesized voices “confirm” fraudulent transfer requests, fooling even wary employees.

Effectiveness

AI makes these attacks far more effective and convincing. Ordinary phishing emails once stood out through poor grammar and generic language, flaws that AI now erases. The attack content is hyper-personalized: it can match company jargon, personal interests, or recent activities. Reports show that trained defenders who are warned about AI phishing still fall victim nearly half the time. The quantity of attacks is also exploding: one security firm reported a 1,200% global surge in phishing since the rise of generative AI. In short, attackers can launch vast campaigns of realistic-sounding lures at a fraction of the cost and time required before.

Real-world incidents illustrate the stakes. The Kansas hostage call forced a high-risk police stop; officers later found the entire scenario had been fabricated using an AI voice clone. In corporate fraud, deepfake CEO scams have already netted tens of millions of dollars. As one security report notes, a deepfake call or video can overcome normal verifications, leading employees to obey fake instructions. In summary, AI removes many of the friction points that used to alert victims: attackers now sound and behave like real people, making fraud much harder to detect.

Impact

The financial and social impact is mounting. A recent analysis of 163 deepfake incidents found over $200 million lost in just Q1 2025. Individual big-loss cases have hit tens of millions (e.g. $25M stolen from an engineering firm via an AI “CFO” deepfake). Every day, victims worldwide, from elderly individuals to corporate boards, receive scam calls and emails they assume are real. Losses aren’t limited to money; trust is eroded. People become wary of legitimate video conferences and office communications.

Certain sectors are especially hard-hit. Corporate finance and procurement teams are prime targets of AI-enhanced BEC. Governments and defense contractors worry about disinformation deepfakes as well. One report warns that educational institutions and women are becoming disproportionate victims of synthetic-media attacks. Regardless of sector, no one is immune: any organization that relies on digital communication can be deceived by AI.

Society is also impacted. The Axios report warns that police and emergency services are being sucked into AI hoaxes. For example, real-world response teams have recently chased phantom hostage situations based on cloned voices. These misdirections not only waste public resources but also put responders at risk.

Overall, experts characterize this wave as a systemic shift. Trend Micro predicts 2026 will see scams reach unprecedented AI-driven scale, with automation and emotion-engineered tactics dominating fraud. As one AI fraud specialist notes, AI “changes the economics” of fraud: it lets scammers automate personalized attacks that previously required human effort.

Blind Spots

Defenders face key blind spots. First, visibility: AI tools enable attacks that slip past traditional filters. Deepfake audio and images can bypass human scrutiny; a clip that sounds like a CEO or looks like an executive on video might be assumed authentic. An analyst notes that “seeing is believing” no longer holds true. As one security brief advises, businesses can no longer take media at face value; they must adopt a “zero trust” mindset for all identities.

Second, volume and speed: the sheer number of AI-generated lures swamps analysts. A campaign can spin up thousands of variants instantly, making manual review impossible. Many organizations lack specific AI-threat detection tools, so these novel attacks can slip through until it’s too late. (A variant-clustering triage sketch follows this list.)

Third, social engineering gaps: training and policy haven’t caught up. Standard protocols (checking emails for spelling errors, or verifying odd requests by phone) often fail. For instance, if a finance manager receives what sounds like their CFO’s voice on a call, instinct tells them it’s real. Even security-aware employees can be manipulated by multi-stage attacks: an email with subtle cues, followed by a call or video confirming the demand.

Finally, many regions still lack clear regulations around synthetic media. Without legal deterrents or shared intelligence, it’s easier for attackers to innovate unchecked.
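On the volume problem above: a common triage tactic is to cluster near-duplicate lures so analysts review campaign families instead of thousands of individual messages. Here is a minimal sketch using character shingles and Jaccard similarity; the shingle size and threshold are illustrative defaults, not tuned values.

```python
def shingles(text: str, k: int = 5) -> set[str]:
    """Character k-shingles of a whitespace-normalized message body."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(messages: list[str], threshold: float = 0.6) -> list[list[int]]:
    """Greedy single-pass clustering: each message joins the first
    cluster whose representative it resembles, else starts a new one."""
    reps: list[set[str]] = []
    clusters: list[list[int]] = []
    for idx, msg in enumerate(messages):
        sig = shingles(msg)
        for rep, members in zip(reps, clusters):
            if jaccard(sig, rep) >= threshold:
                members.append(idx)
                break
        else:
            reps.append(sig)
            clusters.append([idx])
    return clusters
```

At real mail volumes this pairwise scheme would be replaced by MinHash/LSH indexing, but the idea is the same: collapse AI-generated variants of one campaign into a single item for review.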

Future Implications

This trend is expected to intensify. According to Google’s GTIG, AI-driven threat techniques “are only now starting to appear, but [they] expect them to increase in the future.” Likewise, Trend Micro’s 2026 outlook warns of multi-channel, emotionally driven scams in which AI-cloned voices and chatbots guide victims across platforms. In practice, that could mean cybercriminals combining social networks, messaging apps, and email in long fraud chains powered by AI chatbots and deepfakes.

As AI models become more powerful and accessible, barriers to entry fall. GTIG notes a maturing underground market for “AI-as-a-service” tools: phishing kits and malware generators powered by LLMs are already advertised on hacking forums. This democratization means even low-skilled scammers can mount sophisticated attacks. We may also see AI applied to new attack stages (e.g. automating vulnerability discovery or large-scale spear-phishing).

At the societal level, unchecked deepfake proliferation could erode trust in media and institutions. Lawmakers are beginning to act (for instance, recent “deepfake laws” and AI regulations in the EU and US), but the technology is moving faster. The coming years will likely see an arms race between AI offense and defense.

Defensive Outlook

Despite the risks, defenders have options. Detection tools are evolving: AI-based security systems can spot anomalous language patterns or synthetic voices. For example, advanced email filters can flag writing that deviates from an executive’s normal style. Voice and video authentication systems (multi-factor with biometrics, watermarks for media) can help verify legitimacy. Cisco and others are developing AI models to recognize deepfake artifacts, and some banks now require secondary voice-print verification for high-value transfers.
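As a toy version of the style-deviation idea: build a character n-gram profile from an executive’s known-authentic messages and flag incoming mail whose profile diverges sharply. This is a simplified sketch, not any vendor’s product; the similarity threshold is a made-up starting point that would need tuning on real mail history.

```python
from collections import Counter
from math import sqrt

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Character n-gram frequency profile of a message body."""
    text = " ".join(text.lower().split())
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p: Counter, q: Counter) -> float:
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def looks_off_style(baseline_msgs: list[str], incoming: str,
                    threshold: float = 0.35) -> bool:
    """True if the incoming message diverges from the sender's baseline."""
    baseline = Counter()
    for msg in baseline_msgs:
        baseline.update(ngram_profile(msg))
    return cosine(baseline, ngram_profile(incoming)) < threshold
```

A flag from such a check would not block mail on its own; it would raise the message’s risk score and trigger the out-of-band verification steps described below.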

Policies and training must also adapt. Organizations are advised to train employees on AI threats: simulate phishing with AI-generated lures, and emphasize strict protocols even when requests seem familiar. As Arctic Wolf recommends, verify any wire transfer request through independent channels and use phishing-resistant authentication (like FIDO2 MFA) for sensitive accounts. For critical communications (e.g. financial approvals), in-person or video checks with known faces can thwart many deepfake ploys.
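The independent-channel rule can even be encoded in payment tooling. In the hedged sketch below, a high-value transfer is releasable only when at least one confirmation arrived over a channel other than the one carrying the request, so a single deepfaked call or spoofed email is never sufficient on its own. The threshold and channel names are illustrative.

```python
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 10_000  # illustrative cutoff, set by policy

@dataclass
class TransferRequest:
    amount: float
    requested_via: str              # e.g. "email", "video_call"
    confirmations: set[str] = field(default_factory=set)

def may_execute(req: TransferRequest) -> bool:
    """Require at least one confirmation from a channel independent
    of the one that delivered the request itself."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return True
    return bool(req.confirmations - {req.requested_via})

# A deepfaked video call cannot confirm itself:
req = TransferRequest(amount=250_000, requested_via="video_call")
req.confirmations.add("video_call")
assert not may_execute(req)
req.confirmations.add("callback_to_known_number")  # independent channel
assert may_execute(req)
```

The point is architectural: verification must travel over a path the attacker does not control, which is exactly the property deepfakes defeat when a single channel is trusted end to end.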

On a higher level, a “zero trust” mindset is crucial. That means always verifying identities, scrutinizing requests from any channel, and assuming that AI can mimic any colleague or loved one. Companies should also collaborate with law enforcement and industry groups: sharing intelligence on new AI-scam tactics can speed up defenses.

In short, while AI-powered threats are sophisticated, leveraging AI in security (AI-driven email filtering, voice authentication, anomaly detection) can help close the gap. Awareness and agile response strategies will be key. As experts note, the technology that empowers these attacks must also be used to protect against them.

About the author
Marouane Sabri is the Co-Founder and Chief Marketing Officer of Defendis. With a background in communications and digital strategy, he leads Defendis’ market expansion.
