
AI Supercharges Attacks in Cybercrime's New 'Fifth Wave'
Written by

Kevin Poireault
Reporter, Infosecurity Magazine
AI is powering a “fifth wave” in the evolution of cybercrime, offering inexpensive, ready-made malicious tools that enable sophisticated attacks, according to Group-IB.
In its latest report, published on January 20, the Singapore-based cybersecurity firm divided the history of cybercrime into four phases, from the opportunistic malware and viruses of the 1990s and early 2000s to the “ecosystem and supply chain attacks” wave that marked the 2010s and 2020s.
Since 2022, the firm argued, cybercrime has entered a fifth wave, which it called “weaponized AI.”
This new era is marked by the rapid adoption of AI and generative AI (GenAI) tools by attackers that “turn human skills into scalable services” and make cybercrime “cheaper, faster and more scalable,” Dmitry Volkov, Group-IB’s CEO, said in the report’s foreword.
Black Market Deepfake Kits Fuel Cybercrime for as Little as $5
One of the most striking misuses of GenAI, Group-IB argued, is the creation of synthetic content impersonating real people.
This content can be used to lure trusting people into executing tasks, or to bypass authentication processes and know your customer (KYC) systems to gain access to devices, steal money or steal data.
For instance, Group-IB analysts found “synthetic identity kits” offering AI video actors, cloned voices and even biometric datasets for as little as $5 and deepfake-as-a-service offerings for subscriptions starting at $10 per month.
Additionally, the analysts recorded a spike in discussions about such AI-powered tools for criminal purposes in dark web forums over the past three years, from an average of below 50,000 messages on this topic from 2020 to 2022 to approximately 300,000 messages every year since 2023.
During the report’s launch event in London, Anton Ushakov, Group-IB’s cybercrime investigation unit leader, said these ready-made kits have become “a commodity” on dark web marketplaces.
“What is really interesting is that not only pre-recorded deepfakes are popular, but also cheap tools enabling live deepfake schemes,” he added.
“Of course, these will not convince 90% of people, but if it works in 5% to 10% of cases, it can be lucrative enough at this stage,” he noted.
Read more: World Economic Forum: Deepfake Face-Swapping Tools Are Creating Critical Security Risks
Phishing Kits Enter the Agentic AI Era
Another major use of AI by cybercriminals highlighted in the Group-IB report is for phishing.
Phishing kits are now listed at prices ranging “from as little as a Netflix subscription to $200 per month, making them accessible and affordable to groups big and small,” said the report.
Ushakov’s team found that the new malicious AI capabilities are now used beyond simply assisting the attacker in the production of believable phishing emails.
“AI is not only changing how phishing is generated, handled, hosted and run, but the way it’s distributed,” Ushakov said.
He explained that, previously, criminals using phishing-as-a-service (PhaaS) kits would still need to configure everything themselves, including SMTP servers and victim lists, and run those campaigns.
“Now, with the help of AI, and especially the open-weight models that are accessible, criminals are building the tools to automate these tasks,” Ushakov said.
“They embed the models into the tools that are helping to scale and automate phishing campaigns in terms of the delivery. The models provide them with the list of victims and the sort of narrative they want to use for the lures,” he continued.
Group-IB found one service that “agentizes the phishing campaigns.” This tool uses AI agents to develop lures, send phishing emails to victims and return information to the criminals as feedback, allowing them to adapt the campaign over time.
“On the victim’s side, all the malicious emails feel personal and new ones keep being sent out by the phishing kit’s agent,” said Ushakov, who noted that the ‘agentized’ phishing kit appears to still be in a testing and development phase.
Dark LLMs Grow in Sophistication
Finally, Group-IB analysts also found that threat actors are moving past chatbot misuse and are creating proprietary “dark large language models” (LLMs) that are more stable and capable, and have no ethical restrictions.
From early experiments with rudimentary, open-access dark LLMs like WormGPT, these tools have now evolved into custom-built, self-hosted AI models optimized for generating harmful content, including malware, scams and disinformation.
They are often fine-tuned on scam linguistics or malicious code datasets.
These dark LLMs assist in a range of cybercriminal activities.
The analysts identified at least three active vendors offering dark LLMs, with subscriptions ranging from $30 to $200 per month and a customer base exceeding 1,000 users.
One example, called Nytheon AI, is an unrestricted, self-hosted AI chatbot promoted on dark web forums as a fully offline, 80-billion-parameter hybrid LLM hosted over Tor, blending open-source models such as DeepSeek-v3, Mistral and Llama v3 Vision, among others.
In April 2025, Group-IB investigations confirmed the sale of Nytheon AI on Telegram channels through a subscription-based model. Designed to provide uncensored chatbot responses, its advertised use cases include helping to develop malware, penetration testing, vulnerability research, fraud schemes and unfiltered information queries.
The cybersecurity firm validated Nytheon AI’s functionality, technical capabilities and lack of ethical restrictions.
Craig Jones, former Interpol director of cybercrime and independent strategic advisor for Group-IB, argued that, while “AI hasn’t created new motives for cybercriminals,” it has industrialized cybercrime by “dramatically increasing the speed, scale and sophistication with which those motives are pursued.”
“What once required skilled operators and time can now be bought, automated and scaled globally. That shift marks a new era, where speed, volume, and sophisticated impersonation fundamentally change how crime is committed and how hard it is to stop,” he concluded.