Compliance | December 03, 2025

Automated fraud and automated attacks: How AI agents are changing cybersecurity

Not long after AI agents became widely available, two major frauds were disclosed within weeks of each other. In one, a person with no technical expertise, working completely alone, used the chatbot Claude to “vibe code” malware, identify targets, and launch an attack that hit 17 different organizations. In the other, more than 320 companies were found to have unknowingly hired North Korean operatives as remote workers after being fooled by AI-generated personas.

We’ve entered an era where complex fraud no longer requires technical skill, and once set up, machines no longer need to wait for instructions. AI agents increasingly make decisions faster, more efficiently, and sometimes more effectively than humans. AI agents now write emails, negotiate contracts, respond to customers, and even run parts of financial systems with minimal oversight.

As with every leap in technology, the same innovation empowering organizations can also be weaponized. The rise of AI agents has brought a new form of cybersecurity risk. Automated fraud and automated attacks happen faster, adapt instantly, and scale far beyond human capacity. With this new technology, fraudsters don’t just hack systems; they train AI agents to perform automated attacks relentlessly until they eventually find a crack in the armor to exploit. The good news is that strong defenses can still be built. In this article, we will explain the basics of AI agents, how fraudsters use them as tools, and how organizations can leverage them to prevent and detect fraud.

Understanding AI agents and their capabilities

AI agents are autonomous digital systems capable of observing, reasoning, and acting. Unlike traditional automation scripts, which follow predefined rules, AI agents can perceive environments, make decisions, and learn from results.
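To make that observe-reason-act loop concrete, here is a minimal sketch in Python of a task-based agent. Every name, field, and threshold in it is hypothetical and for illustration only; real agents wrap large language models and far richer policies.

```python
# A minimal, illustrative agent loop (hypothetical names throughout):
# the agent observes its environment, decides on an action, and learns
# from the outcome -- unlike a script that replays fixed steps.

from dataclasses import dataclass, field


@dataclass
class InvoiceAgent:
    """Toy task-based agent that decides whether to approve invoices."""
    approval_limit: float = 5_000.0
    history: list = field(default_factory=list)

    def observe(self, invoice: dict) -> dict:
        # Perceive: gather the facts the decision depends on.
        return {"amount": invoice["amount"], "vendor": invoice["vendor"]}

    def decide(self, facts: dict) -> str:
        # Reason: apply the current policy rather than a fixed script.
        if facts["amount"] > self.approval_limit:
            return "escalate"
        return "approve"

    def learn(self, facts: dict, action: str, feedback: str) -> None:
        # Adapt: tighten the limit when a human reverses an approval.
        self.history.append((facts, action, feedback))
        if action == "approve" and feedback == "rejected_by_human":
            self.approval_limit *= 0.9


agent = InvoiceAgent()
facts = agent.observe({"amount": 7_200.0, "vendor": "Acme Co"})
print(agent.decide(facts))  # -> "escalate"
```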

There are three broad categories to consider:

  • Task-based agents complete a defined goal, such as approving invoices or responding to tickets.
  • Generative agents create new outputs, such as code, images, or written content.
  • Multi-agent systems coordinate multiple AIs that collaborate and sometimes negotiate or compete.

Autonomy makes AI agents invaluable for businesses that are always looking for ways to be more efficient, but AI agents are also extremely dangerous if used maliciously. A single malicious AI agent can now execute automated attacks across multiple systems simultaneously or perform automated fraud across thousands of transactions without human assistance.

The bright side: AI agents as protectors

Before we look at how things go wrong, it’s important to understand why AI agents have become indispensable. Organizations of all sizes and industries, both public and private, are deploying AI agents. Some example use cases include:

  • Detect anomalies in financial data before fraud occurs
  • Correlate security alerts across systems faster than humans
  • Predict cyber threats by analyzing patterns across millions of data points
  • Automate investigations and reduce analyst fatigue

For example, AI-powered fraud detection agents can perform IT security monitoring and control procedures. Such monitoring could include continuously tracking user behavior and flagging abnormalities such as deviations in login times, unusually large data downloads, or login attempts from unfamiliar locations. In cybersecurity, AI agents can operate continuously to prevent and detect threat actors, as well as to isolate compromised devices before malware spreads.
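As a rough illustration of that kind of monitoring, the sketch below flags the three abnormalities just mentioned. The event format, thresholds, and baseline values are assumptions made for the example, not any particular product’s interface.

```python
# A hedged sketch of user-behavior monitoring. Thresholds, field names,
# and the event format are illustrative assumptions.

from datetime import datetime

TYPICAL_LOGIN_HOURS = range(7, 19)       # assumed 7am-7pm working window
MAX_DOWNLOAD_MB = 500                    # assumed per-session ceiling
KNOWN_LOCATIONS = {"Chicago", "Dallas"}  # locations seen in the baseline


def flag_session(event: dict) -> list[str]:
    """Return the anomaly flags raised by one login/session event."""
    flags = []
    hour = datetime.fromisoformat(event["login_time"]).hour
    if hour not in TYPICAL_LOGIN_HOURS:
        flags.append("login outside typical hours")
    if event["download_mb"] > MAX_DOWNLOAD_MB:
        flags.append("unusually large data download")
    if event["location"] not in KNOWN_LOCATIONS:
        flags.append("login from unfamiliar location")
    return flags


print(flag_session({
    "login_time": "2025-12-03T02:14:00",
    "download_mb": 1_200,
    "location": "Unknown VPN exit",
}))
# -> all three flags fire for this session
```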

AI agents already save organizations billions of dollars annually. Unfortunately, the same speed, autonomy, and adaptability that make them powerful are precisely what make automated fraud and automated attacks so effective when AI falls into the wrong hands.

The dark side: Rise of automated fraud

Automated fraud occurs when criminals use AI agents to perform or scale fraudulent activity without direct human intervention. Instead of sending a single phishing email, an AI agent can send millions, each personalized to the target’s language, tone, and behavior. Even if the most sophisticated controls prevent 90% of phishing attacks, the 10% that slip through account for tens of billions in losses.

Modern AI systems generate synthetic identities, deepfake voices, and forged documents so realistic that even trained professionals struggle to detect them. Some of the most prevalent schemes include:

  • AI-generated phishing: Fraudsters deploy natural-language models to craft convincing emails that evade spam filters.
  • Deepfake CEO scams: Voice-cloning AI mimics executives, instructing employees to wire money or reveal credentials.
  • Synthetic identity fraud: Machine learning combines stolen personal data to create entirely new identities, complete with social media footprints and transaction history.
  • Invoice automation scams: Malicious AI agents alter payment instructions in legitimate invoices, rerouting funds before detection.

Recently, a multinational firm lost over $25 million when employees followed instructions given on a video call with what they believed was their CFO; the caller was actually an AI-generated deepfake. The fraudster’s agent had not only cloned the CFO’s face and voice but also responded intelligently to staff questions in real time. This wasn’t a digital con artist reading from a script; it was an autonomous fraud agent executing automated fraud across multiple communication channels simultaneously.


AI agents behind automated attacks

While traditional cyberattacks rely on manual configuration and execution, automated attacks utilize AI agents that continuously adapt by scanning networks, adjusting tactics, and identifying vulnerabilities on their own. These agents gather information by crawling the internet for vulnerable endpoints, analyzing patch histories, and mapping expected user behaviors. Once a target is found, the AI selects or generates an appropriate attack pattern or exploit, then learns from each attempt, adjusting its approach and improving its success rate over time. It can also use natural-language generation to disguise phishing content or modify malware signatures to evade antivirus tools. Examples of automated attacks include:

  • AI-driven botnets: Modern botnets now use AI to coordinate attacks intelligently, adjusting frequency to bypass detection systems.
  • Adaptive malware: Some strains rewrite their own code based on system defenses.
  • Autonomous social engineering: Chatbots posing as customer service agents lure victims into revealing credentials.

Automated attacks are not only faster, but also smarter, stealthier, and self-improving. The ability to run continuously while learning from failures and adjusting the attack pattern makes malicious AI agents a significant threat that everyone in an assurance role needs to understand.

How AI agents are used to commit fraud

While early AI fraud relied on simple automation, modern fraudsters now deploy custom-trained AI models optimized for deception, including tools for building automated fraud agents on platforms that work like ChatGPT. In underground forums, FraudGPT and WormGPT have appeared as malicious AI tools capable of generating phishing scripts, fake websites, or social engineering templates. These dark AI agents can:

  • Write self-adjusting malware designed to evade standard security tools
  • Craft sophisticated business email compromise messages
  • Simulate financial transactions to bypass fraud detection thresholds
  • Create synthetic identities to pose as a human

Dark-web vendors now rent access to malicious AI models for as little as a few dollars per hour, enabling virtually anyone to utilize advanced technology for fraudulent purposes without any expertise, simply by prompting an AI agent.

AI agents used to detect and prevent fraud

Fortunately, the same technology is being used to fight back. AI-driven fraud detection systems analyze massive volumes of transactions in real time, learn normal patterns, and flag anomalies such as:

  • Sudden spending spikes
  • Unusual IP or geolocation access
  • Repeated login failures
  • Micro-transactions testing card validity

Older security tools such as intrusion detection and data loss prevention systems are good at identifying anomalies, quarantining the problem, and alerting the right people. AI agent-based solutions go further by acting: many automatically freeze transactions, request multi-factor authentication, or escalate suspicious activity for review.
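A simplified sketch of this “detect, then act” pattern appears below: a transaction is scored against the anomaly signals listed above, and the score drives an automated response. The weights, thresholds, and responses are illustrative assumptions, not a vendor’s actual model.

```python
# Illustrative "detect, then act" sketch. Signal weights, thresholds,
# and responses are assumptions for the example.

def score_transaction(txn: dict, profile: dict) -> float:
    """Score one transaction against the customer's normal profile."""
    score = 0.0
    if txn["amount"] > 3 * profile["avg_amount"]:
        score += 0.4                      # sudden spending spike
    if txn["ip_country"] != profile["home_country"]:
        score += 0.3                      # unusual IP / geolocation
    if txn["recent_failed_logins"] >= 3:
        score += 0.2                      # repeated login failures
    if txn["amount"] < 1.00:
        score += 0.3                      # micro-transaction card test
    return min(score, 1.0)


def respond(score: float) -> str:
    # Agent-style response: act on the score, don't just alert.
    if score >= 0.7:
        return "freeze transaction and escalate for review"
    if score >= 0.4:
        return "challenge with multi-factor authentication"
    return "allow"


profile = {"avg_amount": 80.0, "home_country": "US"}
txn = {"amount": 0.50, "ip_country": "RO", "recent_failed_logins": 4}
print(respond(score_transaction(txn, profile)))
# -> "freeze transaction and escalate for review"
```

A real system would learn these weights from historical data rather than hard-coding them; the point of the sketch is the tiered escalation from allow, to challenge, to freeze.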

Newer AI models can now incorporate behavioral analysis to differentiate between human and bot activity. A simple example: a vendor that normally invoices at the end of the month suddenly sends a mid-month invoice with a new bank account. A more complex example is analyzing inbound emails for language, tone, links, attachments, or other contextual signals that do not match the established baseline for how each sender typically writes.
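The vendor-invoice example could be sketched as a simple per-vendor baseline comparison like the one below. The vendor name, field names, baseline values, and masked account numbers are hypothetical.

```python
# Hedged sketch of the vendor-invoice check: compare a new invoice
# against a per-vendor baseline (typical invoice day-of-month and
# known bank account). All values here are illustrative assumptions.

from datetime import date

vendor_baseline = {
    "Acme Co": {"usual_day_after": 25, "bank_account": "****1234"},
}


def review_invoice(vendor: str, invoice_date: date, account: str) -> list[str]:
    """Return review flags for one inbound invoice."""
    base = vendor_baseline.get(vendor)
    if base is None:
        return ["new vendor: route to onboarding checks"]
    flags = []
    if invoice_date.day < base["usual_day_after"]:
        flags.append("invoice arrived earlier in the month than usual")
    if account != base["bank_account"]:
        flags.append("bank account differs from vendor baseline")
    return flags


print(review_invoice("Acme Co", date(2025, 12, 15), "****9876"))
# -> both flags fire: a mid-month invoice with a new bank account
```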

The role of internal audit

As AI agents become integral to business operations, the control environment must evolve to address automated fraud and automated attack risks. Even as AI grows more autonomous, humans remain accountable. AI agents may detect anomalies, but humans must interpret context and consequences.

As internal auditors step into this new field, there is much to learn. For now, remember the three principles of responsible AI oversight:

  1. Transparency: Every AI decision should be explainable.
  2. Accountability: Organizations must own the outcomes of their AI systems.
  3. Proportionality: Not every task should be automated, especially those involving ethical judgment.

Internal auditors play a crucial role in this process, validating AI controls, assessing ethical implications, and ensuring the organization’s AI strategy aligns with its governance and risk appetite. Internal auditors should evaluate AI systems with the same rigor as financial controls. Key controls include:

  • Access controls: Who can deploy or modify AI models?
  • Change controls: Are all changes tested, approved, and deployed appropriately?
  • Audit trails: Are actions traceable and explainable?
  • Bias and transparency: Does the AI make fair, documented decisions?
  • Human-in-the-loop oversight: Are humans still able to override AI decisions?

Start any review by evaluating these basic controls, then expand into the automated detection and control solutions enabled by AI agents. Next, set up walkthroughs with IT Security to understand the organization’s current threat detection tools and automation, such as the Intrusion Detection System (IDS), Data Loss Prevention (DLP), Security Information and Event Management (SIEM), and Security Orchestration, Automation, and Response (SOAR) platforms. Understanding how these systems operate to detect and prevent cyberattacks will guide your options for testing effectiveness.

Winning the AI arms race

The rise of AI agents is both a blessing and a curse. On one hand, autonomous systems promise a future of efficiency, precision, and continuous protection. On the other, they introduce new forms of automated fraud and automated attacks that evolve faster than traditional defenses can react.

Organizations should not avoid AI, but we all need to use it wisely. By embedding strong governance, continuous monitoring, and ethical design, organizations can ensure their AI agents defend rather than deceive. The next generation of cybersecurity will not come from human analysts alone, but from a partnership between ethical AI agents and accountable humans. Ultimately, victory will belong to the side that automates smarter, not faster.
