Compliance | October 22, 2025

Navigating AI cybersecurity risks: The role of internal controls in a threat-driven landscape

AI is no longer a futuristic threat—it’s the fuel driving today’s most sophisticated cyberattacks. From phishing emails that appear to be written by your CEO, to deepfake voice calls that bypass identity checks, attackers are leveraging artificial intelligence (AI) to scale their operations with unprecedented precision. However, defenders are not without their own AI arsenal.

As AI threats continue to evolve, internal auditors and risk professionals must ensure that their internal control frameworks remain adaptable and robust enough to support a reliable assessment of the organization's preparedness and compliance posture.

Additionally, by learning how to spot AI-driven threats, you can effectively address gaps in your controls, governance, and monitoring capabilities. As AI continues to transform the landscape of cyberthreats and defenses, staying informed and prepared is essential for safeguarding your organization against evolving risks.

Top cybersecurity risks and trends

Cybersecurity threats are becoming increasingly sophisticated, driven by technological advancements and the innovative tactics of cybercriminals. As organizations strive to protect themselves against threat actors, understanding the latest cybersecurity risks and trends is crucial.

AI-powered threat automation

Integrating AI into cybercrime has dramatically increased the speed and scale at which threat actors can scan for vulnerabilities. According to Fortinet, a cybersecurity and network security company, cybercriminals launched 36,000 automated scans per second in 2024, a 16.7 percent increase from the previous year. This capability allows attackers to rapidly probe thousands of websites, execute widespread credential attacks, and craft sophisticated malware at machine speed, resulting in a surge in stolen credentials.

Advanced Ransomware-as-a-Service (RaaS)

Ransomware-as-a-Service has emerged as a cybercrime business model, where ransomware operators sell ransomware code or malware to other hackers, who then use the code to initiate their own cyberattacks. Combined with AI-enhanced social engineering, operators can create ransomware that evades detection and optimizes extortion strategies.

AI-driven supply chain attacks

Supply chain attacks are becoming increasingly mainstream as adversaries exploit vulnerabilities through third-party relationships, often using phishing, smishing (a type of cyberattack that uses deceptive text messages to trick victims into revealing personal or financial information), or vishing (a type of voice phishing scam where criminals use phone calls to deceive people into giving up personal information). These sophisticated attacks target service providers, vendors, and help desk systems to infiltrate larger networks. A notable example involved a large airline organization, where AI-powered vishing compromised an offshore call center, impacting up to six million customers. As AI technology becomes more advanced, cybercriminals are making their attacks more difficult to detect by creating AI agents with local accents and dialects and reducing the lag time between responses.

Zero trust goes mainstream with AI

Driven by the rise of remote work, organizations are rapidly adopting zero-trust architectures and mindsets to strengthen their security posture. The zero-trust approach operates on the principle of “never trust, always verify” and requires continuous authentication, least-privilege access, and micro-segmentation to counter internal and external threats.

Deepfake and AI-based social engineering

Threat actors can now use GenAI to create hyper-realistic deepfakes that blur the lines between real and fraudulent communication. They can easily replicate a person’s voice or video likeness, producing a limitless set of exploitative scenarios across social media, email, political campaigns, and news videos.

Real-world scenarios of AI attacks and defenses mapped to internal controls

As AI continues to evolve, both new threats and innovative defenses are emerging. Organizations must stay ahead of these developments to protect their data and maintain a secure environment. Below are a variety of real-world scenarios that demonstrate how AI is utilized by both threat actors and defenders, highlighting the relevant controls.

Scenario #1: Crafted phishing emails versus enhanced detection

Threat actors can now leverage large language models (LLMs) to craft phishing emails that look highly convincing and are designed to bypass traditional spam filters. LLMs can understand and generate human-like text by learning patterns in data. Previously, malicious emails were easier to flag because attackers often lacked mastery of English (or the predominant language of the region they were targeting). Now, LLMs ensure phishing emails are flawlessly written and grammatically correct.

Organizations can counteract this by deploying AI-powered tools that detect anomalies in metadata, sender patterns, and tone to flag malicious emails. Using AI algorithms, organizations can compute a score representing the statistical probability that an email is fraudulent, and then set the threshold at which action is taken, enabling them to find the right balance between risk management and opportunity preservation.
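As a rough illustration of how such scoring can work, the sketch below combines a handful of boolean email signals into a probability using a logistic function and compares it against a quarantine threshold. The signal names, weights, and threshold are hypothetical assumptions for illustration; in practice they would come from a model trained on the organization's own mail data.

```python
# Minimal sketch: combining email signals into a phishing risk score.
# Feature names, weights, and the threshold are illustrative assumptions,
# not a specific vendor's model.
import math

# Hypothetical weights a trained model might assign to each signal.
WEIGHTS = {
    "display_name_mismatch": 2.1,   # display name doesn't match sender domain
    "first_time_sender": 1.4,       # no prior correspondence with this address
    "urgent_payment_language": 1.8, # "wire immediately", "gift cards", etc.
    "lookalike_domain": 2.5,        # e.g., "examp1e.com" vs. "example.com"
    "reply_to_differs": 1.2,        # Reply-To points somewhere else
}
BIAS = -4.0              # baseline log-odds for a benign email
BLOCK_THRESHOLD = 0.80   # tune to balance risk against blocking legitimate mail

def phishing_probability(signals: dict[str, bool]) -> float:
    """Convert boolean signals into a probability via a logistic function."""
    score = BIAS + sum(WEIGHTS[name] for name, present in signals.items() if present)
    return 1.0 / (1.0 + math.exp(-score))

email_signals = {
    "display_name_mismatch": True,
    "first_time_sender": True,
    "urgent_payment_language": True,
    "lookalike_domain": False,
    "reply_to_differs": True,
}

p = phishing_probability(email_signals)
action = "quarantine" if p >= BLOCK_THRESHOLD else "deliver with banner"
print(f"phishing probability: {p:.2f} -> {action}")
```

Adjusting BLOCK_THRESHOLD is where the organization expresses its risk appetite: a lower threshold quarantines more aggressively at the cost of delaying legitimate mail.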

Relevant controls:

  • National Institute of Standards and Technology (NIST) 800-53: SI-4 (System Monitoring) and AT-2 (Security Awareness Training)
  • International Organization for Standardization (ISO) 27001: A.12.4.1 (Event Logging) and A.7.2.2 (Security Awareness)

Scenario #2: Enhanced malware versus malware analysis

AI has upped the stakes when it comes to malware. Threat actors use AI to generate error-free code that can bypass signature-based detection. Defenders respond by employing AI for behavior analysis and anomaly detection across endpoints and networks. It has become a continuous cycle of action and counteraction between threat actors and defenders.
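As a minimal sketch of what behavior-based detection can look like, the example below runs an isolation forest (an unsupervised anomaly detection technique) over simulated endpoint telemetry. The three features and the contamination rate are illustrative assumptions; real deployments ingest far richer telemetry.

```python
# Minimal sketch: unsupervised anomaly detection over endpoint telemetry.
# The feature set (process spawn rate, outbound connections, bytes written)
# is an assumption for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated baseline behavior for ~1,000 endpoint observations.
normal = rng.normal(loc=[20, 5, 50], scale=[5, 2, 10], size=(1000, 3))
# A few observations that look like rapid process spawning and heavy egress.
suspicious = rng.normal(loc=[200, 80, 900], scale=[20, 10, 50], size=(5, 3))

X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(X)

labels = model.predict(X)          # -1 = anomaly, 1 = normal
flagged = np.where(labels == -1)[0]
print(f"flagged {len(flagged)} observations for analyst review: {flagged}")
```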

Relevant controls:

  • NIST 800-53: SI-3 (Malicious Code Protection), IR-4 (Incident Handling)
  • ISO 27001: A.12.2.1 (Malware Protection)

Scenario #3: Deepfakes for fraud versus deepfake detection

Threat actors can now produce deepfake voice or video impersonations that are highly realistic and difficult to detect. Even more troubling is the fact that only a small sample of a person’s voice or video is needed to create a deepfake that captures all the intricacies of their pronunciation and voice pattern.

Organizations can combat this by using AI models to analyze audio and video, identifying minor inconsistencies in metadata and artifacts in speech or facial movements. These might include shadows in the wrong place, skin texture that looks unnaturally tight or uneven, or misaligned eyes.
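A full deepfake detector analyzes pixels and audio with trained models, but even simple metadata checks can surface files that deserve closer review. The sketch below applies a few illustrative heuristics to hypothetical metadata fields; it is not a production detector.

```python
# Minimal sketch: flagging metadata inconsistencies that often accompany
# manipulated media. The fields and rules are illustrative heuristics only.
from datetime import datetime

def metadata_flags(meta: dict) -> list[str]:
    flags = []
    created = datetime.fromisoformat(meta["created"])
    modified = datetime.fromisoformat(meta["modified"])
    if modified < created:
        flags.append("modified timestamp precedes creation timestamp")
    if meta.get("encoder", "").lower() in {"unknown", ""}:
        flags.append("missing or stripped encoder information")
    if meta.get("frame_rate_variance", 0.0) > 0.5:
        flags.append("unstable frame rate, possible re-encoding or splicing")
    return flags

sample = {
    "created": "2025-03-01T10:15:00",
    "modified": "2025-02-28T09:00:00",
    "encoder": "unknown",
    "frame_rate_variance": 0.8,
}
for flag in metadata_flags(sample):
    print("review:", flag)
```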

Relevant controls:

  • NIST 800-53: IA-2 (Identification and Authentication), AC-2 (Account Management)
  • ISO 27001: A.9.2.1 (User Registration), A.13.2.1 (Information Transfer Policies)

Scenario #4: Password cracking versus authentication protection

Threat actors use AI’s predictive capabilities to accelerate brute-force attacks by predicting common patterns in user-generated passwords, enabling them to generate millions of attempts.

Defenders may be well-positioned to use AI to identify risky authentication behaviors and enforce adaptive multi-factor authentication measures, such as disabling an account after three failed login attempts. Other signals, such as geolocation and login times, help AI models recognize when someone is trying to access an account from an unusual location or outside the hours when that person typically logs in.
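The sketch below shows what such adaptive logic might look like in simplified form: lock after repeated failures, and require step-up authentication when geolocation or time-of-day signals deviate from the user's baseline. The signals, weights, and thresholds are assumptions for illustration; a real system would score many more factors.

```python
# Minimal sketch: adaptive authentication decisions based on simple risk signals.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    failed_attempts_last_hour: int
    country: str
    hour_of_day: int          # 0-23, local time of the attempt

USUAL_COUNTRY = "US"
USUAL_HOURS = range(7, 20)    # the user normally logs in between 07:00 and 19:59

def decide(attempt: LoginAttempt) -> str:
    if attempt.failed_attempts_last_hour >= 3:
        return "lock account and alert security operations"
    risk = 0
    if attempt.country != USUAL_COUNTRY:
        risk += 2                              # unfamiliar geolocation
    if attempt.hour_of_day not in USUAL_HOURS:
        risk += 1                              # outside the user's normal hours
    if risk >= 2:
        return "require step-up multi-factor authentication"
    return "allow"

print(decide(LoginAttempt(failed_attempts_last_hour=1, country="RO", hour_of_day=3)))
```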

Relevant controls:

  • NIST 800-53: IA-5 (Authenticator Management), AC-7 (Unsuccessful Login Attempts)
  • ISO 27001: A.9.4.2 (Secure Log-On Procedures)

Scenario #5: Social engineering versus user behavior monitoring

Threat actors use LLMs to generate personalized social engineering messages based on scraped personal data, building detailed profiles from a variety of social media activities.

Defenders rely on behavior analytics to identify out-of-pattern requests and social engineering attempts that deviate from a user’s normal cadence, such as responding to Facebook friend requests when that is not typical behavior for the user.
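A minimal way to express "deviates from a user's normal cadence" is to compare today's activity against that user's own baseline. The sketch below uses a simple z-score on a hypothetical daily activity count; commercial user and entity behavior analytics (UEBA) tools model many behaviors jointly rather than a single metric.

```python
# Minimal sketch: flagging out-of-pattern user activity with a z-score
# against the user's own baseline. Activity type and threshold are illustrative.
import statistics

# Hypothetical daily counts of a given action (e.g., external file shares)
# for one user over the past 30 days.
baseline = [0, 1, 0, 0, 2, 1, 0, 0, 1, 0, 0, 1, 2, 0, 0,
            1, 0, 0, 0, 1, 0, 2, 0, 0, 1, 0, 0, 1, 0, 0]
today = 14

mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline) or 1.0   # avoid division by zero
z = (today - mean) / stdev

if z > 3:
    print(f"z-score {z:.1f}: activity deviates from this user's normal cadence; open a review")
else:
    print(f"z-score {z:.1f}: within normal range")
```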

Relevant controls:

  • NIST 800-53: AU-6 (Audit Review), CA-7 (Continuous Monitoring)
  • ISO 27001: A.12.4.3 (Administrator and Operator Logs), A.6.1.2 (Segregation of Duties)

Scenario #6: Zero-day discovery versus assisted threat hunting

AI enables threat actors to rapidly scan codebases and binaries for zero-day vulnerabilities, with the potential to package the exploits for sale. Depending on the software targeted, a single exploit can be valued at millions of dollars.

Defenders can use AI to sift through millions of lines of code to find vulnerabilities and proactively identify suspicious patterns indicative of unknown exploits. Defenders should also conduct annual penetration testing and semiannual vulnerability scanning.
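AI-assisted code review learns vulnerability patterns from large code corpora, but the overall workflow resembles the toy scan below: walk the codebase, flag suspicious constructs, and queue findings for review. The regular expressions here are illustrative stand-ins for a learned model.

```python
# Minimal sketch: scanning source files for patterns that commonly indicate
# vulnerabilities. The rules below are illustrative stand-ins only.
import re
from pathlib import Path

RISKY_PATTERNS = {
    r"\beval\s*\(": "dynamic code execution",
    r"subprocess\.(call|run|Popen)\([^)]*shell\s*=\s*True": "shell injection risk",
    r"password\s*=\s*[\"'][^\"']+[\"']": "hard-coded credential",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
}

def scan(path: Path) -> list[tuple[int, str]]:
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, description))
    return findings

for file in Path(".").rglob("*.py"):
    for lineno, description in scan(file):
        print(f"{file}:{lineno}: {description}")
```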

Relevant controls:

  • NIST 800-53: RA-5 (Vulnerability Scanning), SI-2 (Flaw Remediation)
  • ISO 27001: A.12.6.1 (Technical Vulnerability Management)

The future of cybersecurity with AI

AI is accelerating the scale and sophistication of cyberattacks, providing threat actors with powerful tools to exploit vulnerabilities, while defenders are often left reactive, implementing their governance after the fact. To be successful in protecting against cyberattacks, organizations must enhance their speed, skill, and context, maintaining agility and robust governance structures. It's important to remember that in the cybersecurity world, threat actors only need to succeed once, while defenders must be vigilant and correct every time.

Commonly asked questions about AI cybersecurity risks and internal controls

What are the risks of AI?

Two of the top AI-driven cybersecurity risks around internal audit controls are:

  1. AI-powered evasion of internal controls:
    • Generative AI and machine learning can be used by threat actors to craft highly realistic and adaptive attack vectors that bypass traditional internal controls. These include:
      • Deepfake communications (e.g., voice or video from executives requesting urgent fund transfers).
      • Context-aware phishing emails trained on public and leaked company data.
      • Synthetic identities and fake documents that defeat onboarding or verification processes.
  2. Compromise or manipulation of AI-enabled internal control systems:
    • As more internal control systems (e.g., fraud detection, audit analytics, risk scoring engines) incorporate AI, the integrity of the AI models becomes a new attack surface. Risks include:
      • Data poisoning: Inserting false data into training sets to mislead the model.
      • Model drift: Gradual degradation of accuracy due to changes in input behavior or the threat landscape (a minimal drift-monitoring sketch follows this list).
      • Shadow AI: Employees using unauthorized AI tools (e.g., ChatGPT or AutoML platforms) to automate decisions without governance.
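As an example of how an audit or risk team might monitor for model drift in an AI-enabled control, the sketch below compares the distribution of model scores observed in production against the distribution seen at validation using the population stability index (PSI). The bin edges, simulated score distributions, and the 0.25 alert threshold are illustrative assumptions.

```python
# Minimal sketch: monitoring model drift with the population stability index (PSI).
# A PSI above roughly 0.25 is a common rule of thumb for significant drift.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.linspace(0.0, 1.0, bins + 1)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
validation_scores = rng.beta(2, 5, size=5000)   # score distribution at sign-off
production_scores = rng.beta(4, 3, size=5000)   # distribution observed this month

value = psi(validation_scores, production_scores)
print(f"PSI = {value:.2f}" + (" -> investigate drift / retrain" if value > 0.25 else " -> stable"))
```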

What is the main challenge of using AI in cybersecurity?

From an internal audit perspective, the lack of transparency and explainability (AI "Black Box" Problem) is the primary challenge. Internal auditors are responsible for evaluating the design and operating effectiveness of controls. When AI is used in cybersecurity for threat detection, risk scoring, access decisions, or anomaly detection, it often operates through complex machine learning models that:

  • Lack clear, auditable logic (unlike rule-based controls).
  • Produce outputs without an interpretable rationale.
  • Evolve dynamically based on new data (model drift).

This makes it difficult for auditors to:

  • Determine whether decisions made by the AI are accurate, fair, or biased.
  • Verify whether AI behavior aligns with policy or compliance frameworks.
  • Ensure that controls embedded in AI systems are both effective and testable.

How can AI be used in risk management?

AI can significantly enhance cybersecurity risk management by enabling proactive detection, quantification, and response to threats and vulnerabilities. AI can be applied to the following areas within the cybersecurity risk management lifecycle:

  • Threat and vulnerability identification
  • Risk quantification and prioritization
  • Continuous monitoring and anomaly detection
  • Automated risk response
  • Third-party and supply chain risk monitoring

Can AI be used to combat threats?

AI can be a powerful tool for strengthening and defending internal controls when implemented with those goals in mind. Its ability to process large volumes of data, detect patterns, and continuously learn makes it particularly effective in identifying and mitigating threats that might otherwise go unnoticed. 

What are the most common AI cybersecurity risks facing internal controls?

The most common AI-related cybersecurity risks facing internal controls are a result of both malicious uses of AI and poor governance over AI-enabled systems. These risks directly impact the integrity, reliability, and auditability of internal control environments.

How can AI be used to strengthen phishing detection and response?

Phishing remains the single most exploited vector for initial compromise, and AI has become a game-changer for both detecting and responding to these threats. Here are a few of the areas where it can be applied:

  • AI-enhanced email filtering and detection
  • User behavior and anomaly detection
  • Automated response

What frameworks help map AI threats to internal controls?

While there isn’t a single framework designed exclusively for AI threats yet, several established control and risk frameworks can be combined to map AI-specific risks (bias, model drift, data poisoning, prompt injection, unauthorized model use, etc.) to internal controls in a structured and defensible way. The control frameworks cited in the scenarios above (NIST 800-53 and ISO 27001), paired with the NIST AI Risk Management Framework, are among the most common starting points.

How do deepfakes impact fraud risk and identity verification?

Deepfakes pose a significant and growing threat to fraud risk and identity verification processes, especially in cybersecurity, financial services, and internal control environments, by:

  • Impersonating executives or key personnel
  • Bypassing biometric identity verification
  • Eroding trust in video-based processes
  • Enabling Deepfake-as-a-Service for fraud operations

Can AI help auditors detect and respond to zero-day threats?

AI can be a powerful ally for auditors—particularly internal or IT auditors—in detecting and responding to zero-day threats by enhancing visibility, analysis, and automation. Here are a few of the ways:

  • Detection of zero-day threats through anomaly-based AI
  • Accelerating threat investigation and triage
  • Control monitoring and resilience
  • Proactive threat intelligence
  • Automation of audit trail analysis for forensics

What role do audit and risk professionals play in AI governance?

Audit and risk professionals are becoming the guardians of trustworthy, accountable, and well-controlled AI adoption within organizations. Their role is not to build or operate AI systems, but to govern and assure them — ensuring that AI is used ethically, securely, and in alignment with organizational objectives and regulatory expectations.

How can organizations use AI to combat AI risks effectively?

This is what is known as “fighting fire with fire” in the governance and security landscape.

AI introduces new risks (bias, model manipulation, data leakage, misinformation, etc.), but it also provides powerful defensive capabilities when applied intelligently. Organizations can use AI itself to combat AI-driven risks through automated detection, continuous monitoring, adaptive controls, and explainability.

Michael Trpkosh
Digital Security Managing Director, Crowe LLP
Michael Trpkosh is a cybersecurity expert and former Chief Information Security Officer (CISO), currently serving as a Digital Security Managing Director for Crowe LLP, a public accounting and technology firm.