Top cybersecurity risks and trends
Cybersecurity threats are becoming increasingly sophisticated, driven by technological advancements and the innovative tactics of cybercriminals. As organizations strive to protect themselves against threat actors, understanding the latest cybersecurity risks and trends is crucial.
AI-powered threat automation
Integrating AI into cybercrime has significantly transformed the speed and scale at which threat actors can scan for vulnerabilities. According to Fortinet, a cybersecurity and network security company, cybercriminals launched 36,000 automated scans per second in 2024, a 16.7 percent increase from the previous year. This capability allows attackers to rapidly scan thousands of websites for vulnerabilities, execute widespread credential attacks, and craft sophisticated malware at machine speed, resulting in a surge in stolen credentials.
Advanced Ransomware-as-a-Service (RaaS)
Ransomware-as-a-Service has emerged as a cybercrime business model in which ransomware operators sell ransomware code or malware to other hackers, who then use the code to launch their own cyberattacks. Combined with AI-enhanced social engineering, operators can create ransomware that evades detection and optimizes extortion strategies.
AI-driven supply chain attacks
Supply chain attacks are becoming increasingly mainstream as adversaries exploit vulnerabilities through third-party relationships, often using phishing, smishing (a cyberattack that uses deceptive text messages to trick recipients into revealing personal or financial information), or vishing (a voice phishing scam in which criminals use phone calls to deceive people into giving up personal information). These sophisticated attacks target service providers, vendors, and help desk systems to infiltrate larger networks. A notable example involved a large airline organization, where AI-powered vishing compromised an offshore call center, impacting up to six million customers. As AI technology advances, cybercriminals are making their attacks harder to detect by creating AI agents with local accents and dialects and by reducing the lag time between responses.
Zero trust goes mainstream with AI
Driven by the rise of remote work, organizations are rapidly adopting zero-trust architectures and mindsets as part of robust security measures. The zero-trust approach operates on the principle of “never trust, always verify” and requires continuous authentication, least-privilege access, and micro-segmentation to counter both internal and external threats.
Deepfake and AI-based social engineering
Threat actors can now use GenAI to create hyper-realistic deepfakes that blur the lines between real and fraudulent communication. They can easily replicate a person’s voice or video likeness, producing a limitless set of exploitative scenarios across social media, email, political campaigns, and news videos.
Real-world scenarios of AI attacks and defenses mapped to internal controls
As AI continues to evolve, both new threats and innovative defenses are emerging. Organizations must stay ahead of these developments to protect their data and maintain a secure environment. Below are a variety of real-world scenarios that demonstrate how AI is utilized by both threat actors and defenders, highlighting the relevant controls.
Scenario #1: Crafted phishing emails versus enhanced detection
Threat actors can now leverage large language models (LLMs) to craft phishing emails that look highly convincing and are designed to bypass traditional spam filters. LLMs can understand and generate human-like text by learning patterns in data. Previously, malicious emails were easier to flag because attackers often lacked mastery of English (or the predominant language of the region they were targeting). Now, LLMs produce phishing emails that are fluently written and grammatically correct.
Organizations can counteract this by deploying AI-powered tools that detect anomalies in metadata, sender patterns, and tone to flag malicious emails. Using AI algorithms, organizations can assign each email a score representing the statistical probability that it is fraudulent, then tune the flagging threshold to strike the right balance between risk management and opportunity preservation.
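As a minimal sketch of the scoring idea described above, the snippet below combines a few illustrative metadata signals into a single risk score and compares it against a tunable threshold. The feature names, weights, and threshold are hypothetical, not a production model; a real system would learn these weights from labeled email data.

```python
# Minimal sketch of an email risk score (hypothetical features and weights).
def phishing_score(email):
    """Return a 0-1 risk score from simple metadata signals."""
    score = 0.0
    if email.get("sender_domain_age_days", 3650) < 30:
        score += 0.4  # newly registered sender domains are a common phishing signal
    if email.get("reply_to") and email["reply_to"] != email.get("sender"):
        score += 0.3  # Reply-To header that does not match the sender
    if email.get("urgency_keywords", 0) > 2:
        score += 0.3  # "act now", "verify immediately", and similar phrasing
    return min(score, 1.0)

RISK_THRESHOLD = 0.6  # tunable: lower = stricter filtering, more false positives

suspicious = {"sender": "it@support-desk.co", "reply_to": "x@evil.io",
              "sender_domain_age_days": 5, "urgency_keywords": 3}
print(phishing_score(suspicious) >= RISK_THRESHOLD)  # True -> flag for review
```

Exposing the threshold as a parameter is what lets an organization choose its own balance point between blocking risky mail and preserving legitimate business communication.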
Relevant controls:
- National Institute of Standards and Technology (NIST) 800-53: SI-4 (System Monitoring) and AT-2 (Security Awareness Training)
- International Organization for Standardization (ISO) 27001: A.12.4.1 (Event Logging) and A.7.2.2 (Security Awareness)
Scenario #2: Enhanced malware versus malware analysis
AI has upped the stakes when it comes to malware. Threat actors use AI to generate error-free code that can bypass signature-based detection. Defenders respond by employing AI for behavior analysis and anomaly detection across endpoints and networks. It has become a continuous cycle of action and counteraction between threat actors and defenders.
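The behavior-analysis side of this cycle often reduces to comparing a live metric against a per-endpoint baseline. The sketch below uses a simple z-score over a hypothetical metric (outbound connections per minute); real endpoint detection products use far richer models, so this only illustrates the underlying statistical idea.

```python
import statistics

def anomaly_score(baseline, observed):
    """Z-score of an observed metric against an endpoint's behavioral baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline) or 1e-9  # avoid division by zero
    return abs(observed - mean) / stdev

# Illustrative baseline: outbound connections per minute for one endpoint.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

print(anomaly_score(baseline, 14))   # small: within normal variation
print(anomaly_score(baseline, 400))  # far above 3 standard deviations: investigate
```

A common convention is to alert when the score exceeds roughly three standard deviations, then let analysts or a second-stage model triage the alert.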
Relevant controls:
- NIST 800-53: SI-3 (Malicious Code Protection), IR-4 (Incident Handling)
- ISO 27001: A.12.2.1 (Malware Protection)
Scenario #3: Deepfakes for fraud versus deepfake detection
Threat actors can now produce deepfake voice or video impersonations that are highly realistic and difficult to detect. Even more troubling is the fact that only a small sample of a person’s voice or video is needed to create a deepfake that captures all the intricacies of their pronunciation and voice pattern.
Organizations can combat this by utilizing AI models to analyze images, identifying minor inconsistencies in audio or video metadata and artifacts in speech or facial movements. These might include shadows in the wrong place, skin texture that appears unnaturally smooth or inconsistent, or misaligned eyes.
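Conceptually, such a detector checks extracted features against ranges typical of genuine footage. The sketch below assumes an upstream model has already extracted features like blink rate and audio/video sync offset; the feature names and thresholds are purely illustrative.

```python
# Hypothetical sketch: flag a video when extracted features fall outside
# ranges typical of genuine footage. Thresholds are illustrative only.
def deepfake_flags(features):
    """Return a list of human-readable reasons the clip looks synthetic."""
    flags = []
    if not 0.1 <= features.get("blinks_per_second", 0.25) <= 0.75:
        flags.append("abnormal blink rate")  # early deepfakes often under-blink
    if abs(features.get("av_sync_offset_ms", 0)) > 45:
        flags.append("audio/video desynchronization")
    if features.get("face_boundary_artifacts", 0.0) > 0.5:
        flags.append("blending artifacts at face boundary")
    return flags

sample = {"blinks_per_second": 0.02, "av_sync_offset_ms": 120}
print(deepfake_flags(sample))
```

Returning named reasons rather than a bare yes/no helps reviewers verify the model's judgment against the visual cues listed above.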
Relevant controls:
- NIST 800-53: IA-2 (Identification and Authentication), AC-2 (Account Management)
- ISO 27001: A.9.2.1 (User Registration), A.13.2.1 (Information Transfer Policies)
Scenario #4: Password cracking versus authentication protection
Threat actors use AI’s predictive capabilities to accelerate brute-force attacks by predicting common patterns in user-generated passwords, enabling them to generate millions of attempts.
Defenders may be well-positioned to use AI to identify risky authentication behaviors and enforce adaptive multi-factor authentication measures, such as disabling an account after three failed login attempts. Other signals, such as geolocation and login hours, help AI models recognize when someone attempts to log in from an unfamiliar location or outside a user's typical hours.
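The adaptive policy described above can be sketched as a small decision function: lock after three failures, and step up to an extra MFA factor on unusual geolocation or off-hours access. The signal names and the working-hours window are illustrative assumptions.

```python
MAX_ATTEMPTS = 3           # disable the account after three failed logins
WORK_HOURS = range(7, 20)  # illustrative "normal hours" window (07:00-19:59)

def assess_login(user, country, usual_country, hour, failures):
    """Decide the next action for a login attempt given simple risk signals."""
    if failures.get(user, 0) >= MAX_ATTEMPTS:
        return "lock"      # too many failed attempts: disable the account
    if country != usual_country:
        return "step_up"   # unfamiliar geolocation: require an extra MFA factor
    if hour not in WORK_HOURS:
        return "step_up"   # off-hours access: require an extra MFA factor
    return "allow"

failures = {"alice": 3}
print(assess_login("alice", "US", "US", 10, failures))  # lock
print(assess_login("bob", "RO", "US", 10, failures))    # step_up
print(assess_login("bob", "US", "US", 10, failures))    # allow
```

In practice an AI model would learn each user's normal location and hours instead of hard-coding them, but the allow / step-up / lock decision structure is the same.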
Relevant controls:
- NIST 800-53: IA-5 (Authenticator Management), AC-7 (Unsuccessful Login Attempts)
- ISO 27001: A.9.4.2 (Secure Log-On Procedures)
Scenario #5: Social engineering versus user behavior monitoring
Threat actors use LLMs to generate personalized social engineering messages based on scraped personal data, building detailed profiles from a variety of social media activities.
Defenders rely on behavior analytics to detect out-of-pattern requests and social engineering attempts that deviate from a user's normal cadence, such as a user suddenly responding to Facebook friend requests when that is not typical behavior for them.
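At its simplest, "deviating from normal cadence" means comparing recent actions against how often the user has performed them historically. The sketch below flags actions that are rare or absent in a user's history; the action names and the 1% rarity threshold are illustrative assumptions.

```python
from collections import Counter

def unusual_actions(history, recent, rarity=0.01):
    """Flag recent actions the user has rarely or never performed before."""
    baseline = Counter(history)
    total = len(history) or 1
    flagged = []
    for action in recent:
        if baseline[action] / total < rarity and action not in flagged:
            flagged.append(action)  # rare relative to this user's own baseline
    return flagged

# Illustrative per-user history of logged actions.
history = ["email_read"] * 80 + ["doc_edit"] * 20
recent = ["email_read", "friend_accept", "friend_accept", "wire_transfer"]
print(unusual_actions(history, recent))  # ['friend_accept', 'wire_transfer']
```

Because the baseline is per-user, the same action can be routine for one employee and a red flag for another, which is exactly the property behavior analytics relies on.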
Relevant controls:
- NIST 800-53: AU-6 (Audit Review), CA-7 (Continuous Monitoring)
- ISO 27001: A.12.4.3 (Administrator and Operator Logs), A.6.1.2 (Segregation of Duties)
Scenario #6: Zero-day discovery versus assisted threat hunting
AI enables threat actors to rapidly scan codebases and binaries for zero-day vulnerabilities, with the potential to package the exploits for sale. Depending on the software package, an exploit could be valued at millions of dollars.
Defenders use AI to sift through millions of lines of code, proactively identifying vulnerabilities and suspicious patterns indicative of unknown exploits. Defenders should also conduct annual penetration testing and semi-annual vulnerability scanning.
Relevant controls:
- NIST 800-53: RA-5 (Vulnerability Scanning), SI-2 (Flaw Remediation)
- ISO 27001: A.12.6.1 (Technical Vulnerability Management)