AI in Cybersecurity: How It's Changing Security

exodata.io
Security |AI & Automation |Azure |Cloud |Cost Optimization |Data & Analytics

Published on: 14 June 2024

The cybersecurity industry spent decades building defenses around signatures — known patterns of known threats. That model started breaking down years ago as attackers moved faster than signature databases could update, and it has now collapsed entirely under the volume of novel threats generated by AI-equipped adversaries. The response has been predictable: defenders are adopting AI too. But the reality of AI in cybersecurity is more nuanced than vendor marketing suggests. Some applications are genuinely transformative. Others are repackaged pattern matching with a machine learning label. And the adversarial side — AI in the hands of attackers — is creating threat categories that did not exist three years ago.

AI-Powered Threat Detection: What Actually Works

The most mature application of AI in cybersecurity is anomaly detection — identifying behavior that deviates from established baselines. This is where AI genuinely outperforms traditional approaches, because the problem is fundamentally about pattern recognition at scale.

Darktrace: Self-Learning AI for Network Behavior

Darktrace pioneered the concept of an “Enterprise Immune System” that learns what normal looks like for every user, device, and network segment in your environment, then flags deviations. Unlike signature-based tools that need prior knowledge of a threat, Darktrace can identify novel attacks based purely on behavioral anomaly.

A practical example: an employee’s workstation starts making lateral DNS queries to internal hosts it has never contacted before, at 2 AM, from an IP that normally goes idle after 6 PM. No malware signature would catch this — but the behavioral deviation is obvious to Darktrace’s models. It can autonomously respond by restricting that device’s network access while alerting the SOC.

Darktrace is particularly effective in environments with complex, heterogeneous networks where defining “normal” manually would be impractical. The tradeoff is tuning time — the system needs two to four weeks of baseline learning, and early deployments generate false positives until the models stabilize.

CrowdStrike Falcon: AI at the Endpoint

CrowdStrike Falcon’s approach is different. Rather than monitoring network traffic, Falcon deploys a lightweight agent on endpoints that uses machine learning models to classify processes in real time. The models are trained on billions of security events from CrowdStrike’s customer base (their Threat Graph), which gives them a significant data advantage.

Falcon’s AI models can identify malicious processes even when the specific malware variant has never been seen before — what CrowdStrike calls “next-gen antivirus.” The models analyze process behavior, file attributes, and execution patterns rather than matching file hashes. According to CrowdStrike’s published testing data, their AI models catch 99% of malware with false positive rates below 1%, though independent testing by organizations like AV-TEST and SE Labs shows somewhat more modest (but still strong) numbers.

Microsoft Sentinel: Cloud-Native SIEM with AI

Microsoft Sentinel combines traditional SIEM capabilities with machine learning-driven analytics. Its “Fusion” engine correlates alerts across multiple data sources — Azure AD sign-in logs, Office 365 activity, firewall logs, endpoint detection — to identify multi-stage attacks that would appear as unrelated events individually.

For example, Fusion might connect an impossible-travel sign-in from Azure AD, a suspicious mailbox forwarding rule in Exchange, and an anomalous file download from SharePoint into a single incident, recognizing the pattern as a compromised account being used for data exfiltration. A human analyst reviewing each alert separately might not connect them for hours.

Automated Incident Response: SOAR Platforms

Detecting threats faster is only half the equation. The other half is responding before the attacker achieves their objective. Security Orchestration, Automation, and Response (SOAR) platforms use AI-assisted playbooks to automate response actions that previously required manual analyst intervention.

How SOAR Works in Practice

Splunk SOAR (formerly Phantom) and Palo Alto XSOAR (formerly Demisto) are the two dominant SOAR platforms. They work by connecting to your security stack via APIs and executing predefined playbooks when specific conditions are met.

A typical automated playbook:

  1. SIEM generates an alert: “Phishing email detected containing a malicious URL.”
  2. SOAR enriches the alert by querying threat intelligence feeds (VirusTotal, AlienVault OTX, internal threat lists) for information about the URL and sender.
  3. If the URL is confirmed malicious, SOAR automatically quarantines the email across all recipient mailboxes.
  4. SOAR checks email logs to identify every user who clicked the link before quarantine.
  5. For users who clicked, SOAR triggers an endpoint scan, forces a password reset, and revokes active sessions.
  6. SOAR generates an incident report and assigns it to a Tier 2 analyst for review.

That entire sequence can execute in under 60 seconds. A human analyst performing the same steps manually would take 30 to 45 minutes — and during that delay, the attacker has time to establish persistence.

The Numbers Behind Automation

IBM’s 2024 Cost of a Data Breach Report found that organizations with fully deployed security AI and automation experienced breaches that cost an average of $1.76 million less than organizations without these capabilities. The average breach cost was $4.88 million globally, making that savings significant. Organizations with AI and automation also identified and contained breaches 108 days faster — 214 days versus 322 days for those without.

Those numbers reflect not just faster detection but faster containment. When a SOAR playbook can isolate a compromised endpoint within seconds of detection, the blast radius of an attack shrinks dramatically.

AI in Vulnerability Management

Traditional vulnerability management involves running periodic scans, generating reports with thousands of CVEs, and hoping the patching team prioritizes correctly. AI is improving this process in two key ways.

Risk-Based Prioritization

Not every critical CVE matters equally to every organization. A critical vulnerability in Apache Struts is irrelevant if you do not run Struts. Tools like Tenable.io and Qualys VMDR use machine learning to score vulnerabilities based on your specific environment — prioritizing beyond raw CVSS scores — considering factors like whether the vulnerable asset is internet-facing, whether exploit code is publicly available, and whether the vulnerability is being actively exploited in the wild.

This approach reduces the actionable vulnerability list from thousands to dozens, letting patching teams focus on the exposures that actually represent risk rather than chasing CVSS scores in a vacuum.

Predictive Vulnerability Analysis

Some AI models attempt to predict which newly disclosed vulnerabilities are most likely to be exploited before exploit code appears. Kenna Security (now part of Cisco) built models that analyze vulnerability characteristics, vendor response patterns, and dark web chatter to predict exploitation likelihood. Their published research shows these models significantly outperform CVSS scores alone at identifying which vulnerabilities will actually be weaponized.

Adversarial AI: The Threats AI Creates

AI is not just a defensive tool. Attackers are using the same technology to create threats that are harder to detect, more convincing, and scalable in ways that were not possible before.

Deepfake Phishing and Voice Cloning

Traditional phishing relies on email and fake websites. AI-generated deepfakes add a new dimension: voice calls and video conferences where the attacker impersonates a known person. In 2024, a finance worker at a Hong Kong multinational transferred $25 million after attending a video conference where every other participant — including the company’s CFO — was a deepfake.

Voice cloning is even more accessible. Services can clone a voice from a few seconds of audio. An attacker who obtains a brief voicemail from a CEO can generate convincing phone calls to the finance department requesting urgent wire transfers. Traditional phishing awareness training does not prepare employees for a phone call that sounds exactly like their boss.

AI-Generated Malware and Polymorphic Code

Large language models can generate functional malware code, and more critically, they can modify existing malware to evade signature-based detection. BlackMamba, a proof-of-concept demonstrated by HYAS researchers, uses a large language model to dynamically generate its payload at runtime, creating a new variant with each execution. Traditional antivirus that relies on file hashes or static signatures has zero chance of detecting malware that literally rewrites itself every time it runs.

Beyond complete malware generation, attackers use AI to automate evasion techniques: obfuscating code, generating realistic decoy traffic, and creating convincing social engineering pretexts at scale. An attacker who previously sent the same phishing email to 10,000 people can now generate 10,000 individually personalized emails, each referencing the target’s actual job title, recent projects, and professional contacts scraped from LinkedIn.

Prompt Injection and LLM Exploitation

As organizations deploy AI assistants and LLM-powered tools internally, a new attack surface emerges: prompt injection. An attacker who can influence the input to an LLM — through a poisoned document, a crafted email, or a manipulated web page — can potentially override the LLM’s instructions and extract sensitive data, trigger unauthorized actions, or compromise the AI system itself.

This is not hypothetical. Researchers have demonstrated prompt injection attacks against AI-powered email assistants that can exfiltrate inbox contents, against customer service chatbots that can be tricked into revealing backend system details, and against code generation tools that can be manipulated into producing vulnerable code.

Human-AI Collaboration in the SOC

The most effective security operations centers are not replacing analysts with AI — they are restructuring around AI as a force multiplier. The model that works looks like this:

AI handles: Alert triage and enrichment, false positive filtering, initial correlation, automated response for known-good playbooks, baseline behavior modeling, and continuous monitoring across all data sources simultaneously.

Humans handle: Novel attack investigation, threat hunting based on intelligence and intuition, playbook design and tuning, strategic security decisions, communication with stakeholders, and judgment calls where context matters more than pattern matching.

The result: A Tier 1 analyst who previously spent 80% of their time on false positive triage can focus on the 20% of alerts that actually require human judgment. SOC teams do not shrink — they become more effective. IBM’s data shows that organizations using AI in their SOC identify breaches faster, contain them faster, and spend less on remediation.

The key mistake organizations make is treating AI as a replacement for security staff rather than as infrastructure that makes existing staff more capable. An AI system with no one to tune its models, investigate its findings, and adapt its playbooks will degrade over time. The human element is not optional — it is what makes AI in cybersecurity actually work.

Preparing Your Security Posture for the AI Era

AI is already embedded in the tools attackers use. The question for every organization is whether their defensive capabilities are keeping pace. That means evaluating whether your current security stack includes AI-powered detection, whether your incident response can execute at machine speed, and whether your team is prepared for AI-enhanced social engineering attacks.

If you are evaluating AI-powered security tools, building out SOAR playbooks, or need to assess your SOC’s readiness for AI-driven threats, Exodata can help. Our security team works with organizations across industries to implement detection and response capabilities that match the current threat environment — not the one from five years ago. Talk to our team about a security assessment.