Understanding AI and Its Role in Cybersecurity

Artificial intelligence (AI) is the hot topic of the moment, with the latest advancements in AI technology receiving widespread media attention. Among the industries poised to benefit the most, or potentially be impacted the hardest, is cybersecurity. Contrary to popular belief, some professionals in the field have been using AI in various forms for over two decades. Now, however, cloud computing power and advanced algorithms are combining to further strengthen digital defenses and foster a new generation of AI-based applications, which could revolutionize how organizations protect against, detect, and respond to cyber threats.

On the flip side, as these capabilities become more affordable and accessible, malicious actors will also exploit the technology in social engineering, disinformation, scams, and more. A recent white paper from ESET delves into the risks and opportunities this presents for cyber defenders.

A Brief History of AI in Cybersecurity

Large language models (LLMs) might be why boardrooms around the world are abuzz with talk of AI, but the technology has been utilized in various ways for years. For instance, ESET first deployed AI over a quarter of a century ago, using neural networks to improve the detection of macro viruses. Since then, AI has been used in numerous ways, such as:

  • Differentiating between malicious and clean code samples (a classifier sketch follows this list)
  • Rapidly triaging, sorting, and labeling malware samples en masse
  • Implementing a cloud reputation system with continuous learning via training data
  • Enhancing endpoint protection with high detection and low false-positive rates, using neural networks, decision trees, and other algorithms
  • Developing a robust cloud sandbox tool powered by multilayered machine learning detection, unpacking, scanning, experimental detection, and deep behavior analysis
  • Innovating new cloud and endpoint protection powered by transformer AI models
  • Deploying XDR that prioritizes threats by correlating, triaging, and grouping large volumes of events
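
To make the first item above more concrete, here is a minimal sketch of sample classification using a decision tree, one of the algorithm families mentioned. It is not ESET's implementation: the static features (entropy, import count, packer flag) and the tiny synthetic dataset are invented for illustration, and a real detection engine would train on millions of labeled samples with far richer features.

```python
# Minimal sketch: classifying samples as malicious vs. clean from static
# features. The features and data below are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row is one sample: [file entropy, number of imports, is packed (0/1)].
# Real pipelines extract hundreds of such static features per binary.
X_train = [
    [7.9, 3, 1],    # high entropy, few imports, packed: typical of malware
    [7.5, 5, 1],
    [4.2, 120, 0],  # lower entropy, many imports, unpacked: typical of clean code
    [5.0, 80, 0],
]
y_train = [1, 1, 0, 0]  # 1 = malicious, 0 = clean

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Score a new, unseen sample.
unknown = [[7.7, 4, 1]]
print(clf.predict(unknown))        # -> [1], flagged as likely malicious
print(clf.predict_proba(unknown))  # class probabilities per sample
```

The class probabilities matter in practice: the threshold at which a sample is flagged is how a vendor trades detection rate against the low false-positive rates mentioned above.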

Why Security Teams Use AI

Today, security teams need effective AI-based tools more than ever, driven by three main factors:

  1. Skill Shortages: There is a global shortfall of approximately four million cybersecurity professionals, including 348,000 in Europe and 522,000 in North America. Organizations need tools to enhance the productivity of their current staff and provide guidance on threat analysis and remediation in the absence of senior colleagues. Unlike human teams, AI can operate 24/7/365 and identify patterns that might be missed by security professionals.
  2. Agile, Determined, and Well-Resourced Threat Actors: As cybersecurity teams struggle with recruitment, their adversaries are growing stronger. The cybercrime economy could cost the world up to $10.5 trillion annually by 2025. Aspiring threat actors can access everything they need to launch attacks through ready-made “as-a-service” offerings and toolkits. Third-party brokers offer access to pre-breached organizations, and nation-state actors like North Korea and China are increasingly involved in financially motivated attacks. In places like Russia, the government is suspected of encouraging anti-West hacktivism.
  3. Higher Stakes: As digital investments have increased, so has the reliance on IT systems for sustainable growth and competitive advantage. Network defenders understand that failing to prevent or quickly detect and contain cyber threats could result in significant financial and reputational damage. Today, the average cost of a data breach is $4.45 million, but a severe ransomware breach involving service disruption and data theft could be much higher. Financial institutions alone have lost an estimated $32 billion in downtime due to service disruption since 2018.

How AI Is Used by Security Teams

It’s no surprise that organizations are leveraging AI to improve their ability to prevent, detect, and respond to cyber threats. Here’s how they’re doing it:

  • Correlating indicators in large volumes of data to identify attacks (a minimal sketch follows this list)
  • Identifying malicious code through abnormal activity
  • Assisting threat analysts by interpreting complex information and prioritizing alerts
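
As an illustration of the first bullet, the sketch below groups events by the indicator they share, so the same suspicious IP appearing on several hosts surfaces as one correlated incident rather than scattered noise. The event schema and the three-host threshold are assumptions made for this example, not a description of any particular product.

```python
# Minimal sketch: correlating events that share an indicator across hosts.
# The event schema and the three-host threshold are illustrative assumptions.
from collections import defaultdict

events = [
    {"host": "ws-01", "indicator": "185.220.0.1",  "action": "outbound_conn"},
    {"host": "ws-02", "indicator": "185.220.0.1",  "action": "outbound_conn"},
    {"host": "ws-07", "indicator": "185.220.0.1",  "action": "outbound_conn"},
    {"host": "ws-03", "indicator": "a1b2c3d4e5f6", "action": "file_write"},
]

# Group events by the indicator they reference.
by_indicator = defaultdict(list)
for event in events:
    by_indicator[event["indicator"]].append(event)

# An indicator touching several hosts is a candidate incident, not noise.
for indicator, hits in by_indicator.items():
    hosts = {e["host"] for e in hits}
    if len(hosts) >= 3:
        print(f"possible campaign: {indicator} seen on {len(hosts)} hosts: {sorted(hosts)}")
```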

Current and near-future uses of AI include:

  • Threat Intelligence: LLM-powered GenAI assistants simplify complex technical reports, summarizing key points and actionable insights in plain language.
  • AI Assistants: Embedding AI “copilots” in IT systems helps eliminate dangerous misconfigurations that could expose organizations to attacks. This applies to both general IT systems like cloud platforms and security tools like firewalls, which require complex settings.
  • Supercharging SOC Productivity: Security Operations Center (SOC) analysts face immense pressure to quickly detect, respond to, and contain threats. The large attack surface and the volume of tools generating alerts can be overwhelming, causing legitimate threats to go unnoticed while analysts deal with false positives. AI can reduce this burden by contextualizing and prioritizing alerts and potentially resolving minor ones.
  • New Detections: By combining indicators of compromise (IoCs) with publicly available information and threat feeds, AI tools can scan environments for the latest threats (a minimal sketch follows this list).
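
To ground the last item (and the alert prioritization discussed under SOC productivity), here is a hedged sketch of IoC-based scanning: log lines are matched against a small indicator set, and each hit is scored so the most severe matches surface first. The indicators, log format, and severity weights are all invented for this example; production tools consume structured threat feeds (e.g., STIX/TAXII) and far richer context.

```python
# Minimal sketch: matching logs against an IoC set and prioritizing the hits.
# The indicators, logs, and severity weights below are hypothetical examples.

# In practice these would come from commercial or open threat feeds.
iocs = {
    "185.220.0.1": 9,           # known C2 address -> high severity
    "evil-update.example": 7,   # suspicious domain from a recent report
    "a1b2c3d4e5f6": 5,          # file hash shared in a threat feed
}

logs = [
    "2024-05-01T10:00:02 ws-01 dns query evil-update.example",
    "2024-05-01T10:00:05 ws-01 conn 185.220.0.1:443",
    "2024-05-01T10:01:17 ws-04 file hash a1b2c3d4e5f6",
    "2024-05-01T10:02:00 ws-02 conn 10.0.0.5:80",
]

# Collect every log line that references a known indicator.
hits = []
for line in logs:
    for ioc, severity in iocs.items():
        if ioc in line:
            hits.append((severity, ioc, line))

# Highest-severity matches first, so analysts triage the worst alerts first.
for severity, ioc, line in sorted(hits, reverse=True):
    print(f"[sev {severity}] {ioc}: {line}")
```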

How AI Is Used in Cyberattacks

Unfortunately, malicious actors are also eyeing AI. According to the UK’s National Cyber Security Centre (NCSC), AI will “heighten the global ransomware threat” and “almost certainly increase the volume and impact of cyber-attacks in the next two years.” Current uses of AI by threat actors include:

  • Social Engineering: GenAI helps craft highly convincing and grammatically correct phishing campaigns at scale.
  • Business Email Compromise (BEC) and Other Scams: GenAI can mimic the writing style of specific individuals or corporate personas, tricking victims into transferring money or providing sensitive information. Deepfake audio and video could also be used for similar purposes. The FBI has issued multiple warnings about this.
  • Disinformation: GenAI facilitates content creation for influence operations. Reports indicate that Russia is already using such tactics, which could be widely replicated if successful.

The Limits of AI

Despite its potential, AI has clear limitations. High false-positive rates and the need for high-quality training sets can limit its effectiveness, and human oversight is often required to verify output and train the models. AI is not a silver bullet for either attackers or defenders.

In the future, AI tools could face off against each other, with one side trying to breach defenses and deceive employees while the other looks for signs of malicious AI activity. This marks the beginning of a new arms race in cybersecurity.
