Generative AI is Reshaping Cybersecurity – Threats and Opportunities
Security may be the biggest obstacle to AI adoption in the enterprise. Businesses want to protect their intellectual property, ensure privacy of customer data, and safeguard AI systems from misuse.
Today, we’re sharing our perspective on how GenAI empowers both attackers and defenders. We’ll also highlight how this new technology must be protected for enterprises to adopt GenAI at a meaningful scale.
GenAI Alters the Threat Landscape
The rise of Generative AI (GenAI) has created a tidal wave of innovation and automation, with one-third of enterprises already experimenting with GenAI. As a new technology, GenAI unlocks powerful use cases related to data synthesis and content generation. Pairing these models with natural language interfaces and intelligent agents offers the potential for substantial productivity gains – albeit with a marked increase in risk.
For attackers, AI represents a step-function improvement for reconnaissance, exploitation, and evasion. A malicious actor can utilize AI to generate sophisticated attacks, regardless of their skill level. In addition, we can expect advanced actors to introduce novel methods of attack, such as those that seek to compromise the AI infrastructure itself.
We are already seeing increased activity from a few threat vectors:
- Social Engineering, Disinformation, and Fraud – creating targeted phishing emails (e.g., FraudGPT), convincing deep fakes, and dangerous disinformation campaigns.
- Automated Discovery and Exploitation – deploying intelligent agents to identify, test, and/or compromise targets through a multi-step process.
- Mutating Malware – ever-evolving code and behaviors that are designed to avoid detection from security solutions (e.g., BlackMamba).
- AI Infrastructure Attacks – data poisoning, prompt injections, and model attacks that influence or alter AI outputs, reveal intellectual property, or result in a denial of service.
Improving the Security Stack
For defenders, AI will become the only way to keep up with adversaries in this new landscape. While AI expands the attack surface, it also empowers defenders to proactively detect threats, mitigate vulnerabilities, and automate security tasks. A wider set of individuals will thus be able to contribute to security efforts, helping to close the cyber skills gap.
We foresee significant benefits for security teams in the following areas:
- Application Security and Software Development – companions that generate and validate secure code in real-time, ultimately providing continuous security throughout the SDLC.
- Vulnerability Assessment and Penetration Testing – comprehensive discovery, risk assessment, and suggested resolution for vulnerabilities across the technology stack.
- Automated Security Operations – leveraging AI to detect complex threats, correlate data across multiple sources, visualize the attack chain, and neutralize the threat. This area may see the most benefit from natural language interfaces and the rise of security co-pilots.
- Data Classification and Policy Generation – conducting large-scale data discovery and classification, as well as generating tailored rules, authorization policies, and synthetic data.
Trusted AI – The Most Pressing Opportunity
One of the key barriers to AI adoption today is trust, which requires protection of the models and training data, as well as safe use of the resulting applications and processes. These threats must be addressed before the widespread adoption of GenAI can occur.
For example, JPMorgan Chase “will not roll out generative AI until we can mitigate all of the risks” says Larry Feinsmith, Head of Global Technology Strategy, citing misuse and “having the right cyber capabilities so that the models aren’t poisoned or tampered.”
The White House is working with the major foundational model providers on policies and commitments for secure and trustworthy AI. Yet, this technology is evolving rapidly, across AI providers large and small, and enterprises need to manage risks to their business in real-time.
There are three core components that must be secured for trusted AI, and we are already seeing an initial wave of startups in each of these areas:
- Data – Security practitioners have long been concerned with preventing data leakage, ensuring data privacy, and enforcing access control. What changes with AI is the meteoric rise in accessible technologies such as ChatGPT, the increased importance of training data (internal and external), and the difficulty of authorization policies with LLM usage. Emerging solutions in this space include companies such as Tripleblind, DynamoFL, and Prompt Security.
- Model Development – Analogous to what we see with traditional software development, security is required throughout model development. This includes visibility of the model supply chain, vulnerability assessments and testing, and robustness of the resulting model. Several companies are building in this area, including ProtectAI, CalypsoAI, and Lakera.
- Run-Time – Once a model is in production, there’s a need for controlled behavior, guardrails against abuse, and securing the model from attackers. In addition, enterprises must track down “shadow AI” usage and ensure overall risk governance of AI. Companies solving for these issues include Robust Intelligence, HiddenLayer, CredoAI, and Cranium.
The Early Innings
GenAI technologies will eventually become the first choice for many use cases throughout security, privacy, and risk governance. For now, limitations remain, such as the clear reasoning behind a model’s decision, the repeatability of an output, and the overall integration into a practitioner’s workflow.
March Capital is incredibly excited about the potential for GenAI across detection, analysis, and response automation. If you’re building in these areas, or see the landscape playing out differently, we would love to hear from you.