By: Josh Berry
It’s not news that artificial intelligence (AI) is driving innovation across industries. However, what is emerging, according to Pellera Technologies’ Global CISO Sean Colicchio, is that “AI is introducing whole new attack surfaces that weren’t even on the radar before.”
It’s true. Attackers are now targeting not only conventional software flaws but also the unpredictability of AI-driven tools. AI integration adds complexity to application code by expanding the attack surface, providing new avenues for threat actors to exploit.
In this article, we’ll explore the new application security challenges that AI introduces and outline proactive strategies to help protect your organization.
The Expanding Attack Surface of AI Applications
Internet crime losses exceeded $16 billion last year—a 33% increase in losses from 2023, according to the FBI. While AI is transforming applications, it’s also multiplying the risks. Let’s dive into why this is happening and common flags to look out for.
Why AI Makes Applications More Complex
Integrating an AI model into your application adds functionality, but also introduces new libraries, Application Programming Interfaces (APIs), and dependencies that can be exploited if not properly secured. Each component, whether it’s a machine learning model, third-party library, or data pipeline, represents another way in for attackers.
Furthermore, many development teams are deploying AI-driven application code at record speed to meet pressing business demands. While this accelerates delivery, it often pushes app security considerations to the sidelines. I often advise clients that there’s a lot of insecure examples of code on the internet that are explicitly vulnerable, and the model your AI leverages could very well have been trained on that.
This mix of new entrance points, rapid development, and reliance on AI-generated code makes applications harder to manage and more susceptible to cyberattacks.
Common Vulnerabilities in AI App Code
The very tools designed to accelerate AI app code development can, if left unchecked, open dangerous gaps in systems. A few weaknesses to monitor for include:
Hardcoded secrets in AI pipelines
Developers may inadvertently embed sensitive information like API keys, database credentials, or private tokens directly into the AI application code or training data for the sake of convenience or speed. If this code or data is exposed, such as through a public repository or data breach, it can provide hackers with direct access to systems.
Poorly secured APIs for AI model access
APIs are the bridges that connect applications to AI models. Weak API security, like poor authentication or access controls, allows attackers to bypass security measures, interact with the model directly, or even exfiltrate confidential data.
Supply chain risks from open-source ML libraries
AI applications rely heavily on open-source libraries (e.g., TensorFlow, PyTorch) to improve efficiency and lower costs. But these libraries, often maintained by a large community, can contain vulnerabilities. A single compromised open-source package can introduce security issues into the entire application, which can be difficult to detect as they’re often buried deep within the library’s code.
Prompt injection attacks
AI applications that rely on large language models can be tricked into bypassing their own safety controls through prompt injection. In this type of attack, malicious instructions are embedded in user input that cause the model to perform actions it shouldn’t, disclose sensitive data, or expose hidden system prompts. Without the proper guardrails, these injections can compromise confidentiality and integrity in ways that traditional application security practices don’t account for.
>> Related Read – Why Compliance Is Starting to Require Continuous Penetration Testing
From Models to Exploits: How Attackers Target AI
A Darktrace survey found 90% of cybersecurity professionals believe AI-powered threats will have a significant impact over the next few years. This is because as AI adoption grows, cybercriminals are adapting their methods to exploit both the models and the applications that depend on them—widening the attack surface.
Prompt Injection and Data Poisoning
Attackers can control a model by crafting malicious input, or “prompts.” This is essentially a new form of SQL injection, where the user’s input is manipulated to influence the system’s output. A hacker can use this to bypass safety filters, make the model perform unintended tasks, or leak confidential information.
Data poisoning takes this a step further by corrupting the model’s training data. An attacker can add fabricated data during the training phase, teaching the model to behave in a way that creates a backdoor. This causes the model to misclassify data, provide incorrect information, or even execute harmful code.
Model Supply Chain Attacks
AI models, much like software, have a supply chain. Attackers can compromise this chain by injecting destructive code or logic into various components.
- Malicious Packages: Threat actors can create and upload seemingly innocent but harmful open-source packages to public repositories. When a developer uses this package, the security risk is launched into their application.
- Compromised Pre-trained Models: Many teams use pre-training models to save time. However, if the model is sourced from an untrusted repository, it could contain a backdoor or be interfered with to leak data.
- Tampered Dependencies: A cybercriminal can compromise a legitimate library that an AI model depends on, creating a domino effect that introduces security holes throughout the application.
Application-Layer Exploits
While new AI-specific attacks are emerging, attackers haven’t forgotten about traditional application-layer exploits. They’re simply adapting them for AI-driven apps.
SQL Injection-Style Application Vulnerabilities
Just like with traditional web apps, if an AI application uses user input to build a database query without proper validation, a hacker could still use SQL injection to access, modify, or delete sensitive data. The AI component doesn’t make the application immune, it adds another layer of complication to hide these classic flaws.
Excessive Permissions and Weak Access Controls
When integrating AI models, developers often grant them more permissions than necessary. For example, a model might be given access to an entire database when it only needs to read a specific table. If an AI model is compromised, this excessive privilege, combined with weak role-based access controls, can allow attackers to gain access to everything it has permission to. This leads to a much larger breach than the initial vulnerability might suggest.
>> Related Read – DEF CON 33: Hacking Highlights and What’s Next for Cybersecurity
Why Traditional App Security Isn’t Enough
The old ways of securing applications weren’t developed for the realities of AI. Here’s why.
Legacy Tools Don’t Understand AI Workloads
Traditional application security tools, such as static or dynamic testing, were designed to uncover issues in conventional codebases. While these are still valuable, they fall short when it comes to AI-driven environments. These tools rarely account for application vulnerabilities unique to AI, including poisoned training data, insecure model endpoints, or malicious pre-trained packages. As a result, development teams may receive a clean bill of health from legacy scanners while critical risks remain hidden in their AI workloads.
AI Models and Data Pipelines Add Blind Spots
Security teams often have limited visibility into model training environments, inference pipelines, and the numerous integrations typing applications to AI services. Without clear insight into how data flows through these systems or how models interact with external components, organizations risk leaving major gaps unaddressed. These blind spots give attackers room to maneuver, exploiting weaknesses that traditional tools simply can’t see.
The Business Case for Securing AI Applications
The risks of not securing AI applications can impact a company’s regulatory standing, reputation, and financial health.
Insecure AI apps can lead to data breaches, exposing private customer information and resulting in significant fines under regulations and laws like GDPR or CCPA. An infiltrated system can also be used to disrupt business operations, leading to costly downtime and a loss of intellectual property.
Beyond financial penalties, customers lose trust in a company that can’t protect their data, leading to a loss of business and long-term brand erosion.
To avoid these outcomes, businesses must prioritize AI application security from the beginning. This means adopting a “shift left” approach, weaving security into the early phases of a development cycle rather than treating it like an afterthought.
As Shaun Bertrand, Pellera’s VP of Cybersecurity explains, “I see application security resiliency and effectiveness as one of the more favorable areas of productivity and enhancements overall.” This proactive approach not only mitigates risk but leads to more robust and reliable applications, ultimately becoming a competitive advantage.
Conclusion: Close the AI Code Gap Before Attackers Exploit It
The security gaps in AI code are a real and growing risk that attackers are actively exploiting. A reactive, “find-and-fix” approach is no longer sustainable. The path forward is a proactive, “shift-left” strategy, where security is built directly into the development lifecycle from the start.
Ready to get proactive about your AI application security? Learn how Pellera can help you close the security holes in AI application code by providing a comprehensive platform for securing your AI-enabled applications from development to deployment.
>> Related read – Transformation Through Leadership: Key Insights from the “Edge of IT” Season 2 Premiere