Feb 11, 2025

Beyond the OWASP Top 10

Author
Chris Ward, Field CTO

What is OWASP?

The Open Worldwide Application Security Project (OWASP) is a nonprofit organization dedicated to improving the security of software. With a mission to educate and empower organizations to build secure applications, OWASP provides resources and guidelines for developers, security professionals, and organizations. Among its most recognized contributions is the OWASP Top 10, a curated list that highlights the most critical security risks to web applications.

Recently, OWASP has extended its focus to the emerging realm of Large Language Models (LLMs), outlining the OWASP Top 10 for Large Language Model Applications. This Top 10 aims to identify and mitigate key threats associated with the deployment and use of LLMs, offering guidance for organizations navigating the complex landscape of generative AI. However, while it lays a foundational framework, it leaves out some important considerations, particularly regarding the risks associated with workforce usage of these powerful technologies.

Beyond the OWASP Top 10 for Large Language Model (LLM) applications

In the rapidly evolving landscape of generative AI (GenAI), the OWASP Top 10 for Large Language Model (LLM) Applications has become a crucial resource for understanding potential security risks. However, as adoption grows within organizations, it's becoming clear that this list, while valuable, may not be telling the whole story when it comes to organizational risk management – especially regarding workforce use of generative AI applications.

Let’s make this clear up front: this blog post is NOT meant to deemphasize or denigrate the OWASP Top 10 for Large Language Model Applications. It covers critical areas like prompt injection, sensitive information disclosure, and misinformation. These are all valid concerns that any organization implementing LLM applications should consider.

However, as workforce use of GenAI increases across organizations, one might find that the risks described in this OWASP Top 10 apply primarily to external-facing applications or the development process itself. Companies are left either to figure out how to address the remaining risks of workforce GenAI usage on their own or, perhaps worse, to remain unaware of the risks of ungoverned GenAI usage within their workforce.

Consider a few questions many organizations are already grappling with: How much of your workforce is using ChatGPT or other public AI tools for work-related tasks? Are they inadvertently sharing sensitive company data with these models? How can you monitor or track this usage to ensure compliance with data privacy regulations? As the legislative landscape evolves, these questions become increasingly critical. Unfortunately, this is where the OWASP list falls short: it does not explicitly address the unique challenges and risks associated with workforce GenAI usage.

Taking a closer look

Let’s take an in-depth look at the entries in the OWASP Top 10 for Large Language Model (LLM) Applications that are most applicable to workforce use of GenAI, and where each one falls short.

  • LLM01:2025 Prompt Injection
    • OWASP focus: Protecting against malicious users manipulating AI outputs.
    • What it misses: Your employees most likely aren't trying to trick the GenAI service; they're just trying to get their work done. The more likely risk here is accidental data exposure when using public AI tools.
  • LLM02:2025 Sensitive Information Disclosure
    • OWASP focus: Preventing AI from revealing sensitive data in its responses.
    • What it misses: Overlooks the risk of employees inadvertently inputting sensitive information into public AI models. Again, this isn't necessarily malicious; employees are trying to get work done and may have copied and pasted something without checking it carefully (see the pre-submission check sketched after this list).
  • LLM03:2025 Supply Chain
    • OWASP focus: Ensuring the security of AI models and tools in development.
    • What it misses: While important for development, this doesn't address the "shadow AI" your workforce might be using without IT's knowledge. If you block a tool the workforce wants or needs, they may find other ways to use it outside of the organization’s ability to monitor or control.
  • LLM05:2025 Improper Output Handling
    • OWASP focus: Cautioning against blind trust in AI outputs.
    • What it misses: Lacks guidance on how employees should critically evaluate AI-generated content. How do employees, and organizations as a whole, validate the outputs of these services?
  • LLM08:2025 Vector and Embedding Weaknesses
    • OWASP focus: Securing AI's data relationship understanding.
    • What it misses: Doesn't cover implementing role-based access for AI features and context data across the workforce. How do organizations ensure that only the necessary members of a workforce have access to specific public services and data sources?
  • LLM10:2025 Unbounded Consumption
    • OWASP focus: Preventing external abuse of AI resources.
    • What it misses: Doesn't address managing and optimizing internal AI usage across the workforce. How do organizations plan for and manage costs around private models? Per-user pricing on some of the most well-known GenAI services runs $30 to $60 per user per month, so a 1,000-person deployment at $40 per user is roughly $480,000 a year. Consumption-based pricing can reduce this cost by up to 80%, but how do companies implement it when it might require significant personnel and 12 or more months to develop?
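
To make the sensitive-information gap above concrete, here is a minimal sketch (in Python) of the kind of pre-submission check an organization might place between employees and a public GenAI tool. The patterns, the `call_public_llm` stub, and the block-on-match policy are illustrative assumptions, not a production implementation; a real deployment would rely on a dedicated DLP engine with far broader coverage.

```python
import re

# Illustrative patterns only (assumption): real coverage would extend to
# names, source code, customer records, and organization-specific secrets.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def call_public_llm(prompt: str) -> str:
    # Hypothetical placeholder for the sanctioned vendor's SDK call.
    raise NotImplementedError("wire up the approved GenAI provider here")

def submit_to_genai(prompt: str) -> str:
    """Block (or, in practice, redact) prompts before they leave the org."""
    findings = scan_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked; possible sensitive data: {findings}")
    return call_public_llm(prompt)

# Example: an employee pastes customer details without noticing.
print(scan_prompt("Summarize: jane.doe@example.com, SSN 123-45-6789"))
# -> ['email', 'us_ssn']
```

The specific checks matter less than the architecture: putting a control point between the workforce and public GenAI services is what makes accidental disclosure observable and preventable at all.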

How it could be improved

While the OWASP Top 10 provides valuable insights for developers and security teams, it leaves a significant gap when it comes to managing everyday workforce use of GenAI. Here's what's missing:

  • Data Privacy in employee GenAI use: Guidelines or, preferably, tools for ensuring workers don't accidentally share sensitive information with public GenAI tools.
  • Role-based access: Implementing controls to ensure employees only use GenAI services and data appropriate for their job functions (a minimal sketch follows this list).
  • GenAI analytics and usage monitoring: Tracking how and when employees are using AI tools to optimize resources and identify potential risks.
  • GenAI literacy training: Educating employees on the capabilities, limitations, and potential risks of AI tools they're using.
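
As a sketch of how the role-based access and usage-monitoring items above could work together, consider a lightweight gateway in front of sanctioned GenAI services that enforces a per-role allowlist and records every decision for audit. The roles, service names, and hard-coded policy table below are hypothetical; in practice the policy would be driven by the organization's identity provider (for example, via SSO group claims).

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical policy table: which GenAI services each role may use.
ROLE_POLICY = {
    "engineering": {"code-assistant", "internal-chat"},
    "marketing": {"internal-chat", "image-gen"},
    "finance": {"internal-chat"},
}

@dataclass
class UsageEvent:
    user: str
    role: str
    service: str
    allowed: bool
    timestamp: str

def authorize(user: str, role: str, service: str,
              audit_log: list[UsageEvent]) -> bool:
    """Allow or deny a GenAI request by role, recording every decision."""
    allowed = service in ROLE_POLICY.get(role, set())
    audit_log.append(UsageEvent(
        user=user, role=role, service=service, allowed=allowed,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
    return allowed

# Example: finance is denied the code assistant, but the attempt is logged.
log: list[UsageEvent] = []
assert authorize("pat", "finance", "code-assistant", log) is False
assert authorize("dev1", "engineering", "code-assistant", log) is True
```

Logging denied attempts is deliberate: a sustained pattern of denials for a particular tool is an early signal of the "shadow AI" demand discussed under the Supply Chain entry above, and a prompt to either sanction the tool or offer an approved alternative.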

The OWASP Top 10 for LLM Applications is an excellent starting point, but as AI becomes increasingly integrated into a workforce’s daily life, organizations need to broaden their perspective on GenAI security and governance.

By addressing the gaps in workforce GenAI uses and implementing comprehensive governance strategies, we can harness the full potential of AI while keeping our organizations safe, compliant, and efficient. In the world of GenAI, some of the most significant risks – and opportunities – arise from within our own workforce.