Jan 6, 2025

Understanding GenAI usage: Why businesses need visibility

Oliver Gould
Senior Software Engineer

In the past two years, we've witnessed hundreds of millions of people adopting Generative AI (GenAI) technologies. From ChatGPT to Gemini, these tools have become a part of many professionals' daily workflows, myself included. As an engineer, I've experienced firsthand the meaningful productivity boost that comes from leveraging Large Language Models (LLMs) for tasks like understanding new libraries, deciphering complex TypeScript declarations, and summarizing best practices across various technologies.

While I dearly miss Stack Overflow, I can’t help but love the fact that I now have an eager and friendly assistant who always seems to point me in the right direction, regardless of imperfections or ambiguities in my questions or requests. LLMs seem to magically adhere to the cardinal rule of first attempting to answer the question that was asked, and only then layering in additional comments or suggestions.

The challenge with GenAI adoption

Here at SurePath AI, like other tech-savvy companies, we’ve deeply integrated AI into various workflows, and we are genuinely interested in learning where GenAI is right, wrong, and confidently incorrect. However, many companies are just beginning to think about the potential reach and impact of GenAI within their own networks. IT professionals and managers know that employees are using these tools, but don’t know which ones are actually in use. In short, they lack meaningful visibility into:

  • Who uses GenAI, how frequently, and for which tasks
  • What information they share with which LLMs
  • The potential risks these interactions pose

Common approaches that fail

In response to concerns about this particular brand of data leakage, a common first step for many companies is to write a memo kindly asking employees to be mindful of what they send to LLMs, describing in well-intentioned bullet points which information is considered sensitive or proprietary, or should otherwise not be shared.

On the other end of the spectrum, some companies fear the black-box nature of GenAI and decide to start with a zero-tolerance policy of strictly prohibiting all use.

Both approaches fail due to a lack of real visibility into actual usage patterns. Without concrete telemetry, there is no way to know if employees are following guidelines, or how much productivity is impacted. Employees often disregard or misunderstand guidelines, sometimes deciding the benefits outweigh perceived risks. The seemingly private nature of GenAI interactions exacerbates this issue, as users may unknowingly share sensitive information. It’s just too easy to send something personal or confidential with the feeling that there’s little or no risk involved.

Why visibility matters 

Rather than adopting an approach that is either too lax or too strict, companies can add visibility into GenAI usage to make informed and enforceable decisions. Visibility is crucial because it enables you to:

  • Assess risks by understanding which GenAI services are in use, both by individual employees and by departments across your company
  • Generate risk profiles that combine employee and group definitions with task content, action, and intent
  • Identify and trace potential leaks of sensitive data and intellectual property
  • Gain productivity insights by capturing beneficial use cases and opportunities for broader adoption
  • Refine policies based on actual usage patterns

We do this at SurePath AI by:

  • Seamlessly integrating with your enterprise network
  • Providing detailed insights into GenAI usage across your organization
  • Identifying specific services, users, and departments utilizing GenAI
  • Analyzing individual task intent, content, and risk
  • Generating well-qualified risk profiles based on your actual usage and our catalog of hundreds of publicly available LLMs and GenAI services
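
Much of this comes down to mapping observed network traffic onto a catalog of known services and their risk attributes. As a loose illustration of that idea (not SurePath AI's actual implementation; every name and field here is hypothetical), a minimal version of the classification step might look like this:

```typescript
// Hypothetical sketch: classifying proxy log entries against a catalog of
// known GenAI services. Names and fields are illustrative only.

interface GenAiService {
  name: string;
  domains: string[];        // hostnames the service is reachable at
  trainsOnUserData: boolean;
  moderated: boolean;       // built-in guardrails for dangerous prompts
}

interface ProxyLogEntry {
  user: string;
  department: string;
  host: string;             // destination hostname
  timestamp: Date;
}

// A tiny stand-in for a catalog of hundreds of public services.
const catalog: GenAiService[] = [
  { name: "ChatGPT", domains: ["chat.openai.com", "chatgpt.com"], trainsOnUserData: true, moderated: true },
  { name: "Gemini",  domains: ["gemini.google.com"],              trainsOnUserData: true, moderated: true },
];

// Match a log entry to a known GenAI service, if any.
function classify(entry: ProxyLogEntry): GenAiService | undefined {
  return catalog.find((svc) => svc.domains.includes(entry.host));
}

// Aggregate usage per service and department: the raw material for visibility.
function summarize(entries: ProxyLogEntry[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const entry of entries) {
    const svc = classify(entry);
    if (!svc) continue;
    const key = `${svc.name} / ${entry.department}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

Real intent and content analysis is far richer than hostname matching, of course, but even this level of aggregation starts answering who is using what, and how often.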

To visibility and beyond

A quick look at our public service catalog reveals that 6 out of 10 services train on user data, including ChatGPT and Gemini, and 4 out of 10 allow unmoderated use, meaning they have no built-in guardrails to prevent the model from responding to prompts that are dangerous or malicious.

There’s no doubt that developing a GenAI acceptable-use policy is a great first step. As you invest in and expand your visibility, you can further develop your security posture by making informed decisions about:

  • Which GenAI services to allow or block, for which employees and departments
  • What types of sensitive data, tasks and intents to label with risk levels
  • How to respond when you detect potentially risky interactions
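
To make those decisions concrete, it can help to think of policy as data rather than as a memo. Below is a minimal, hypothetical sketch (the rule shape, labels, and defaults are invented for illustration, not SurePath AI's policy format) of how allow/block rules, risk labels, and responses might fit together:

```typescript
// Hypothetical sketch of an acceptable-use policy expressed as data, plus a
// minimal evaluator. Illustrative only.

type RiskLevel = "low" | "medium" | "high";
type Action = "allow" | "warn" | "block";

interface PolicyRule {
  service: string;                        // e.g. "ChatGPT"
  departments: string[];                  // which groups the rule covers ("*" = all)
  dataLabels: Record<string, RiskLevel>;  // e.g. { "source-code": "medium" }
  onRisk: Record<RiskLevel, Action>;      // response per detected risk level
}

const policy: PolicyRule[] = [
  {
    service: "ChatGPT",
    departments: ["*"],
    dataLabels: { "pii": "high", "source-code": "medium", "public-docs": "low" },
    onRisk: { low: "allow", medium: "warn", high: "block" },
  },
];

// Decide how to respond to an interaction, given its service, the user's
// department, and the kind of data detected in the prompt.
function evaluate(service: string, department: string, dataLabel: string): Action {
  for (const rule of policy) {
    if (rule.service !== service) continue;
    if (!rule.departments.includes("*") && !rule.departments.includes(department)) continue;
    const risk = rule.dataLabels[dataLabel] ?? "low";
    return rule.onRisk[risk];
  }
  return "block"; // default-deny for services not covered by any rule
}

// e.g. evaluate("ChatGPT", "engineering", "source-code") => "warn"
```

Expressing policy this way is exactly what visibility enables: each rule can be validated against actual usage patterns rather than hoped-for behavior.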

This is where SurePath AI comes in. In just about 20 minutes, we can connect your enterprise network to our platform, giving you immediate insight into your company's GenAI usage. You'll see which services are being used, by whom, and even the content of these interactions. We enable you to gather valuable information without disrupting your employees' workflows or access. It's a non-intrusive way to understand your current GenAI landscape and potential risks.

Overall opportunity vs. risk

Regardless of where you may land in the “If you can’t measure it, you can’t manage it” debate, there’s no question that blocking access to GenAI comes with a high opportunity cost, and asking your workforce to self-manage their GenAI interactions is both ineffective and time-consuming.

Start with visibility, and policy will fall into place.