
Security for AI: A Practical Guide to Enforcing Your AI Acceptable Use Policy




An AI acceptable use policy can help your organization mitigate the risk of employees accidentally exposing sensitive data to public AI tools. Benchmark your organization’s policy against our best practices and discover how prompt-level visibility from Tenable AI Exposure eases policy enforcement.

Key takeaways:

  1. An AI acceptable use policy governs the appropriate use of generative AI tools for employees. It defines the specific tools employees can and can't use and provides guidelines for the secure, responsible, and ethical use of AI.
     
  2. An AI acceptable use policy should include rules on data classification, approved tools, usage guidelines, and accountability.
     
  3. Tenable AI Exposure, part of the Tenable One Exposure Management Platform, provides security leaders with tools to enforce their AI acceptable use policy, including discovery of all generative AI usage throughout the organization, deep visibility into what data is being exposed, and enforcement capabilities.

OpenAI’s release of ChatGPT in November 2022 was a seismic event. Built on the GPT-3.5 large language model (LLM), ChatGPT quickly became the fastest-growing consumer application ever, according to UBS, reaching 100 million monthly users in 60 days. In a similar span of time, the risks of this groundbreaking technology also became apparent.

In early 2023, two employees of an electronics company shared confidential source code with ChatGPT, effectively making their source code part of the LLM’s training data without realizing it. The incident, which was widely reported in the media, prompted many organizations to ban public AI tools. This was not an isolated incident. A global survey conducted by the University of Melbourne in early 2025 showed that 48% of employees had uploaded sensitive information to public generative AI tools and 44% had knowingly violated corporate AI policies.

All of this highlights the urgency for organizations to develop and implement a clear and robust AI acceptable use policy.

What is an AI acceptable use policy (AUP)?

An AI acceptable use policy provides guidelines on the correct, ethical, and legal use of AI technologies within your organization. An AI governance council, led by a senior member of the IT team and including stakeholders from across the organization, should manage the policy.

An AI acceptable use policy should include: 

  • A list of approved tools (including tools approved for companywide use and those approved for specific business units)
  • Types of data that can and can’t be shared
  • Rules that govern data handling
  • Guidelines for using AI to generate content
  • Consequences for policy violations

Watch the video below for guidance on creating an AI acceptable use policy.

An AI acceptable use policy helps you manage the risk of data exposure and intellectual property loss by clearly defining what employees can and can’t do. It can also help you maintain compliance with the data handling provisions of regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

Instead of functioning as an onerous rulebook, a well-crafted AI acceptable use policy should empower employees to take advantage of the benefits of AI while keeping risks in check — whether they’re using work devices or their own personal ones. 

View the on-demand webinar, Securing the Future of AI in Your Enterprise, and download our one-page AI Acceptable Use Policy guide.

The core components of an AI acceptable use policy

You know you need a guide to help govern AI usage. But what should it include? Based on many conversations with customers who have wrestled with this very issue, we encourage AI governance councils to include these core components in their AI acceptable use policies (a machine-readable sketch follows the list):

  • Data classification: Define which data employees may and may not use with AI tools. Your policy should clearly state what employees can’t share with AI, such as customer personally identifiable information (PII), source code, financial records, and M&A strategy documents. Spell this out in detail because your company’s needs are probably unique. But don’t limit the policy to what employees can’t do; tell them the kinds of tasks they can do.
  • Approved tools: Provide employees with a complete list of sanctioned AI applications (e.g., tools like Grammarly, ChatGPT Enterprise, and Zoom AI are often approved, while DeepSeek, which is banned by several US states and government agencies, is often not). For each approved tool, tell employees its use cases and include internal or vendor-provided training videos; maybe one is good for coding and another for image creation. Make it clear in your policy that unsanctioned tools are not permitted, and provide a mechanism for employees to request approval of new AI tools.
  • Usage guidelines: You should clearly define appropriate business use cases (drafting marketing copy might be a good example) and inappropriate ones (summarizing sensitive meeting notes is probably not a good idea).
  • Accountability: What happens if there’s a policy violation? Outline the consequences and be specific. There are a number of ways to handle a violation, but we suggest a stepped approach:
    • On a first violation, give the user a gentle warning and refer them to your AUP. Ask for justification for the tool and share the process for submitting a review request to the AI governance council. If the justification is clear, let the user know the tool will go through an approval process (including budget and security assessments); once that’s complete, the council will update the policy and notify users.
    • On a second violation, block the user (and other employees) from accessing the tool.
    • On a third violation, monitor repeated attempts and notify the employee’s manager. Offer the employee training and awareness resources on acceptable use of AI tools.
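
To make these components easier to audit and enforce, some teams encode the policy itself as configuration that tooling can read. Here is a minimal sketch in Python; the data classes, tool names, and escalation messages are illustrative assumptions for this blog, not a prescribed implementation or Tenable product behavior:

```python
"""A minimal sketch of an AI acceptable use policy expressed as
machine-readable configuration. All data classes, tool names, and
escalation steps are illustrative assumptions -- adapt to your policy."""

# Data classes the policy prohibits in prompts to public AI tools.
PROHIBITED_DATA = {"customer_pii", "source_code", "financial_records", "ma_strategy"}

# Hypothetical approved tools and the use cases each is sanctioned for.
APPROVED_TOOLS = {
    "ChatGPT Enterprise": {"drafting", "research"},
    "Grammarly": {"proofreading"},
    "Zoom AI": {"meeting_summaries"},
}

# Stepped consequences for repeat violations, mirroring the list above.
ESCALATION_STEPS = [
    "Warn the user, link the AUP, and share the tool-review request process.",
    "Block the tool for the user and the wider organization.",
    "Monitor repeat attempts, notify the manager, and assign training.",
]

def handle_violation(user: str, tool: str, prior_violations: int) -> str:
    """Return the consequence for this user's latest violation."""
    step = min(prior_violations, len(ESCALATION_STEPS) - 1)
    return f"{user} used unsanctioned tool '{tool}': {ESCALATION_STEPS[step]}"

if __name__ == "__main__":
    print(handle_violation("jdoe", "DeepSeek", prior_violations=0))  # warning
    print(handle_violation("jdoe", "DeepSeek", prior_violations=1))  # block
```

Keeping the policy in a reviewable, versioned format like this makes it straightforward for the AI governance council to update the approved-tools list and for monitoring tools to consume the same source of truth.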

Benchmark your AI acceptable use policy against best practices

Ensure your organization’s AI acceptable use policy covers the following elements:

  • The use of AI tools: Ensure you have a readily available list of approved and prohibited tools. Provide a mechanism for employees to submit requests for tools they would like the organization to consider for approval.
  • Ethical principles: Outline your organization’s view on accountability, transparency, fairness, safety, privacy, and security.
  • Requirements for AI use: Lay out the three categories of AI use: permitted (use is unrestricted), prohibited (use is not allowed), and controlled (use requires authorization). One way to gate these categories is sketched after this table.
  • Employee responsibilities: Make employee responsibilities for using AI clear, including checking for accuracy and bias and labeling AI-generated code appropriately. Above all, make it clear to employees that the organization will not tolerate unlawful or unethical uses of AI (e.g., disinformation, manipulation, discrimination, defamation, invasion of privacy).
  • Data privacy and security: Create guidelines that respect privacy rights and protect the security of data regardless of the AI use case within the organization.
  • Training and awareness: Include information that underscores your commitment to training on the risks and explains why you don’t permit unsanctioned AI tools, citing concerns about data exposure, privacy, third-party tracking, and security (e.g., vulnerable AI tools that are easily compromised, and threat actors that can use an AI tool to gain a foothold in your organization). Make sure all employees review and understand available training resources.

Source: Tenable, October 2025
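
The permitted/prohibited/controlled split lends itself to a simple decision gate. The sketch below uses hypothetical category assignments to show how a tool request might be evaluated; note that unknown tools default to prohibited until the governance council reviews them:

```python
"""A minimal sketch of the permitted / prohibited / controlled model for
AI tool requests. Category assignments are illustrative assumptions."""

from enum import Enum

class Usage(Enum):
    PERMITTED = "use is unrestricted"
    PROHIBITED = "use is not allowed"
    CONTROLLED = "use requires authorization"

# Hypothetical assignments; your AI governance council owns this map.
TOOL_CATEGORIES = {
    "Grammarly": Usage.PERMITTED,
    "DeepSeek": Usage.PROHIBITED,
    "ChatGPT Enterprise": Usage.CONTROLLED,
}

def check_tool_request(tool: str, authorized: bool = False) -> str:
    """Decide whether a tool request is allowed under the three categories.
    Unknown tools are treated as prohibited pending council review."""
    category = TOOL_CATEGORIES.get(tool, Usage.PROHIBITED)
    if category is Usage.PERMITTED:
        return f"{tool}: allowed ({category.value})"
    if category is Usage.CONTROLLED and authorized:
        return f"{tool}: allowed with authorization"
    return f"{tool}: denied ({category.value}); submit a review request"

if __name__ == "__main__":
    print(check_tool_request("Grammarly"))
    print(check_tool_request("ChatGPT Enterprise", authorized=True))
    print(check_tool_request("SomeNewAI"))  # unknown -> prohibited by default
```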

How to enforce your AI acceptable use policy

Now that you understand the risk and you have an acceptable use policy in place that you’ve communicated to employees, what’s next? You need to enforce it. This can be a challenge. But let us guide you through it because it’s important to get this step right.

Here are two keys to ensuring your policy works:

  • Continuous training: Educate employees on the why behind the policy, focusing on real-world risks. There are many examples of employees sharing source code, meeting recordings, and other confidential information with public AI tools without realizing the consequences of their actions, but because we don’t want to shame anyone, we’re not going to list them here.
  • Proactive monitoring and visibility: This is the technical core: you can’t enforce a policy you can’t see. Move beyond simply blocking an app to gaining granular visibility into what employees are doing, the AI platforms and AI agents they’re using, and the third-party tools and AI workflows (both vetted and unvetted) they access. That lets you assess risk at the input (the prompt), the operations (what the AI tool does as a result of the prompt), and the output (the result the tool produces). You need all of this to know whether someone is violating the policy: an employee might be asking for help with Python, or they might be copying and pasting your company’s proprietary algorithm. Only this level of visibility lets you discern the difference (a simple illustration follows this list).
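
To make the Python-help-versus-proprietary-algorithm distinction concrete, here is a deliberately naive first-pass screen that flags obvious sensitive content in prompts. The patterns are assumptions for demonstration only; purpose-built prompt-level analysis, such as that in Tenable AI Exposure, goes far beyond what regex matching can catch:

```python
"""A minimal sketch of prompt-level screening for obvious sensitive
content. Patterns are illustrative assumptions, not production rules."""

import re

# Hypothetical indicators of data the AUP prohibits in prompts.
SENSITIVE_PATTERNS = {
    "possible US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible secret key": re.compile(r"(?:sk|key)-[A-Za-z0-9]{16,}"),
    "possible source code": re.compile(r"\b(?:def|class|import)\s+\w+"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data indicators found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    ok = screen_prompt("How do I sort a list in Python?")
    risky = screen_prompt("def pricing_model(x): import secret_algo")
    print("help question:", ok or "no indicators")   # no indicators
    print("pasted code:", risky or "no indicators")  # flags source code
```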

An AI acceptable use policy without enforcement is just a document. To truly secure your organization, you need to combine a clearly articulated policy with a proactive exposure management program that provides complete visibility into how your team is using these powerful new tools.

How Tenable AI Exposure can help you monitor compliance with your organization’s AI acceptable use policy

You have to secure AI to manage your organization’s risk. And you need to understand how your employees are using AI. So, how do you gain the visibility, context, and control you need to manage it all? And how can you govern AI usage, enforce policies, and prevent exposures?

Tenable AI Exposure directly addresses these challenges. It provides the essential capabilities needed to protect sensitive information and enforce acceptable use policies:

  • Discovery: Comprehensive identification of all generative AI usage throughout the organization, including shadow AI instances that might otherwise go undetected. This ensures a complete understanding of where AI is being leveraged, both intentionally and unintentionally.
  • Deep visibility: Crucial, prompt-level analysis to reveal precisely what data is being exposed when employees interact with generative AI tools. This granular insight enables organizations to understand the nature of the information being shared and identify potential risks.
  • Enforcement: The data needed to effectively enforce your organization's acceptable use policy and safeguard sensitive information. By providing actionable intelligence, Tenable AI Exposure helps security teams implement and uphold governance rules, preventing data leakage and misuse.

With these capabilities, you gain a proactive approach to managing the complexities of generative AI within the enterprise. As a result, organizations can embrace innovation while maintaining robust security for AI and compliance standards.

If you’re a Tenable One customer and you’re interested in getting an exclusive private preview of Tenable AI Exposure, fill out the short form on the Tenable AI Exposure page. We’ll get right back to you. This offer is limited to Tenable One customers.
