
How to Create an AI Acceptable Use Policy

March 14, 2026 · 8 min read · TeamPrompt Team

Every organization that allows employees to use AI tools needs an Acceptable Use Policy (AUP). Without one, you are relying on individual judgment to determine what data can be shared with ChatGPT, which tools are safe to use, and what constitutes responsible AI usage. Individual judgment varies wildly — and so do the risks.

This guide walks you through creating an AI AUP that is clear enough for employees to follow and specific enough for your security team to enforce.

Why You Need a Dedicated AI Policy

Your existing IT acceptable use policy probably covers software installation, data handling, and internet usage. But AI tools create scenarios that generic policies do not address:

  • Is pasting customer data into an AI tool a "data transfer to a third party"?
  • Does using AI to draft client communications require disclosure?
  • Who owns the output of an AI-assisted work product?
  • Can employees use personal AI accounts for work tasks?
  • What happens to data shared with free-tier AI tools that train on inputs?

A dedicated AI policy answers these questions explicitly rather than leaving them to interpretation.

Section 1: Scope and Applicability

Define who the policy applies to and what it covers. Be explicit:

Applies to: All employees, contractors, interns, and temporary staff who use AI tools for work-related tasks, regardless of whether those tools are accessed on company devices or personal devices.

Covers: All generative AI tools including but not limited to ChatGPT, Claude, Gemini, Copilot, Perplexity, Midjourney, and any AI tool accessed through a browser, API, or application.

Does not cover: AI features embedded in approved enterprise software (e.g., Grammarly, Salesforce Einstein) that have been separately vetted and approved by IT.

Section 2: Approved AI Tools

List every AI tool that has been vetted and approved. For each tool, specify:

  • The tool name and approved tier (free, pro, enterprise)
  • What data classifications it may handle (public, internal, confidential)
  • Whether a Business Associate Agreement or Data Processing Agreement is in place
  • Any tool-specific restrictions (e.g., "code generation only," "no customer data")

Update this list quarterly as new tools are evaluated and existing tools change their terms.
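
One way to keep this section maintainable is to store the approved-tool list as a small machine-readable registry alongside the written policy, so the quarterly review becomes a diff rather than a rewrite. The sketch below assumes a simple Python structure; the field names and example entries are illustrative, not a required schema or TeamPrompt's format.

```python
from dataclasses import dataclass, field

@dataclass
class ApprovedTool:
    """One entry in the organization's approved AI tool list (illustrative schema)."""
    name: str                        # tool name, e.g. "ChatGPT"
    approved_tier: str               # "free", "pro", or "enterprise"
    data_classifications: list[str]  # classifications it may handle
    dpa_in_place: bool               # Data Processing / Business Associate Agreement signed
    restrictions: list[str] = field(default_factory=list)  # tool-specific limits

# Example entries only -- your vetted list and terms will differ.
APPROVED_TOOLS = [
    ApprovedTool("Claude", "enterprise", ["public", "internal", "confidential"], True),
    ApprovedTool("ChatGPT", "enterprise", ["public", "internal"], True),
    ApprovedTool("Copilot", "enterprise", ["public"], True,
                 restrictions=["code generation only", "no customer data"]),
]
```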

Section 3: Prohibited Data

This is the most important section. Be specific about what employees must never share with AI tools:

Always prohibited:

  • Credentials — passwords, API keys, tokens, database connection strings
  • Customer PII — Social Security numbers, financial account numbers, health records
  • Trade secrets — proprietary algorithms, unpublished product details, internal pricing models
  • Legal privileged information — attorney-client communications, litigation strategy
  • Regulated data — PHI (HIPAA), cardholder data (PCI-DSS), ITAR-controlled information

Permitted with caution:

  • Internal business data — meeting notes, project plans, process documentation (approved tools only)
  • De-identified data — customer data with all identifiers removed
  • Public information — published content, public documentation, open-source code

Use concrete examples. "Do not paste customer Social Security numbers into ChatGPT" is clearer than "do not share PII with unauthorized processors."
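
To make the "always prohibited" list enforceable rather than aspirational, the highest-risk items can also be expressed as detection patterns that DLP tooling matches against outgoing prompts. The sketch below is a rough starting point covering a few common formats (US Social Security numbers, AWS-style access key IDs, 16-digit card numbers, PEM private keys); real rules need broader coverage, validation, and tuning to limit false positives.

```python
import re

# Illustrative detection patterns for a few "always prohibited" data types.
PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security number
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID format
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),    # 16-digit card-like number
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_prohibited(text: str) -> list[str]:
    """Return the names of any prohibited patterns found in the text."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items() if pattern.search(text)]
```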

Section 4: Acceptable Use Cases

Describe what employees are encouraged to use AI for. Positive guidance is as important as restrictions:

  • Drafting and editing internal communications
  • Brainstorming and ideation
  • Summarizing public or internal documents
  • Code generation and debugging (non-proprietary code)
  • Research and analysis using public data
  • Creating templates and frameworks

For each use case, note any conditions. For example: "Summarizing customer support tickets is acceptable only when customer names and account numbers are removed first."
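
As an illustration of that kind of condition, the sketch below shows a minimal redaction step that could run before a ticket is summarized. The account-number format (ACCT- plus eight digits) and the name list are hypothetical placeholders; real de-identification usually needs a dedicated tool and a review step.

```python
import re

def redact_ticket(text: str, customer_names: list[str]) -> str:
    """Strip customer names and account numbers before a ticket is sent to an AI tool.

    The ACCT-XXXXXXXX account-number format is a hypothetical example.
    """
    redacted = re.sub(r"\bACCT-\d{8}\b", "[ACCOUNT]", text)
    for name in customer_names:
        redacted = re.sub(re.escape(name), "[CUSTOMER]", redacted, flags=re.IGNORECASE)
    return redacted

print(redact_ticket("Jane Doe (ACCT-12345678) reports a billing error.", ["Jane Doe"]))
# -> "[CUSTOMER] ([ACCOUNT]) reports a billing error."
```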

Section 5: Technical Controls

Policy alone does not prevent data leaks — technical controls do. Document the controls your organization has deployed:

DLP scanning. All AI tool interactions are scanned in real time by the TeamPrompt browser extension. Messages containing prohibited data patterns are blocked before they reach the AI provider.

Audit logging. All AI tool interactions and DLP events are logged for compliance review. Employees should be aware that their AI usage is monitored and auditable.

Access controls. AI tool access is managed through the organization's approved tool list. Unapproved tools may be blocked at the network level.

Referencing technical controls in the policy sets expectations and reinforces that the organization takes enforcement seriously.
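
Whatever product provides these controls, the underlying flow is straightforward: scan the outgoing message, block it when a prohibited pattern matches, and write an audit record either way. The sketch below shows that shape in plain Python, reusing the find_prohibited helper from the earlier pattern sketch; it illustrates the policy logic and is not TeamPrompt's implementation.

```python
import json
from datetime import datetime, timezone

def check_outgoing_message(user: str, tool: str, message: str) -> bool:
    """Scan a message before it reaches an AI provider; return True if it may be sent."""
    hits = find_prohibited(message)  # defined in the earlier pattern sketch
    allowed = not hits

    # Append-only audit record for compliance review (log destination is illustrative).
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "allowed": allowed,
        "matched_patterns": hits,    # log which rules fired, never the message content
    }
    with open("ai_usage_audit.log", "a") as log:
        log.write(json.dumps(record) + "\n")

    return allowed
```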

Section 6: AI Output Review

AI tools can produce inaccurate, biased, or fabricated outputs. Your policy should require:

  • All AI-generated content must be reviewed by a human before use in client-facing or external communications
  • AI outputs must not be presented as original human work without disclosure (where required by company policy or regulation)
  • Employees are responsible for verifying factual claims in AI-generated content
  • AI-generated code must pass the same review and testing processes as human-written code

Section 7: Reporting and Consequences

Define the process for reporting violations and the consequences:

Reporting. Employees who become aware of an AI policy violation — their own or someone else's — should report it to their manager or the security team within 24 hours. Self-reporting mitigates consequences.

Consequences. Violations are addressed on a severity scale: first-time inadvertent violations result in additional training, repeated violations result in restricted AI access, and intentional violations of high-severity rules are handled through the standard disciplinary process.

Rolling Out the Policy

A policy only works if people know about it and understand it. Roll out in three phases:

  • Announce and distribute — Email the policy to all employees, post it on the intranet, and add it to the employee handbook
  • Train — Conduct a 30-minute training session walking through the policy with real examples and Q&A
  • Acknowledge — Require employees to sign an acknowledgment that they have read and understood the policy

Review and update the policy every six months as AI tools, regulations, and organizational needs evolve.

TeamPrompt makes policy enforcement automatic with real-time DLP scanning, audit logging, and usage analytics. Create your workspace and pair your AI acceptable use policy with the technical controls to enforce it.

