HIPAA Compliance and AI: What Teams Must Know
Healthcare organizations are rapidly adopting AI tools for clinical documentation, patient communication, research, and administrative tasks. The productivity gains are significant — AI can draft discharge summaries, summarize patient histories, translate medical jargon for patient-friendly communications, and automate prior authorization workflows. But every one of these use cases involves Protected Health Information, and HIPAA does not grant exceptions for productivity tools.
The HIPAA Risk with AI Tools
Under HIPAA, any entity that creates, receives, maintains, or transmits PHI must safeguard it. When a nurse pastes a patient's medical history into ChatGPT to generate a discharge summary, that PHI is transmitted to OpenAI's servers. Unless OpenAI has signed a Business Associate Agreement with your organization, that transmission is a HIPAA violation, regardless of intent.
The Office for Civil Rights has made clear that lack of intent does not excuse HIPAA violations. The regulation requires covered entities to implement administrative, physical, and technical safeguards that prevent unauthorized PHI disclosure. When an employee uses an unsanctioned AI tool to process PHI, the result is, legally, an unauthorized disclosure.
The 18 HIPAA Identifiers
HIPAA defines 18 types of identifiers that constitute PHI when linked to health information. AI DLP systems must detect all of them:
- Patient names
- Geographic data smaller than a state
- Dates (except year) related to an individual
- Phone numbers
- Fax numbers
- Email addresses
- Social Security numbers
- Medical record numbers
- Health plan beneficiary numbers
- Account numbers
- Certificate/license numbers
- Vehicle identifiers and serial numbers
- Device identifiers and serial numbers
- Web URLs
- IP addresses
- Biometric identifiers
- Full-face photographs and comparable images
- Any other unique identifying number, characteristic, or code
If any combination of these identifiers appears in a message being sent to an AI tool alongside health-related context, it constitutes PHI and must be protected.
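As a rough illustration of what detection looks like in practice, the sketch below scans text for a handful of the 18 identifier types using simplified regular expressions. The patterns here are assumptions for demonstration only; a production DLP engine would use far more robust detectors (named-entity recognition, checksum validation, contextual scoring) and would cover all 18 categories.

```python
import re

# Simplified, illustrative patterns for a few of the 18 identifier types.
# These regexes are demonstration assumptions, not production detectors.
IDENTIFIER_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def detect_identifiers(text: str) -> dict:
    """Return every identifier type found in the text with its matches."""
    hits = {}
    for name, pattern in IDENTIFIER_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = matches
    return hits
```

A message that triggers any of these detectors alongside health-related context should be treated as containing PHI until proven otherwise.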
BAAs and AI Providers
A Business Associate Agreement is the legal mechanism that allows a covered entity to share PHI with a third party for treatment, payment, or healthcare operations. Without a BAA, sharing PHI with an AI provider is prohibited.
The landscape is evolving. Some AI providers now offer enterprise tiers with BAA-eligible plans. However, free tiers and standard plans for consumer AI tools do not include BAAs and explicitly state that user inputs may be used for model training. This means the same AI tool might be HIPAA-compliant at one pricing tier and non-compliant at another.
Your AI governance framework must track which providers offer BAAs, which pricing tiers are eligible, and ensure that only BAA-covered tools are used for PHI-adjacent tasks.
Technical Safeguards for AI Usage
HIPAA requires technical safeguards including access controls, audit controls, integrity controls, and transmission security. Here is how each applies to AI tool usage:
Access controls. Not every employee needs AI access for PHI-related tasks. Implement role-based access that limits AI tool usage with PHI to clinical and administrative staff who have completed HIPAA training.
Audit controls. Log every AI interaction, including the tool used, the user, the timestamp, and whether DLP rules triggered. These logs must be retained and available for compliance reviews. TeamPrompt's audit trail is designed specifically for this requirement.
Integrity controls. Require that AI-generated content (discharge summaries, patient communications, clinical notes) be reviewed by a qualified professional before it enters the medical record. AI hallucinations in clinical contexts can have patient safety implications.
Transmission security. Deploy DLP scanning that detects the 18 HIPAA identifiers in real time, before any message is transmitted to an AI provider. Block transmissions that contain PHI to non-BAA-covered tools.
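The blocking decision itself reduces to a simple rule: PHI may flow only to BAA-covered tools. A minimal sketch, assuming a hypothetical set of BAA-covered tool names and a PHI flag produced by an upstream scanner:

```python
# Hypothetical enforcement check. The tool names below are placeholders,
# not statements about any real vendor's BAA status.
BAA_COVERED_TOOLS = {"enterprise-ai"}  # assumption: tools with signed BAAs

def enforce(tool: str, contains_phi: bool) -> str:
    """Decide whether a message may be transmitted to an AI tool."""
    if not contains_phi:
        return "allow"  # no PHI, no HIPAA concern
    if tool in BAA_COVERED_TOOLS:
        return "allow"  # PHI may flow under a signed BAA
    return "block"      # PHI to a non-BAA tool is an unauthorized disclosure
```

The key property is that the check runs before transmission, not after: a log entry about a disclosure that already happened does not prevent the violation.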
De-identification: The Safe Path
HIPAA provides a safe harbor: de-identified data is not PHI and can be shared freely. Data is de-identified when all 18 identifier types are removed (the Safe Harbor method) or when a qualified expert determines that the risk of re-identification is very small (the Expert Determination method).
For AI use cases, this means you can use AI tools safely by de-identifying data before submission. Replace patient names with generic labels, remove dates, strip geographic details below state level, and redact all other identifiers. The AI still receives enough context to generate useful outputs — it does not need to know the patient's real name to draft a discharge summary template.
Automated redaction through DLP makes this practical at scale. Rather than relying on employees to manually de-identify data (error-prone and time-consuming), a DLP system can automatically replace detected identifiers with safe placeholders before the message is sent.
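Mechanically, automated redaction is substitution: each detected identifier is replaced with a labeled placeholder before the prompt leaves the network. The sketch below shows the idea with a few simplified patterns; they are illustrative assumptions, not a complete HIPAA detector.

```python
import re

# Minimal redaction sketch: swap detected identifiers for labeled
# placeholders. Patterns are simplified demonstration assumptions.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE), "[MRN]"),
]

def redact(text: str) -> str:
    """Replace each detected identifier with a generic placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```

The labeled placeholders preserve enough structure for the AI to produce a useful draft ("address the letter to [PATIENT]") while keeping the actual identifiers inside your network.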
Building a HIPAA-Compliant AI Workflow
A compliant AI workflow for healthcare combines policy, training, and technical controls:
- Step 1: Establish which AI tools have BAAs and which tiers are eligible
- Step 2: Deploy DLP with the HIPAA compliance pack to detect all 18 identifier types
- Step 3: Configure enforcement rules — block PHI to non-BAA tools, redact to BAA-covered tools where appropriate
- Step 4: Train staff on the policy with healthcare-specific examples
- Step 5: Build a prompt library with pre-approved clinical prompt templates that include de-identification reminders
- Step 6: Review DLP logs monthly and conduct quarterly compliance assessments
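Steps 1 through 3 can be expressed as a single policy object that drives enforcement. The configuration below is a hypothetical sketch; the vendor names, tiers, and rule actions are placeholders, not a description of any real tool's settings.

```python
# Illustrative policy tying BAA inventory to enforcement rules.
# All names and values are hypothetical placeholders.
AI_POLICY = {
    "tools": {
        "vendor-a": {"baa": True, "tier": "enterprise"},
        "vendor-b": {"baa": False, "tier": "free"},
    },
    "rules": [
        {"tool_baa": False, "action": "block"},   # PHI to non-BAA tool
        {"tool_baa": True, "action": "redact"},   # PHI to BAA-covered tool
    ],
}

def action_for(tool: str, phi_detected: bool) -> str:
    """Look up the enforcement action for a tool given a PHI detection."""
    if not phi_detected:
        return "allow"
    baa = AI_POLICY["tools"][tool]["baa"]
    for rule in AI_POLICY["rules"]:
        if rule["tool_baa"] == baa:
            return rule["action"]
    return "block"  # fail closed if no rule matches
```

Keeping the BAA inventory and the rules in one place means that when a vendor's BAA status or pricing tier changes (step 1), the enforcement behavior (step 3) updates with it.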
The Cost of Non-Compliance
HIPAA penalties range from $100 to $50,000 per violation (amounts are adjusted annually for inflation), up to an annual cap of $1.5 million for repeated violations of the same provision. Beyond financial penalties, breaches require notification to affected individuals, HHS, and potentially the media. The reputational damage to a healthcare organization can be devastating.
Compared to these costs, deploying AI-specific DLP and governance controls is a minor investment. TeamPrompt's HIPAA compliance pack detects all 18 identifier types out of the box, with real-time blocking, automated redaction, and a complete audit trail. View pricing or start a free workspace to protect your healthcare team's AI usage today.