AI Security Rules & DLP

How does auto-sanitization work?

Auto-sanitization replaces detected sensitive data with safe placeholder tokens (such as {{PATIENT_NAME}} or {{SSN}}) before the prompt reaches an AI tool. When a guardrail rule detects sensitive content, the sanitization option replaces the matched text with a descriptive {{PLACEHOLDER}} token instead of simply blocking the prompt or warning the user. This preserves the prompt's structure and intent while stripping out the actual sensitive data. Each sanitized prompt is recorded in the audit trail with both the original detection and the replacement that was applied.
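The flow above can be sketched in a few lines. This is a minimal illustration, not the product's actual implementation: the detection rules, the `sanitize` function, and the audit-record fields are all hypothetical, and real guardrail engines use far more robust detectors than these simple regexes.

```python
import re

# Hypothetical detection rules: regex pattern -> placeholder token.
RULES = {
    r"\b\d{3}-\d{2}-\d{4}\b": "{{SSN}}",
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "{{EMAIL}}",
}

def sanitize(prompt: str):
    """Replace matched sensitive spans with placeholder tokens and
    return the sanitized prompt plus an audit log of each detection."""
    audit = []
    sanitized = prompt
    for pattern, token in RULES.items():
        # Record every match (original detection + replacement) for the audit trail.
        for match in re.finditer(pattern, prompt):
            audit.append({"rule": pattern,
                          "original": match.group(),
                          "replacement": token})
        # Substitute the matched text with the placeholder token.
        sanitized = re.sub(pattern, token, sanitized)
    return sanitized, audit

clean, log = sanitize("Patient SSN is 123-45-6789, contact jo@example.com")
# clean -> "Patient SSN is {{SSN}}, contact {{EMAIL}}"
# log   -> two audit records, one per detection
```

Note how the surrounding text ("Patient SSN is …, contact …") survives intact, so the AI tool still receives a prompt with the same structure and intent.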