Complete security
for AI prompts.
Two layers of protection that no other open-source tool combines.
Two pillars of prompt security
Prompt-Armor and Prompt-Vault solve two fundamentally different problems. Together, they provide complete protection for your AI prompts.
PROMPT-ARMOR
Tamper-evidence. Encodes prompts as Base64 with a SHA-256 integrity hash. If anyone changes a single character, the AI detects it and refuses execution.
Analogy: A tamper-evident seal on a package. You can see the contents, but you know if someone opened it.
PROMPT-VAULT
Confidentiality. Server-side API proxy with AES-256-GCM encrypted storage. Clients send a prompt ID; the server executes it. The prompt content is never exposed.
Analogy: A bank vault. The contents are completely hidden. You interact through a controlled interface.
Architecture
Armor Flow
Client-side. Paste into any AI chat. Tamper = detection.
Vault Flow
Server-side. Prompt never leaves the server.
How Armor works
Write your prompt
Write your AI prompt normally. Any prompt works -- landing pages, APIs, emails, code generation.
Armor encodes it
The prompt gets Base64-encoded with a SHA-256 integrity hash and a strict system instruction wrapper.
Tamper = failure
Change one character in the encoded block and the hash no longer matches. The AI detects the corruption and refuses to execute.
The principle
--- BEGIN ARMOR BLOCK ---
SGVsbG8gV29ybGQhIFRoaXMgaXMgYSB0ZXN0
--- END ARMOR BLOCK ---
Result: the AI decodes and executes correctly.
--- BEGIN ARMOR BLOCK ---
SGVsbG8gV29ybGQhIFRoaXMgaZMgYSB0ZXN0
--- END ARMOR BLOCK ---
Result: [PROMPT-ARMOR] Integrity check failed.
How Vault works
Encrypt your prompts
Write prompts as Markdown files. The CLI tool encrypts them into a single AES-256-GCM vault file.
Client sends prompt ID
Clients authenticate with a Bearer token and send a prompt ID + variables. They never see the prompt content.
Watermarked response
The server executes the prompt via the Anthropic API and returns the response with invisible zero-width character watermarking.
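Zero-width watermarking can be sketched as embedding a client ID into the response as invisible Unicode characters. Here zero-width space (U+200B) and zero-width non-joiner (U+200C) stand in as the two "bits"; the tool's actual encoding scheme may differ:

```javascript
// Two invisible characters serve as binary digits.
const ZERO = "\u200B"; // zero-width space
const ONE = "\u200C"; // zero-width non-joiner

// Append the client ID, bit by bit, as invisible characters.
function watermark(text, clientId) {
  const bits = [...Buffer.from(clientId, "utf8")]
    .map((byte) => byte.toString(2).padStart(8, "0"))
    .join("");
  const mark = [...bits].map((b) => (b === "0" ? ZERO : ONE)).join("");
  return text + mark; // renders identically to the unmarked text
}

// Recover the client ID by collecting the invisible characters.
function extractWatermark(text) {
  const bits = [...text]
    .filter((ch) => ch === ZERO || ch === ONE)
    .map((ch) => (ch === ZERO ? "0" : "1"))
    .join("");
  const bytes = bits.match(/.{8}/g) || [];
  return Buffer.from(bytes.map((b) => parseInt(b, 2))).toString("utf8");
}
```

If a watermarked response leaks, extracting the invisible characters identifies which client received it.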
Generator
Paste your prompt below to generate a protected armor block. You can also verify existing blocks to check if they have been tampered with.
Use cases
Choose the right tool for your scenario -- or combine both for maximum protection.
Armor -- Solo developers and teams
Prompt marketplaces
Sell prompts that buyers cannot silently modify and then claim do not work.
Team prompt libraries
Share standardized prompts across your team with guaranteed consistency.
Educational settings
Distribute assignment prompts to students that cannot be tweaked to get easier answers.
Open-source prompts
Publish prompts publicly with proof of authorship and integrity verification.
Vault -- Commercial providers
AI-powered SaaS products
Run proprietary prompts server-side. Customers use the output but never see the prompt engineering behind it.
Prompt-as-a-Service
Monetize your prompt library via API access. Watermarking traces every response back to the client.
Enterprise prompt governance
Central vault for approved prompts. Role-based access, audit logs, and rate limiting out of the box.
Embedded AI features
Ship AI capabilities in your app without exposing your competitive advantage -- the prompts stay on your server.
Security stack
| Layer | Technology | Tool | Protection |
|---|---|---|---|
| Integrity | SHA-256 + Base64 | Armor | Detects any modification |
| Encryption | AES-256-GCM + scrypt | Vault | Prompts encrypted at rest |
| Authentication | Bearer token (SHA-256) | Vault | Client identity verification |
| Watermarking | Zero-width Unicode | Vault | Response traceability |
| Rate Limiting | express-rate-limit | Vault | Abuse prevention |
What this is (and is not)
Prompt-Armor is tamper-evidence, not encryption. Base64 can be decoded by anyone. The protection comes from the SHA-256 hash and the system instruction contract -- if someone changes anything, the AI detects the corruption and refuses to execute.
Prompt-Vault is real cryptographic security. AES-256-GCM encryption means the prompt content is mathematically unreadable without the key. The prompt never leaves the server, never reaches the client.
Together they cover both sides of prompt security: integrity (did someone tamper with it?) and confidentiality (can someone read it?). No other open-source tool combines both in one project.