# Vault API Documentation

Server-side prompt proxy with AES-256-GCM encryption. Your prompts never leave the server; clients interact only through the API.

## Quick start
### 1. Install

    cd prompt-vault
    npm install

### 2. Configure
    cp .env.example .env

    # Generate a secure API token:
    node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"

    # Edit .env:
    VAULT_SECRET=your-strong-random-secret
    ANTHROPIC_API_KEY=sk-ant-api03-...
    API_TOKENS=your-generated-token

### 3. Create prompts
Add `.md` files to `prompts/raw/` with YAML frontmatter:
    ---
    id: my-prompt
    name: My Prompt
    author: Your Name
    copyright: (c) 2024 Your Name
    version: 1.0.0
    description: What this prompt does
    ---
    Your prompt content here.
    Use {{variable}} for dynamic values.

### 4. Encrypt and start
    npm run encrypt   # Encrypts prompts into vault.enc
    npm start         # Starts the API server on port 3700

## API Endpoints

### POST /api/run (auth required)

Execute a prompt through the Anthropic API. The prompt content is never returned to the client. The AI response is watermarked with invisible zero-width Unicode characters.
#### Request

    curl -X POST http://localhost:3700/api/run \
      -H "Authorization: Bearer YOUR_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "prompt_id": "seo-article",
        "variables": {
          "topic": "AI Security",
          "keyword": "prompt protection",
          "word_count": "800"
        }
      }'

#### Request body
| Field | Type | Required | Description |
|---|---|---|---|
| `prompt_id` | string | yes | ID of the prompt to execute |
| `variables` | object | no | Key-value pairs to interpolate into the prompt template |
| `model` | string | no | Anthropic model override (default: `claude-sonnet-4-20250514`) |
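As a rough illustration of how `variables` interpolation could be applied server-side before the prompt is sent to the model: the `interpolate` helper below is hypothetical, and the server's actual implementation may differ.

```javascript
// Hypothetical sketch of {{variable}} interpolation (not the server's actual code).
function interpolate(template, variables = {}) {
  // Replace each {{name}} token with its value; leave unknown tokens intact.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(variables, name)
      ? String(variables[name])
      : match
  );
}
```

Leaving unknown tokens intact (rather than substituting an empty string) makes missing-variable bugs visible in the output.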
#### Response

    {
      "response": "...(watermarked AI response)...",
      "prompt_id": "seo-article",
      "model": "claude-sonnet-4-20250514",
      "watermarked": true,
      "usage": { "input_tokens": 150, "output_tokens": 800 }
    }

### GET /api/prompts (auth required)

List all available prompts. Returns metadata only; prompt content is never exposed.
#### Request

    curl http://localhost:3700/api/prompts \
      -H "Authorization: Bearer YOUR_TOKEN"

#### Response
    {
      "prompts": [
        {
          "id": "seo-article",
          "name": "SEO Article Writer",
          "author": "Belkis Aslani",
          "copyright": "(c) 2024 Belkis Aslani",
          "version": "1.0.0",
          "description": "Generates SEO-optimized articles"
        }
      ]
    }

### GET /api/health (public)

Health check endpoint. No authentication required.

    { "status": "ok", "prompts_loaded": 3, "uptime": 1234 }

## Security details
### Encryption

Prompts are encrypted with AES-256-GCM. Key derivation uses scrypt with a 32-byte random salt, and each encryption uses a fresh IV. The cipher provides confidentiality, while the GCM authentication tag guarantees integrity: a tampered vault fails to decrypt.

Vault format: `[salt:32][iv:16][tag:16][ciphertext]`
### Authentication

API tokens are hashed with SHA-256 before comparison. Raw tokens are never stored or logged; request logs show only the first 8 characters of the hash for debugging.

Header: `Authorization: Bearer <token>`
### Watermarking

Every AI response is embedded with invisible zero-width Unicode characters that encode the prompt ID, timestamp, and client token hash. The watermark survives copy-paste.

Characters: `U+200B`, `U+200C`, `U+200D`
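One way such a watermark could be encoded with the three listed characters: use U+200B/U+200C as binary digits and U+200D as a delimiter. This is a hypothetical scheme for illustration; the server's actual encoding is not specified in this document.

```javascript
// Hypothetical zero-width watermark codec (not the server's actual scheme).
const ZWSP = '\u200B', ZWNJ = '\u200C', ZWJ = '\u200D';

function embedWatermark(text, payload) {
  const bits = [...Buffer.from(payload, 'utf8')]
    .map((byte) => byte.toString(2).padStart(8, '0'))
    .join('');
  const encoded = [...bits].map((bit) => (bit === '0' ? ZWSP : ZWNJ)).join('');
  return text + ZWJ + encoded + ZWJ; // invisible suffix survives copy-paste
}

function extractWatermark(text) {
  const match = text.match(/\u200D([\u200B\u200C]+)\u200D/);
  if (!match) return null;
  const bits = [...match[1]].map((ch) => (ch === ZWSP ? '0' : '1')).join('');
  const bytes = bits.match(/.{8}/g).map((b) => parseInt(b, 2));
  return Buffer.from(bytes).toString('utf8');
}
```

Because the marker characters render as zero-width, the payload is invisible in normal display but recoverable from pasted text.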
### Response headers

Every response includes legal copyright headers, content-type protection, frame denial, and no-cache directives. Rate limiting is set to 30 requests per minute per IP.

Headers: `X-Prompt-License` / `X-Prompt-Copyright`
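The 30 requests/minute per-IP limit could be enforced with a sliding-window counter like the sketch below. The function names are hypothetical and the server's actual middleware may differ; this only illustrates the documented policy.

```javascript
// Hypothetical sliding-window rate limiter for the documented 30 req/min per IP.
function createRateLimiter({ limit = 30, windowMs = 60_000 } = {}) {
  const hits = new Map(); // ip -> timestamps of recent requests
  return function allow(ip, now = Date.now()) {
    const recent = (hits.get(ip) || []).filter((t) => now - t < windowMs);
    if (recent.length >= limit) {
      hits.set(ip, recent);
      return false; // caller responds with HTTP 429
    }
    recent.push(now);
    hits.set(ip, recent);
    return true;
  };
}
```

A sliding window avoids the burst-at-the-boundary problem of fixed-window counters: the 31st request within any 60-second span is rejected.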
## Error codes

| Status | Meaning | Resolution |
|---|---|---|
| 400 | Missing `prompt_id` | Include `prompt_id` in the request body |
| 401 | Missing Authorization header | Add a Bearer token to the request header |
| 403 | Invalid API token | Check that your token matches one in `API_TOKENS` |
| 404 | Prompt not found | Verify the `prompt_id` exists via `GET /api/prompts` |
| 429 | Rate limit exceeded | Wait and retry (30 req/min limit) |
| 502 | Anthropic API auth failed | Check `ANTHROPIC_API_KEY` in `.env` |