Paste a single ks_ token and proxy URLs into your agent's system prompt. The Keystore proxy resolves real API keys from the vault at request time. No SDK. No env vars. No framework plugin. One token, 100+ services.
The MCP ecosystem has reached 97 million monthly SDK downloads and over 10,000 registered servers. Every framework converges on the same broken pattern: dump real API keys into environment variables.
# Before: secret sprawl — every key exposed to the agent runtime
OPENAI_API_KEY=sk-proj-4f8b2c...
ANTHROPIC_API_KEY=sk-ant-api03-7d9e...
NEON_API_KEY=neon-kf82nd...
RESEND_API_KEY=re_8f2k4n...
STRIPE_SECRET_KEY=sk_live_51J3kd...
TAVILY_API_KEY=tvly-a8f3n2...
# After: one opaque token — real keys never leave the vault
KS_TOKEN=ks_a1b2c3d4e5f6789012345678901234567890abcdef1234567890abcdef12345678

Expects OPENAI_API_KEY and ANTHROPIC_API_KEY in the environment. The LangGrinch vulnerability (CVE-2025-68664, CVSS 9.3) showed that a prompt injection could extract every environment variable from a LangChain process.
Same env var pattern, with an extra wrinkle: known bugs demand OPENAI_API_KEY even when you've configured a completely different service. Your Anthropic-only crew still needs an OpenAI key in the environment.
Stores everything in .env files. No encryption at rest, no rotation mechanism, no access control beyond filesystem permissions. The security model is "hope nobody reads the file."
Single-service by design. If your agent needs to call Anthropic and OpenAI in the same workflow, you're stitching together two different credential management approaches manually.
The agent ecosystem is massive, but no framework has standardized how agents handle credentials.
73% of production AI deployments are vulnerable to prompt injection, per Obsidian Security. If real keys are in the context, they will be extracted.
Recursive agent loops and stolen keys make headlines. Every agent is a new attack surface.
Three steps to prompt-based integration. No SDK to install, no environment variables to configure, no framework plugin to wire up.
In the Keystore dashboard, create an agent and select the services it needs — OpenAI, Anthropic, Resend, Neon, or any of 100+ others. Set a monthly budget and rate limits. Keystore generates a single ks_ token: 64 hex characters, stored as a SHA-256 hash.
Copy the system prompt block into your agent's instructions. It contains the proxy base URLs for each service and the agent's ks_ token. No SDK to install, no .env file to manage. Works with MCP servers, AutoGPT, CrewAI, LangGraph, OpenAI Agents SDK, or a plain LLM with tool-use.
When the agent calls a service URL, the request hits the Keystore proxy. The proxy validates the ks_ token, checks the agent's status, budget, and rate limits, then decrypts the real API key from the vault (AES-256-GCM) and injects it into the outgoing request. The agent never sees a real key.
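In pseudocode, the resolve-and-inject step might look like the sketch below. The in-memory vault keyed by token hash is an illustrative assumption, not Keystore's actual internals; it only shows the shape of the flow (hash the presented token, look up the agent, swap in the real key):

```typescript
// Illustrative sketch of the proxy's resolve-and-inject step.
// The `agents` map and field names are assumptions, not Keystore's real code.
import { createHash } from "node:crypto";

// The vault keys agents by SHA-256(ks_ token); the raw token is never persisted.
const agents = new Map<string, { status: "active" | "paused"; realKey: string }>();

function register(token: string, realKey: string): void {
  const hash = createHash("sha256").update(token).digest("hex");
  agents.set(hash, { status: "active", realKey });
}

// Swap the opaque ks_ token for the real upstream key.
function injectCredentials(
  headers: Record<string, string>
): Record<string, string> {
  const token = (headers["authorization"] ?? "").replace(/^Bearer /, "");
  const hash = createHash("sha256").update(token).digest("hex");
  const agent = agents.get(hash);
  if (!agent) throw new Error("401: unknown token");
  if (agent.status === "paused") throw new Error("403: agent paused");
  return { ...headers, authorization: `Bearer ${agent.realKey}` };
}

register("ks_demo", "sk-real-key");
const out = injectCredentials({ authorization: "Bearer ks_demo" });
// out.authorization now carries the real key; the agent never saw it
```

The important property is the direction of the swap: the opaque token travels with the agent, and the real key only ever appears in the outgoing hop from proxy to upstream.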
This is the complete system prompt snippet. Five services configured, error handling included, one token for everything.
You have access to the following API services. Use these base URLs and
include the Authorization header with every request.
=== API Configuration ===
Authorization header (use for ALL requests):
Authorization: Bearer ks_YOUR_AGENT_TOKEN
OpenAI (chat completions, embeddings, image generation):
Base URL: https://proxy.keystore.io/v1/openai
Example: POST https://proxy.keystore.io/v1/openai/v1/chat/completions
Anthropic (Claude messages):
Base URL: https://proxy.keystore.io/v1/anthropic
Header: x-api-key: ks_YOUR_AGENT_TOKEN
Header: anthropic-version: 2023-06-01
Example: POST https://proxy.keystore.io/v1/anthropic/v1/messages
Resend (transactional email):
Base URL: https://proxy.keystore.io/v1/resend
Example: POST https://proxy.keystore.io/v1/resend/emails
Neon (serverless Postgres management):
Base URL: https://proxy.keystore.io/v1/neon
Example: GET https://proxy.keystore.io/v1/neon/projects
Vercel (deployments, domains, environment variables):
Base URL: https://proxy.keystore.io/v1/vercel
Example: GET https://proxy.keystore.io/v1/vercel/v13/deployments
=== Rules ===
- Always use the base URLs above. Never attempt to call service APIs directly.
- Always include the Authorization header (or x-api-key for Anthropic).
- If you receive a 429 status, wait 60 seconds before retrying.
- If you receive a 403 status, the agent token may be paused or revoked.
- Do not attempt to modify or decode the ks_ token.

When your agent makes a request, it hits the Keystore proxy on Fly.io. Here's every step before the service sees the request.
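The retry and error rules in that snippet can be expressed as a small decision helper. This is an illustration of the policy, not part of any Keystore SDK:

```typescript
// Illustrative policy matching the rules above:
// 429 → wait 60 seconds and retry; 403 → stop, the token may be paused or revoked.
type Decision =
  | { action: "retry"; waitSeconds: number }
  | { action: "stop"; reason: string }
  | { action: "ok" };

function decide(status: number): Decision {
  if (status === 429) return { action: "retry", waitSeconds: 60 };
  if (status === 403) return { action: "stop", reason: "token paused or revoked" };
  if (status >= 400) return { action: "stop", reason: `upstream error ${status}` };
  return { action: "ok" };
}
```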
# What happens when your agent calls OpenAI through the proxy:
curl -X POST https://proxy.keystore.io/v1/openai/v1/chat/completions \
-H "Authorization: Bearer ks_a1b2c3d4e5f6..." \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
# The Keystore proxy processes this in 18 steps:
#
# 1. Route resolution — map /v1/openai/* to api.openai.com
# 2. Token extraction — pull ks_ token from Authorization header
# 3. Agent resolution — SHA-256 hash → Redis cache → API fallback
# 4. Kill switch check — reject if agent is paused or revoked
# 5. IP allowlist — verify source IP if configured
# 6. Service check — confirm agent has access to openai
# 7. Circuit breaker — skip if upstream is failing, try fallback
# 8. Rate limit check — enforce per-agent RPM and RPD limits
# 9. Budget check — enforce monthly spending cap
# 10. Body size check — reject oversized payloads
# 11. Abuse detection — content safety scan
# 12. Credential lookup — decrypt real key from vault (AES-256-GCM)
# 13. Header injection — replace ks_ token with real API key
# 14. Forward request — send to api.openai.com
# 15. Circuit breaker — record success/failure
# 16. Stream response — pipe back to agent
# 17. Async logging — agent, service, path, status, cost, duration
# 18. Metering + alerts — track spend, trigger budget alerts

Same proxy. Same security. Pick your path. All three converge on the same AES-256-GCM encryption, budget enforcement, and audit trail.
Paste proxy URLs and a ks_ token into the system prompt. No code, no SDK, no env vars. Works with any agent that can make HTTP calls.
Install @keystore/sdk and call interceptAll(). It patches globalThis.fetch to route all outgoing requests through the proxy.
For LangChain, CrewAI, or anything that reads env vars. setupEnv() rewrites OPENAI_BASE_URL and others to route through the vault.
// OpenClaw works with the same proxy as our SDK.
// Three paths, same infrastructure, same security.

// Path 1: OpenClaw (prompt-based) — no code required.
// Paste proxy URLs + ks_ token into the system prompt.
// Works with any agent that can make HTTP calls.

// Path 2: SDK interceptAll() — zero-config code integration
import Keystore from "@keystore/sdk"
const ks = new Keystore({ agentToken: "ks_..." })
ks.interceptAll() // patches globalThis.fetch

// Path 3: SDK setupEnv() — for LangChain, CrewAI, frameworks
ks.setupEnv(["openai", "anthropic", "resend"])
// Rewrites OPENAI_BASE_URL, ANTHROPIC_BASE_URL, etc.
// Frameworks read env vars → traffic goes through the vault.

Works with every agent framework. If your agent can make an HTTP call, it can use Keystore.
Tool-using agents and MCP servers with standard HTTP capabilities. 10,000+ registered MCP servers — all work with a single ks_ token.
Replace the entire .env file of secrets with one ks_ token in the system prompt. No more unencrypted credentials on disk.
Multi-agent crews share vault access through the same proxy. Each crew member gets its own ks_ token with independent budgets and audit trails.
Stateful agent workflows with vault-resolved credentials at every node. No env vars to configure — the prompt carries everything.
Break out of single-service lock-in. One ks_ token gives your agent access to OpenAI, Anthropic, and every other service simultaneously.
Python, TypeScript, Rust, Go — any language, any runtime. If your agent can make an HTTP call, it works with OpenClaw.
Built for the OWASP Top 10 for Agentic Applications. Opaque token handles for agent credentials — tokens that represent access without revealing the underlying secret.
The OWASP Top 10 for Agentic Applications (December 2025) recommends opaque token handles — credentials that represent access without revealing the underlying secret. ks_ tokens implement this pattern. With 73% of production AI deployments vulnerable to prompt injection, that matters: a successful injection exposes only the opaque token, and there's nothing meaningful to extract.
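The opaque-handle pattern is simple to sketch: the server persists only SHA-256(token), so a leaked database row is not a usable credential. This is an illustrative sketch, not Keystore's implementation:

```typescript
// Illustrative opaque-token mint/verify cycle. Only the hash is ever stored.
import { createHash, randomBytes, timingSafeEqual } from "node:crypto";

function mintToken(): { token: string; storedHash: Buffer } {
  // 64 hex characters after the ks_ prefix, as described above.
  const token = "ks_" + randomBytes(32).toString("hex");
  return { token, storedHash: createHash("sha256").update(token).digest() };
}

function verify(presented: string, storedHash: Buffer): boolean {
  const h = createHash("sha256").update(presented).digest();
  // Both digests are 32 bytes, so a constant-time compare is safe here.
  return timingSafeEqual(h, storedHash);
}
```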
Set per-request, daily, monthly, or lifetime budgets on each agent. The proxy enforces it on every single request — when the cap is hit, the token stops working. One unified budget across all services.
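A minimal sketch of that per-request enforcement check, with assumed field names (the real proxy also distinguishes per-request, daily, and lifetime caps):

```typescript
// Illustrative monthly-cap check; field names are assumptions.
interface Budget {
  monthlyCapUsd: number;
  spentUsd: number;
}

// Returns true if the request may proceed, recording its estimated cost.
// Once the cap would be exceeded, every subsequent request is refused.
function chargeRequest(b: Budget, estCostUsd: number): boolean {
  if (b.spentUsd + estCostUsd > b.monthlyCapUsd) return false;
  b.spentUsd += estCostUsd;
  return true;
}

const budget: Budget = { monthlyCapUsd: 50, spentUsd: 49 };
chargeRequest(budget, 0.5); // allowed: 49.5 <= 50
chargeRequest(budget, 1.0); // refused: would exceed the cap
```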
Pause or revoke any agent token from the dashboard or CLI. All proxy requests are rejected with a 403 immediately — no propagation delay. Kill one token and the agent loses access to every service instantly.
Every request through the proxy is logged: agent ID, service, endpoint path, HTTP method, model, status code, response time, and cost. One audit stream for all agent activity across all services.
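The fields listed above suggest a record shape roughly like the one below; the exact schema is an assumption:

```typescript
// Illustrative audit-record shape built from the fields named above.
interface AuditRecord {
  agentId: string;
  service: string;
  path: string;
  method: string;
  model?: string; // populated for LLM calls
  status: number;
  durationMs: number;
  costUsd: number;
  at: string; // ISO timestamp added at log time
}

function logRequest(r: Omit<AuditRecord, "at">): AuditRecord {
  return { ...r, at: new Date().toISOString() };
}

const rec = logRequest({
  agentId: "agent_research_bot",
  service: "openai",
  path: "/v1/chat/completions",
  method: "POST",
  model: "gpt-4o",
  status: 200,
  durationMs: 412,
  costUsd: 0.002,
});
```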
Rotate, revoke, and cap spend — without redeploying a single agent.
# Rotation — update a key once in the vault, every agent keeps working
$ keystore keys rotate --service openai
✓ Rotated OpenAI key for org acme-corp
✓ 14 agents using this service — zero redeployments needed
# Revocation — one token kill, all services cut off instantly
$ keystore agents pause agent_research_bot
✓ Agent paused — all proxy requests will return 403
# Budget — one unified cap across all services
$ keystore agents set-budget agent_research_bot --monthly 50.00
✓ Monthly budget set to $50.00
✓ Enforced on every proxy request — blocks when exhausted

Update a service key once in the vault. Every agent using that service continues working with zero redeployments.
Revoke one token. The agent loses access to every service instantly. No need to rotate ten individual keys.
One spending cap across all services. When the budget is exhausted, the token stops working.
The URL pattern is always proxy.keystore.io/v1/{slug}/{path}. 100+ built-in services, plus custom APIs registered in your org.
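The slug-to-upstream rewrite can be sketched in a few lines. Only the OpenAI upstream (api.openai.com) is confirmed earlier on this page; the Anthropic upstream below is an assumption:

```typescript
// Illustrative rewrite of /v1/{slug}/{original-api-path} to the upstream host.
const UPSTREAMS: Record<string, string> = {
  openai: "https://api.openai.com",
  anthropic: "https://api.anthropic.com", // assumed upstream
};

function rewrite(proxyUrl: string): string {
  const { pathname } = new URL(proxyUrl);
  const m = pathname.match(/^\/v1\/([^/]+)(\/.*)$/);
  if (!m) throw new Error("bad proxy path");
  const [, slug, rest] = m;
  const upstream = UPSTREAMS[slug];
  if (!upstream) throw new Error(`unknown service: ${slug}`);
  // The original API path passes through untouched.
  return upstream + rest;
}
```

Because the original path is preserved verbatim, existing request code only needs its base URL swapped.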
# URL pattern — always the same:
# https://proxy.keystore.io/v1/{service-slug}/{original-api-path}
# Built-in services (100+):
OpenAI: https://proxy.keystore.io/v1/openai/v1/chat/completions
Anthropic: https://proxy.keystore.io/v1/anthropic/v1/messages
Neon: https://proxy.keystore.io/v1/neon/projects
Vercel: https://proxy.keystore.io/v1/vercel/v13/deployments
Resend: https://proxy.keystore.io/v1/resend/emails
Tavily: https://proxy.keystore.io/v1/tavily/search
# Custom services registered in your org:
My Internal API:
Base URL: https://proxy.keystore.io/v1/my-internal-api
Example: POST https://proxy.keystore.io/v1/my-internal-api/data
# Auth styles — the proxy handles both automatically:
# Most services: Authorization: Bearer ks_TOKEN → Authorization: Bearer sk-real-key
# Anthropic: x-api-key: ks_TOKEN → x-api-key: sk-ant-real-key
# Custom: Configurable (bearer, header, or query param)

Get started in under 5 minutes. Create an agent, get a ks_ token, paste the prompt snippet. No SDK required, no credit card needed.