Store your own API keys or purchase pre-provisioned ones from the marketplace. Your agents get scoped access tokens — never real secrets.
104+ supported services
The problem
You're leaking all your keys via .env. Your agents keep saying “I don't have access” and “I need a key for that.”
Vault your keys. Ship keyless agents.
OPENAI_API_KEY=sk-proj-4f8b2c...
ANTHROPIC_API_KEY=sk-ant-api03-7d9e...
NEON_API_KEY=neon-kf82nd...
RESEND_API_KEY=re_8f2k4n...
STRIPE_SECRET_KEY=sk_live_51J3kd...
TAVILY_API_KEY=tvly-a8f3n2...
KS_TOKEN=ks_a1b2c3d4e5f6...abcdef12345678
For agents
Every time your agent says “I need an API key for that” — that's a failed task. One ks_ token gives your agent scoped access to every service. No keys to manage, no access denied.
OpenAI, Anthropic, Stripe, Neon, Resend — your agent gets one ks_ token and never asks for credentials again.
The vault proxies every request. Your agent operates with an opaque token — real secrets never leave the vault.
Set per-agent spend limits and instant kill switches. The vault enforces policy on every request, automatically.
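The enforcement step can be sketched as a simple per-request check. Everything here — the `AgentPolicy` shape, field names, and the `authorize` function — is an illustrative assumption, not Keystore's actual schema or API.

```typescript
// Illustrative per-agent policy check, of the kind a vault could run on
// every proxied request. Field names are assumptions, not Keystore's schema.
interface AgentPolicy {
  killSwitch: boolean;          // instantly blocks all traffic when true
  monthlySpendLimitUsd: number; // hard cap on estimated monthly spend
}

export function authorize(
  policy: AgentPolicy,
  spentThisMonthUsd: number,
  estimatedRequestCostUsd: number,
): boolean {
  if (policy.killSwitch) return false; // kill switch wins over everything
  return spentThisMonthUsd + estimatedRequestCostUsd <= policy.monthlySpendLimitUsd;
}
```

Because the check runs in the vault rather than in the agent, flipping the kill switch takes effect on the very next request — no agent redeploy needed.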
Integration methods
Choose the integration pattern that fits your architecture.
One line patches globalThis.fetch. Every request to OpenAI, Anthropic, Neon, Resend, or any supported service is routed through the vault — your agent code doesn't change.
“We went from 6 exposed API keys to one opaque token. interceptAll() took 30 seconds to set up.”
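The patch-once mechanism can be sketched like this. The vault URL, the list of proxied hostnames, and the function shape are all illustrative assumptions — not Keystore's actual SDK.

```typescript
// Illustrative sketch of a fetch-patching interceptor (assumed names, not
// Keystore's real SDK). Requests to known API hosts are rerouted to a vault
// proxy and authorized with the opaque ks_ token; all other traffic is untouched.
const VAULT_BASE = "https://vault.keystore.example"; // hypothetical proxy base
const PROXIED_HOSTS = new Set(["api.openai.com", "api.anthropic.com", "api.resend.com"]);

// Pure helper: map a direct API URL to its vault-proxied equivalent.
export function toVaultUrl(raw: string): string | null {
  const url = new URL(raw);
  if (!PROXIED_HOSTS.has(url.hostname)) return null; // leave other hosts direct
  return `${VAULT_BASE}/${url.hostname}${url.pathname}${url.search}`;
}

export function interceptAll(ksToken: string): void {
  const realFetch = globalThis.fetch;
  globalThis.fetch = (async (input: any, init?: any) => {
    const raw = typeof input === "string" ? input : input.url ?? String(input);
    const proxied = toVaultUrl(raw);
    if (proxied === null) return realFetch(input, init);
    const headers = new Headers(init?.headers);
    headers.set("Authorization", `Bearer ${ksToken}`); // real key never leaves the vault
    return realFetch(proxied, { ...init, headers });
  }) as typeof fetch;
}
```

Because the rewrite happens at the `fetch` layer, every SDK built on top of it is proxied without knowing the vault exists.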
Target a single client instance. wrap() rewrites its baseURL to the vault while leaving everything else untouched. Perfect for selective proxying.
“wrap() gave us per-client control. We proxy LLM calls through Keystore but keep internal APIs direct.”
LangChain, CrewAI, AutoGPT — any framework that reads OPENAI_API_KEY from the environment. setupEnv() rewrites those vars to point at the vault. Zero framework code changes.
“Our LangChain pipeline used 4 different API keys from env vars. setupEnv() replaced all of them with one vault call.”
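The env-var rewrite can be sketched like this. The variable names, proxy paths, and vault URL are illustrative assumptions, not Keystore's actual SDK — though `OPENAI_BASE_URL` is a convention some SDKs honor.

```typescript
// Illustrative sketch of setupEnv() (assumed names, not Keystore's real SDK).
// Frameworks that read these env vars pick up the vault proxy with zero
// code changes.
const VAULT = "https://vault.keystore.example"; // hypothetical proxy base

const REWRITES: Array<[keyVar: string, baseVar: string, path: string]> = [
  ["OPENAI_API_KEY", "OPENAI_BASE_URL", "/api.openai.com/v1"],
  ["ANTHROPIC_API_KEY", "ANTHROPIC_BASE_URL", "/api.anthropic.com"],
];

export function setupEnv(
  ksToken: string,
  env: Record<string, string | undefined> = process.env,
): void {
  for (const [keyVar, baseVar, path] of REWRITES) {
    env[keyVar] = ksToken;       // opaque token replaces the real secret
    env[baseVar] = VAULT + path; // SDKs honoring *_BASE_URL hit the vault
  }
}
```

Since frameworks like LangChain construct their clients from these variables at startup, calling this once before the pipeline boots is enough — no framework code changes.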
Keystore's vault is built from the ground up for AI agent credential management.
interceptAll() patches fetch once. Every SDK call — OpenAI, Anthropic, Neon, S3 — routes through the vault automatically.
Set monthly spend limits, request rate caps, and instant kill switches per agent. The vault enforces policy on every request.
Every proxied request is logged: agent ID, service, path, status, latency, and estimated cost. Export for compliance.
We deleted 47 environment variables across 12 agents and replaced them with a single ks_ token. Keystore paid for itself on day one.
OpenClaw
Point your agent at keystore.io/agents.txt and give it a ks_ token. Works with MCP servers, AutoGPT, CrewAI, or any agent that makes HTTP calls.
# Point your agent at keystore.io/agents.txt
# That file contains everything your agent needs:
# - Proxy base URLs for every service
# - Authentication instructions
# - Error handling rules
# In your agent's system prompt, just add:
Read https://keystore.io/agents.txt for API configuration.
Use agent token: ks_YOUR_AGENT_TOKEN
# That's it. Your agent reads the file, gets the proxy URLs,
# and routes all API calls through the Keystore vault.
# No SDK. No env vars. Any language. Any framework.

No SDK to install, no env vars to configure. If your agent can make an HTTP call, it works with OpenClaw.
OpenAI, Anthropic, Neon, Resend, Vercel, and every other service in the catalog. One token unlocks them all.
Same proxy, same AES-256-GCM encryption, same budget enforcement, same audit trail as the SDK methods.
Stop managing .env files and start shipping. Keystore gives your agents secure, scoped access to 100+ APIs with a single token.
Get Started Free