Prompt-based integration

The last secret key your agent will ever need.

Paste a single ks_ token and proxy URLs into your agent's system prompt. The Keystore proxy resolves real API keys from the vault at request time. No SDK. No env vars. No framework plugin. One token, 100+ services.

The Problem

The MCP ecosystem has reached 97 million monthly SDK downloads and over 10,000 registered servers. Every framework converges on the same broken pattern: dump real API keys into environment variables.

.env
# Before: secret sprawl, every key exposed to the agent runtime
OPENAI_API_KEY=sk-proj-4f8b2c...
ANTHROPIC_API_KEY=sk-ant-api03-7d9e...
NEON_API_KEY=neon-kf82nd...
RESEND_API_KEY=re_8f2k4n...
STRIPE_SECRET_KEY=sk_live_51J3kd...
TAVILY_API_KEY=tvly-a8f3n2...

# After: one opaque token, real keys never leave the vault
KS_TOKEN=ks_a1b2c3d4e5f6789012345678901234567890abcdef1234567890abcdef12345678

LangChain / LangGraph

Expects OPENAI_API_KEY, ANTHROPIC_API_KEY in the environment. The LangGrinch vulnerability (CVE-2025-68664, CVSS 9.3) showed a prompt injection could extract every environment variable from a LangChain process.

CrewAI

Same env var pattern, with an extra wrinkle: known bugs demand OPENAI_API_KEY even when you've configured a completely different service. Your Anthropic-only crew still needs an OpenAI key in the environment.

AutoGPT

Stores everything in .env files. No encryption at rest, no rotation mechanism, no access control beyond filesystem permissions. The security model is "hope nobody reads the file."

OpenAI Agents SDK

Single-service by design. If your agent needs to call Anthropic and OpenAI in the same workflow, you're stitching together two different credential management approaches manually.

97M
Monthly MCP SDK downloads

The agent ecosystem is massive, but no framework has standardized how agents handle credentials.

73%
Vulnerable to prompt injection

Of production AI deployments, per Obsidian Security. If real keys are in the context, they will be extracted.

$47K–$82K
Per-incident cost

Recursive agent loops and stolen keys make headlines. Every agent is a new attack surface.

How It Works

Three steps to prompt-based integration. No SDK to install, no environment variables to configure, no framework plugin to wire up.

01

Create an agent and assign services

In the Keystore dashboard, create an agent and select the services it needs — OpenAI, Anthropic, Resend, Neon, or any of 100+ others. Set a monthly budget and rate limits. Keystore generates a single ks_ token: 64 hex characters, stored as a SHA-256 hash.

02

Paste the prompt snippet into your agent

Copy the system prompt block into your agent's instructions. It contains the proxy base URLs for each service and the agent's ks_ token. No SDK to install, no .env file to manage. Works with MCP servers, AutoGPT, CrewAI, LangGraph, OpenAI Agents SDK, or a plain LLM with tool-use.

03

Agent makes HTTP calls — proxy resolves real credentials

When the agent calls a service URL, the request hits the Keystore proxy. The proxy validates the ks_ token, checks the agent's status, budget, and rate limits, then decrypts the real API key from the vault (AES-256-GCM) and injects it into the outgoing request. The agent never sees a real key.
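From the agent's side, step 03 is just an ordinary HTTP call. A minimal TypeScript sketch with a placeholder token; `proxyRequest` is an illustrative helper, not part of any Keystore SDK.

```typescript
// Illustrative helper (not an official SDK API): build a request that
// targets the Keystore proxy instead of the upstream service.
function proxyRequest(service: string, path: string, ksToken: string) {
  return {
    url: `https://proxy.keystore.io/v1/${service}${path}`,
    headers: {
      "Authorization": `Bearer ${ksToken}`,
      "Content-Type": "application/json",
    },
  };
}

// Usage: the agent calls the proxy URL; the proxy swaps the ks_ token
// for the real OpenAI key before forwarding to api.openai.com.
const req = proxyRequest("openai", "/v1/chat/completions", "ks_YOUR_AGENT_TOKEN");
// await fetch(req.url, { method: "POST", headers: req.headers, body: ... })
```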

The Prompt

This is the complete system prompt snippet. Five services configured, error handling included, one token for everything.

paste into your agent's system prompt
You have access to the following API services. Use these base URLs and
include the Authorization header with every request.

=== API Configuration ===

Authorization header (use for ALL requests):
  Authorization: Bearer ks_YOUR_AGENT_TOKEN

OpenAI (chat completions, embeddings, image generation):
  Base URL: https://proxy.keystore.io/v1/openai
  Example: POST https://proxy.keystore.io/v1/openai/v1/chat/completions

Anthropic (Claude messages):
  Base URL: https://proxy.keystore.io/v1/anthropic
  Header: x-api-key: ks_YOUR_AGENT_TOKEN
  Header: anthropic-version: 2023-06-01
  Example: POST https://proxy.keystore.io/v1/anthropic/v1/messages

Resend (transactional email):
  Base URL: https://proxy.keystore.io/v1/resend
  Example: POST https://proxy.keystore.io/v1/resend/emails

Neon (serverless Postgres management):
  Base URL: https://proxy.keystore.io/v1/neon
  Example: GET https://proxy.keystore.io/v1/neon/projects

Vercel (deployments, domains, environment variables):
  Base URL: https://proxy.keystore.io/v1/vercel
  Example: GET https://proxy.keystore.io/v1/vercel/v13/deployments

=== Rules ===

- Always use the base URLs above. Never attempt to call service APIs directly.
- Always include the Authorization header (or x-api-key for Anthropic).
- If you receive a 429 status, wait 60 seconds before retrying.
- If you receive a 403 status, the agent token may be paused or revoked.
- Do not attempt to modify or decode the ks_ token.
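The retry rules above can be encoded mechanically rather than left to the model. A sketch of the status handling an agent harness might apply, using the 60-second wait from the rules; the function name is illustrative.

```typescript
// Illustrative status handling for the rules above: 429 means back off
// and retry, 403 means the token is paused or revoked (do not retry).
type RetryDecision = { retry: boolean; waitMs: number; reason: string };

function handleProxyStatus(status: number): RetryDecision {
  if (status === 429) return { retry: true, waitMs: 60_000, reason: "rate limited, wait 60s" };
  if (status === 403) return { retry: false, waitMs: 0, reason: "token paused or revoked" };
  if (status >= 200 && status < 300) return { retry: false, waitMs: 0, reason: "success" };
  return { retry: false, waitMs: 0, reason: `unhandled status ${status}` };
}
```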

Under the Hood — 18-Step Proxy Pipeline

When your agent makes a request, it hits the Keystore proxy on Fly.io. Here's every step before the service sees the request.

what-happens.sh
# What happens when your agent calls OpenAI through the proxy:

curl -X POST https://proxy.keystore.io/v1/openai/v1/chat/completions \
  -H "Authorization: Bearer ks_a1b2c3d4e5f6..." \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'

# The Keystore proxy processes this in 18 steps:
#
#  1. Route resolution  — map /v1/openai/* to api.openai.com
#  2. Token extraction   — pull ks_ token from Authorization header
#  3. Agent resolution   — SHA-256 hash → Redis cache → API fallback
#  4. Kill switch check  — reject if agent is paused or revoked
#  5. IP allowlist       — verify source IP if configured
#  6. Service check      — confirm agent has access to openai
#  7. Circuit breaker    — skip if upstream is failing, try fallback
#  8. Rate limit check   — enforce per-agent RPM and RPD limits
#  9. Budget check       — enforce monthly spending cap
# 10. Body size check    — reject oversized payloads
# 11. Abuse detection    — content safety scan
# 12. Credential lookup  — decrypt real key from vault (AES-256-GCM)
# 13. Header injection   — replace ks_ token with real API key
# 14. Forward request    — send to api.openai.com
# 15. Circuit breaker    — record success/failure
# 16. Stream response    — pipe back to agent
# 17. Async logging      — agent, service, path, status, cost, duration
# 18. Metering + alerts  — track spend, trigger budget alerts
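The gating steps in that pipeline short-circuit: the first failed check rejects the request before any credential is touched. A simplified sketch covering three of the 18 steps; the check names and the non-403/429 status codes are assumptions, not the real implementation.

```typescript
// Simplified sketch of the gating steps: the first failing check rejects
// the request before the real credential is ever decrypted.
type Ctx = { paused: boolean; overRateLimit: boolean; overBudget: boolean };
type Check = { name: string; status: number; pass: (c: Ctx) => boolean };

const checks: Check[] = [
  { name: "kill switch", status: 403, pass: (c) => !c.paused },
  { name: "rate limit", status: 429, pass: (c) => !c.overRateLimit },
  { name: "budget", status: 429, pass: (c) => !c.overBudget }, // status code assumed
];

function runPipeline(ctx: Ctx): { ok: boolean; status: number; failed?: string } {
  for (const check of checks) {
    if (!check.pass(ctx)) return { ok: false, status: check.status, failed: check.name };
  }
  // Only past every gate would the proxy decrypt the key and forward.
  return { ok: true, status: 200 };
}
```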

Three Integration Paths

Same proxy. Same security. Pick your path. All three converge on the same AES-256-GCM encryption, budget enforcement, and audit trail.

OpenClaw

Prompt-based

Paste proxy URLs and a ks_ token into the system prompt. No code, no SDK, no env vars. Works with any agent that can make HTTP calls.

interceptAll()

SDK — zero-config

Install @keystore/sdk and call interceptAll(). It patches globalThis.fetch to route all outgoing requests through the proxy.

setupEnv()

SDK — frameworks

For LangChain, CrewAI, or anything that reads env vars. setupEnv() rewrites OPENAI_BASE_URL and others to route through the vault.

three-paths.ts
// OpenClaw works with the same proxy as our SDK.
// Three paths, same infrastructure, same security.

// Path 1: OpenClaw (prompt-based), no code required.
//   Paste proxy URLs + ks_ token into the system prompt.
//   Works with any agent that can make HTTP calls.

// Path 2: SDK interceptAll(), zero-config code integration.
import Keystore from "@keystore/sdk"
const ks = new Keystore({ agentToken: "ks_..." })
ks.interceptAll()  // patches globalThis.fetch

// Path 3: SDK setupEnv(), for LangChain, CrewAI, and other frameworks.
ks.setupEnv(["openai", "anthropic", "resend"])
// Rewrites OPENAI_BASE_URL, ANTHROPIC_BASE_URL, etc.
// Frameworks read env vars; traffic goes through the vault.

Compatibility

Works with every agent framework. If your agent can make an HTTP call, it can use Keystore.

Claude / MCP

Tool-using agents and MCP servers with standard HTTP capabilities. 10,000+ registered MCP servers — all work with a single ks_ token.

AutoGPT

Replace the entire .env file of secrets with one ks_ token in the system prompt. No more unencrypted credentials on disk.

CrewAI

Multi-agent crews share vault access through the same proxy. Each crew member gets its own ks_ token with independent budgets and audit trails.

LangGraph

Stateful agent workflows with vault-resolved credentials at every node. No env vars to configure — the prompt carries everything.

OpenAI Agents SDK

Break out of single-service lock-in. One ks_ token gives your agent access to OpenAI, Anthropic, and every other service simultaneously.

Custom agents

Python, TypeScript, Rust, Go — any language, any runtime. If your agent can make an HTTP call, it works with OpenClaw.

Security

Built for the OWASP Top 10 for Agentic Applications. Opaque token handles for agent credentials — tokens that represent access without revealing the underlying secret.

Opaque token handles

The OWASP Top 10 for Agentic Applications (December 2025) recommends opaque token handles: credentials that represent access without revealing the underlying secret. ks_ tokens implement this pattern. With 73% of production AI deployments vulnerable to prompt injection, that matters: even if an injection leaks the prompt, there's nothing meaningful to extract.

Per-agent spending caps

Set per-request, daily, monthly, or lifetime budgets on each agent. The proxy enforces it on every single request — when the cap is hit, the token stops working. One unified budget across all services.
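A cap like the one described is a comparison across spending windows: a request is allowed only if it fits under every configured limit. A minimal sketch with assumed field names; this is not the real Keystore data model.

```typescript
// Illustrative multi-window budget gate. Field names are assumptions.
type Budget = { perRequest?: number; daily?: number; monthly?: number; lifetime?: number };
type Spend = { daily: number; monthly: number; lifetime: number };

function withinBudget(cost: number, budget: Budget, spent: Spend): boolean {
  if (budget.perRequest !== undefined && cost > budget.perRequest) return false;
  if (budget.daily !== undefined && spent.daily + cost > budget.daily) return false;
  if (budget.monthly !== undefined && spent.monthly + cost > budget.monthly) return false;
  if (budget.lifetime !== undefined && spent.lifetime + cost > budget.lifetime) return false;
  return true;
}
```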

Instant kill switch

Pause or revoke any agent token from the dashboard or CLI. All proxy requests are rejected with a 403 immediately — no propagation delay. Kill one token and the agent loses access to every service instantly.

Full audit trail

Every request through the proxy is logged: agent ID, service, endpoint path, HTTP method, model, status code, response time, and cost. One audit stream for all agent activity across all services.
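The fields in that log line are enough to answer the usual question: what did this agent spend, and where? A sketch of the record shape and a per-service rollup, with assumed field names mirroring the list above.

```typescript
// Assumed audit record shape, mirroring the fields listed above.
interface AuditRecord {
  agentId: string;
  service: string;
  path: string;
  method: string;
  model?: string;
  status: number;
  durationMs: number;
  cost: number;
}

// Roll up spend per service from the single unified audit stream.
function costByService(records: AuditRecord[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    totals.set(r.service, (totals.get(r.service) ?? 0) + r.cost);
  }
  return totals;
}
```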

Operations

Rotate, revoke, and cap spend — without redeploying a single agent.

keystore-cli
# Rotation: update a key once in the vault, every agent keeps working
$ keystore keys rotate --service openai
✓ Rotated OpenAI key for org acme-corp
✓ 14 agents using this service, zero redeployments needed

# Revocation: one token kill, all services cut off instantly
$ keystore agents pause agent_research_bot
✓ Agent paused, all proxy requests will return 403

# Budget: one unified cap across all services
$ keystore agents set-budget agent_research_bot --monthly 50.00
✓ Monthly budget set to $50.00
✓ Enforced on every proxy request, blocks when exhausted

Key rotation

Update a service key once in the vault. Every agent using that service continues working with zero redeployments.

Instant revocation

Revoke one token. The agent loses access to every service instantly. No need to rotate ten individual keys.

Unified budget

One spending cap across all services. When the budget is exhausted, the token stops working.

Extensibility

The URL pattern is always proxy.keystore.io/v1/{slug}/{path}. 100+ built-in services, plus custom APIs registered in your org.

adding-services.txt
# URL pattern (always the same):
# https://proxy.keystore.io/v1/{service-slug}/{original-api-path}

# Built-in services (100+):
OpenAI:     https://proxy.keystore.io/v1/openai/v1/chat/completions
Anthropic:  https://proxy.keystore.io/v1/anthropic/v1/messages
Neon:       https://proxy.keystore.io/v1/neon/projects
Vercel:     https://proxy.keystore.io/v1/vercel/v13/deployments
Resend:     https://proxy.keystore.io/v1/resend/emails
Tavily:     https://proxy.keystore.io/v1/tavily/search

# Custom services registered in your org:
My Internal API:
  Base URL: https://proxy.keystore.io/v1/my-internal-api
  Example: POST https://proxy.keystore.io/v1/my-internal-api/data

# Auth styles (the proxy handles each automatically):
# Most services:   Authorization: Bearer ks_TOKEN  →  Authorization: Bearer sk-real-key
# Anthropic:       x-api-key: ks_TOKEN             →  x-api-key: sk-ant-real-key
# Custom:          Configurable (bearer, header, or query param)
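The header swap in that table is the entire credential hand-off. A sketch of how a proxy might perform it, assuming a simple per-service auth-style config; this is illustrative, not the real implementation (the query-param style is omitted for brevity).

```typescript
// Illustrative header injection: replace the opaque ks_ token with the
// real key, in whichever header the service expects. Config is assumed.
type AuthStyle = { kind: "bearer" } | { kind: "header"; name: string };

function injectCredential(
  headers: Record<string, string>,
  style: AuthStyle,
  realKey: string,
): Record<string, string> {
  const out = { ...headers };
  if (style.kind === "bearer") {
    out["Authorization"] = `Bearer ${realKey}`;
  } else {
    out[style.name] = realKey;
  }
  return out;
}
```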

One token. Every service. Zero secrets exposed.

Get started in under 5 minutes. Create an agent, get a ks_ token, paste the prompt snippet. No SDK required, no credit card needed.