Intercept All Fetch Calls: Build a Multi-Provider Agent
interceptAll() is the fastest way to connect any SDK to Keystore. It patches globalThis.fetch so every outgoing request to a supported provider is automatically routed through the vault — your agent code doesn't change at all.
This is the recommended starting point for most projects. You can always switch to wrap() or setupEnv() later.
What you'll build
A simple agent that calls both OpenAI and Anthropic, with all API keys resolved from the Keystore vault at request time. Your code never sees a real secret.
Prerequisites
- A Keystore account with an agent token (ks_...)
- OpenAI and Anthropic keys stored in your vault (BYOK or marketplace)
- Node.js 18+
Setup
Install dependencies
npm install @keystore/sdk openai @anthropic-ai/sdk
Initialize Keystore and intercept
import Keystore from "@keystore/sdk";
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
const ks = new Keystore({ agentToken: process.env.KS_TOKEN! });
// Patch globalThis.fetch — all provider requests now route through the vault
ks.interceptAll();
That's it. Every HTTP request to api.openai.com or api.anthropic.com is now intercepted and routed through vault.keystore.com, where your ks_ token is exchanged for the real API key.
Use SDKs as normal
// OpenAI — works exactly as documented, no config changes
const openai = new OpenAI();
const gptResponse = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain quantum computing in one sentence." }],
});
console.log("GPT-4o:", gptResponse.choices[0].message.content);
// Anthropic — same story
const claude = new Anthropic();
const claudeResponse = await claude.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 256,
  messages: [{ role: "user", content: "Explain quantum computing in one sentence." }],
});
console.log("Claude:", claudeResponse.content[0].text);
Clean up when done
// Restore the original fetch when your agent is finished
ks.restore();
Calling ks.restore() removes the fetch patch. Any subsequent requests go directly to providers (and will fail without real keys in env vars).
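Because the patch is global, it is worth guaranteeing that restore runs even when the agent throws. A minimal, self-contained sketch of that patch-and-restore pattern (using a plain fetch stub rather than the real SDK; with the SDK, the same shape applies with ks.interceptAll() before the task and ks.restore() in the finally block):

```typescript
// Generic patch-and-restore pattern: swap globalThis.fetch for the
// duration of a task, then put the original back in a finally block.
type FetchFn = typeof globalThis.fetch;

async function withPatchedFetch<T>(
  patched: FetchFn,
  task: () => Promise<T>,
): Promise<T> {
  const original = globalThis.fetch;
  globalThis.fetch = patched;
  try {
    return await task();
  } finally {
    // Runs even if task() throws, so fetch is never left patched.
    globalThis.fetch = original;
  }
}
```

The try/finally guarantees cleanup on both the success and error paths, which matters for long-lived processes that run more than one agent.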
How it works under the hood
When interceptAll() is active:
- Your code calls fetch("https://api.openai.com/v1/chat/completions", ...)
- Keystore intercepts the request and rewrites it to vault.keystore.com/v1/openai/chat/completions
- The vault validates your ks_ token, checks budgets and rate limits
- The real API key is decrypted (AES-256-GCM) and injected into the outgoing request
- The provider sees a normal, authenticated request
The interception is provider-aware. Requests to non-provider domains (your own APIs, databases, etc.) pass through untouched.
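The URL-rewriting step above can be sketched as a pure function. This is a hypothetical illustration, not the SDK's actual internals; the host-to-provider mapping and helper name are made up, while the rewritten paths follow the vault.keystore.com/v1/&lt;provider&gt;/... shape described above:

```typescript
// Hypothetical sketch of provider-aware URL rewriting. Known provider
// hosts are redirected to the vault; everything else passes through.
const PROVIDER_HOSTS: Record<string, string> = {
  "api.openai.com": "openai",
  "api.anthropic.com": "anthropic",
};

function rewriteUrl(input: string): string {
  const url = new URL(input);
  const provider = PROVIDER_HOSTS[url.hostname];
  if (!provider) return input; // non-provider traffic is untouched

  // e.g. https://api.openai.com/v1/chat/completions
  //   -> https://vault.keystore.com/v1/openai/chat/completions
  url.hostname = "vault.keystore.com";
  url.pathname = url.pathname.replace(/^\/v1\//, `/v1/${provider}/`);
  return url.toString();
}
```

Keying the rewrite on the hostname is what lets requests to your own APIs and databases pass through untouched.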
Full example
import Keystore from "@keystore/sdk";
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
async function main() {
  const ks = new Keystore({ agentToken: process.env.KS_TOKEN! });
  ks.interceptAll();

  const openai = new OpenAI();
  const claude = new Anthropic();

  // Ask both models the same question
  const question = "What is the most important unsolved problem in physics?";
  const [gpt, anthropic] = await Promise.all([
    openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: question }],
    }),
    claude.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 512,
      messages: [{ role: "user", content: question }],
    }),
  ]);

  console.log("GPT-4o:", gpt.choices[0].message.content);
  console.log("Claude:", anthropic.content[0].text);

  ks.restore();
}

main();
Next steps
- Add budget controls to limit spend per agent
- Try wrap() for per-client control instead of global fetch patching
- Explore setupEnv() for framework integrations like LangChain