Build a Multi-Provider Agent in 5 Minutes with Keystore
If you are building an AI agent in 2026, you are probably using more than one provider. OpenAI for fast completions, Anthropic for complex reasoning, maybe Google Gemini when you need high throughput. The economics vary wildly: GPT-5.2 costs $1.75 per million input tokens, Claude Opus 4.6 costs $5 per million input tokens, and the rate limits are just as divergent --- Anthropic allows roughly 5x fewer requests than OpenAI at equivalent spend levels.
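To see how quickly those per-token prices diverge, here is a toy TypeScript calculation using the input prices quoted above. It is a sketch for intuition only --- check each provider's current pricing page before relying on these figures.

```typescript
// Input-token prices quoted in this article (USD per million tokens).
// These are the article's figures, not live pricing.
const pricePerMillionInput: Record<string, number> = {
  "gpt-5.2": 1.75,
  "claude-opus-4.6": 5.0,
};

// Cost of sending the same input workload to each model.
function inputCost(model: string, tokens: number): number {
  return (pricePerMillionInput[model] / 1_000_000) * tokens;
}

// A 200k-token batch costs ~2.9x more on Opus than on GPT-5.2.
for (const model of Object.keys(pricePerMillionInput)) {
  console.log(`${model}: $${inputCost(model, 200_000).toFixed(2)} for 200k input tokens`);
}
```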
Every one of these providers requires its own API key. If you are using LangChain, that means OPENAI_API_KEY, ANTHROPIC_API_KEY, and GOOGLE_API_KEY in your .env file. CrewAI is the same. AutoGPT is the same. Every agent framework treats credentials as environment variables that your application loads at startup, with no encryption, no rotation mechanism, and no audit trail.
GitGuardian found 23.8 million secrets leaked on GitHub in 2024, including an average of 46,441 exposed OpenAI API keys per month. This tutorial shows a different approach: one Keystore token that provides access to all three providers, with encryption, budget controls, and per-request logging baked in.
Prerequisites
- Node.js 18+ installed
- A Keystore account (keystore.dev)
- API keys for OpenAI, Anthropic, and Google Gemini
Minute 1: Install and Initialize
mkdir multi-agent && cd multi-agent
npm init -y
npm install @keystore/sdk dotenv
npm install -D tsx
npm install -g @keystore/cli
npx @keystore/cli login
Minute 2: Add Your Providers
Add all three providers. Each key is encrypted with AES-256-GCM before storage --- your plaintext keys never touch the database.
ks provider add openai --key sk-your-openai-key
ks provider add anthropic --key sk-ant-your-anthropic-key
ks provider add google-gemini --key your-gemini-key
This is the last time you handle raw API keys. From here on, everything goes through the proxy.
Minute 3: Create a Budget-Controlled Token
Here is where Keystore diverges from the .env file approach. You are not just storing credentials --- you are defining an access policy:
ks token create \
--name multi-provider-agent \
--providers openai,anthropic,google-gemini \
--budget 30 \
--budget-period daily \
  --rate-limit 120/minute
The daily budget matters. In February 2026, a three-person startup saw their Gemini API bill spike from $180/month to $82,314 in 48 hours after their key was stolen --- a 46,000% increase. Google has not forgiven the charges. A $30 daily budget would have capped that damage at $30.
Save the token:
echo "KS_TOKEN=ks_..." > .env
Minute 4: Write a Multi-Provider Agent
This agent routes tasks to the best provider based on complexity and cost. It uses OpenAI for fast, cheap completions, Anthropic for tasks requiring deeper reasoning, and Google Gemini for high-throughput bulk work. Save the following as agent.ts:
import { Keystore } from "@keystore/sdk";
import "dotenv/config";
const ks = new Keystore({ token: process.env.KS_TOKEN! });
type Provider = "openai" | "anthropic" | "google-gemini";
interface TaskResult {
provider: Provider;
response: string;
estimatedCost: string;
}
async function routeTask(
task: string,
complexity: "low" | "high" | "throughput"
): Promise<TaskResult> {
if (complexity === "low") {
// GPT-5.2: $1.75/M input, $14/M output — best for simple tasks
const res = await ks.proxy("openai", {
path: "/v1/chat/completions",
method: "POST",
body: {
model: "gpt-5.2",
messages: [{ role: "user", content: task }],
max_tokens: 500,
},
});
return {
provider: "openai",
response: res.choices[0].message.content,
estimatedCost: "~$0.008",
};
}
if (complexity === "high") {
// Claude Opus 4.6: $5/M input, $25/M output — best for reasoning
const res = await ks.proxy("anthropic", {
path: "/v1/messages",
method: "POST",
body: {
model: "claude-opus-4-6-20260219",
max_tokens: 1024,
messages: [{ role: "user", content: task }],
},
});
return {
provider: "anthropic",
response: res.content[0].text,
estimatedCost: "~$0.03",
};
}
// Google Gemini: 4M TPM with no tier system — best for bulk work
const res = await ks.proxy("google-gemini", {
path: "/v1/models/gemini-2.5-pro:generateContent",
method: "POST",
body: {
contents: [{ parts: [{ text: task }] }],
},
});
return {
provider: "google-gemini",
response: res.candidates[0].content.parts[0].text,
estimatedCost: "~$0.005",
};
}
async function main() {
console.log("Running multi-provider agent...\n");
// Simple classification — send to OpenAI (cheap, fast)
const classify = await routeTask(
"Classify this support ticket as billing, technical, or general: " +
"'My invoice shows a charge I don't recognize'",
"low"
);
console.log(`[${classify.provider}] ${classify.estimatedCost}`);
console.log(`Result: ${classify.response}\n`);
// Complex analysis — send to Anthropic (better reasoning)
const analyze = await routeTask(
"Analyze the security implications of storing API keys in " +
"environment variables versus a credential vault with proxy-based " +
"decryption. Consider the 23.8M secrets leaked on GitHub in 2024.",
"high"
);
console.log(`[${analyze.provider}] ${analyze.estimatedCost}`);
console.log(`Result: ${analyze.response.substring(0, 200)}...\n`);
// Bulk processing — send to Gemini (highest throughput)
const summarize = await routeTask(
"Summarize the key points of OWASP's recommendations for " +
"AI agent credential management in three bullet points.",
"throughput"
);
console.log(`[${summarize.provider}] ${summarize.estimatedCost}`);
console.log(`Result: ${summarize.response}\n`);
}
main();
No API keys in this code. No OPENAI_API_KEY, no ANTHROPIC_API_KEY, no GOOGLE_API_KEY. One ks_ token accesses all three providers through the proxy.
Minute 5: Run and Monitor
npx tsx agent.ts
Check what happened:
ks logs --token multi-provider-agent --last 10
TIME                 PROVIDER       ENDPOINT                              STATUS  COST
2026-02-19 15:01:02  openai         /v1/chat/completions                  200     $0.008
2026-02-19 15:01:04  anthropic      /v1/messages                          200     $0.031
2026-02-19 15:01:05  google-gemini  /v1/models/gemini-2.5-pro:generate..  200     $0.005
Three providers. Three requests. One token. Full audit trail. Total spend: $0.044 against your $30 daily budget.
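If you want to sanity-check that total locally, a small sketch can sum the cost column from log rows like the ones above. This is illustrative only --- the proxy's own accounting is what actually enforces the budget.

```typescript
// Audit-log rows in the format printed by `ks logs` above.
const rows = [
  "2026-02-19 15:01:02  openai         /v1/chat/completions                  200  $0.008",
  "2026-02-19 15:01:04  anthropic      /v1/messages                          200  $0.031",
  "2026-02-19 15:01:05  google-gemini  /v1/models/gemini-2.5-pro:generate..  200  $0.005",
];

// Sum the trailing cost column from each row.
function totalSpend(logRows: string[]): number {
  return logRows.reduce((sum, row) => {
    const match = row.match(/\$([0-9.]+)\s*$/); // cost is the last column
    return sum + (match ? parseFloat(match[1]) : 0);
  }, 0);
}

console.log(`Total: $${totalSpend(rows).toFixed(3)}`); // $0.044 of the $30 daily budget
```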
Why Budget Controls Matter at These Price Points
The pricing differences across providers make budget controls essential, not optional. Consider the rate limits:
- OpenAI Tier 1 allows 1,000 requests per minute and 500,000 tokens per minute.
- Anthropic provides roughly 5x fewer requests than OpenAI at equivalent spend levels.
- Google Gemini offers 4 million tokens per minute with no tier system --- the highest immediately-available throughput of any major provider.
An agent that hits Gemini's throughput ceiling at its full token rate will burn through budget far faster than one constrained by Anthropic's rate limits. Without per-token budget enforcement, you are relying on provider-side billing alerts that arrive after the damage is done.
According to recent industry surveys, 73% of development teams lack real-time cost tracking for autonomous agents. They find out what their agents spent when the invoice arrives. With Keystore, the budget is enforced at the proxy level --- the agent is cut off before it exceeds the limit, not notified after the fact.
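Conceptually, a proxy-level budget is just a pre-flight check: refuse to forward a request once the token's daily spend would cross its cap. The sketch below is a hypothetical illustration of that logic --- the names and structure are invented for this example, not Keystore's actual implementation.

```typescript
// Hypothetical proxy-side budget gate. All names here are illustrative.
interface TokenPolicy {
  dailyBudgetUsd: number;
  spentTodayUsd: number;
}

class BudgetExceededError extends Error {}

function authorize(policy: TokenPolicy, estimatedCostUsd: number): void {
  if (policy.spentTodayUsd + estimatedCostUsd > policy.dailyBudgetUsd) {
    // The agent is cut off *before* the upstream call, not billed after it.
    throw new BudgetExceededError(
      `request blocked: $${policy.spentTodayUsd.toFixed(2)} spent of $${policy.dailyBudgetUsd} daily budget`
    );
  }
  policy.spentTodayUsd += estimatedCostUsd;
}

const policy: TokenPolicy = { dailyBudgetUsd: 30, spentTodayUsd: 29.99 };
authorize(policy, 0.005); // fine: total stays under $30
try {
  authorize(policy, 0.01); // would cross $30 --- rejected
} catch (e) {
  console.log((e as Error).message);
}
```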
What You Built
In five minutes, you created a multi-provider agent with:
- Three provider integrations through a single ks_ token
- Cost-optimized routing that matches task complexity to provider pricing
- A $30 daily budget enforced at the proxy, not just alerting
- Rate limiting at 120 requests per minute to prevent runaway behavior
- Full audit logging showing provider, endpoint, status, and cost per request
- Instant revocation --- one command to cut off all provider access
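The 120 requests/minute cap behaves like a sliding-window rate limiter. The sketch below shows the concept locally; it is not Keystore's code --- the real limit is enforced at the proxy, so you never implement it yourself.

```typescript
// Generic sliding-window limiter illustrating a 120 req/min cap.
class SlidingWindowLimiter {
  private timestamps: number[] = [];
  constructor(private maxRequests: number, private windowMs: number) {}

  allow(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxRequests) return false;
    this.timestamps.push(now);
    return true;
  }
}

// A burst of 150 requests in the same instant: only 120 get through.
const limiter = new SlidingWindowLimiter(120, 60_000);
let allowed = 0;
for (let i = 0; i < 150; i++) if (limiter.allow(0)) allowed++;
console.log(`${allowed} of 150 burst requests allowed`);
```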
The agent framework landscape --- LangChain, CrewAI, AutoGPT --- gives you powerful orchestration tools. Keystore gives you the credential security and financial controls that those frameworks do not provide. They handle the "what should the agent do" problem. Keystore handles the "what is the agent allowed to spend" problem.