Introducing Keystore: A Key Vault for AI Agents
GitGuardian's 2024 State of Secrets Sprawl report counted 23.8 million secrets leaked on public GitHub repositories --- a 25% year-over-year increase. Among those, 46,441 OpenAI API keys were exposed every month, a 1,212x increase compared to 2022. And here is the part that should keep you up at night: 70% of secrets leaked in 2022 are still valid today.
We are entering an era where non-human identities outnumber human ones 45:1 to 100:1 in enterprise environments, and AI agents are the fastest-growing category of non-human identity. Every major agent framework --- LangChain, CrewAI, AutoGPT, OpenAI's Agents SDK --- manages credentials through environment variables and .env files with zero encryption, zero rotation, and zero audit trail.
Today, we are launching Keystore: a credential vault and proxy purpose-built for AI agents.
The Gap No One Has Filled
The tooling landscape for AI infrastructure has evolved rapidly, but credential security for agents has been left behind. Here is what exists today:
Secret managers like HashiCorp Vault and AWS Secrets Manager were designed for traditional infrastructure. Vault recently shipped a dynamic secrets plugin for OpenAI, which is a step in the right direction, but the operational overhead of running Vault --- unsealing, HA configuration, audit backend management --- is prohibitive for teams that just want to give an agent safe access to GPT-4.
AI gateways like LiteLLM and Portkey have emerged as proxy layers for LLM traffic. LiteLLM offers virtual keys and model routing. Portkey adds an encrypted vault for credentials. These are useful tools, but they are gateways first and secret managers second. They do not enforce per-agent budgets, they do not provide kill switches scoped to individual agents, and they were not designed to be the single source of truth for every credential an agent touches.
Agentic IAM is a new category. Aembit is building identity and access management for non-human workloads, with a "secretless access" model. 1Password recently partnered with Browserbase to bring agentic AI features to their platform. Multifactor, a YC F25 company, raised a $15M seed round for "checkpoint links" that enable account-level sharing.
All of these address pieces of the problem. None of them combine a credential vault, a proxy that injects credentials at request time, per-agent budget enforcement, rate limiting, and instant revocation into a single product. That is the gap Keystore fills.
How Keystore Works
The core principle is simple: the agent never sees real credentials.
You store your API keys in the Keystore vault, where they are encrypted with AES-256-GCM, the same AEAD cipher used by TLS 1.3, capable of 6.4 GB/s of throughput with hardware acceleration. Each credential gets a unique initialization vector, a hard requirement for GCM, and is decrypted only inside the proxy at request time.
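As a rough illustration of the sealing step, here is a minimal sketch using Node's built-in crypto module. The function names and the envelope shape are hypothetical, not Keystore's actual internals:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Encrypt a plaintext credential under a 32-byte vault key.
// A fresh 12-byte IV per credential is what makes GCM safe to reuse a key with.
function encryptCredential(vaultKey: Buffer, plaintext: string) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", vaultKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// Decryption happens only inside the proxy, at request time.
function decryptCredential(
  vaultKey: Buffer,
  sealed: { iv: Buffer; ciphertext: Buffer; tag: Buffer }
): string {
  const decipher = createDecipheriv("aes-256-gcm", vaultKey, sealed.iv);
  decipher.setAuthTag(sealed.tag); // a tampered ciphertext makes final() throw
  return Buffer.concat([decipher.update(sealed.ciphertext), decipher.final()]).toString("utf8");
}
```

The authentication tag is the point of using an AEAD mode: any tampering with the stored ciphertext makes decryption throw rather than silently return garbage.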
Your agent receives a scoped ks_ token. When the agent needs to call an API, the request flows through the Keystore proxy:
import { Keystore } from "@keystore/sdk";
const ks = new Keystore({ token: "ks_abc123..." });
// The agent never sees the real OpenAI key
const response = await ks.proxy("openai", {
path: "/v1/chat/completions",
method: "POST",
body: {
model: "gpt-4o",
messages: [{ role: "user", content: "Hello, world!" }],
},
});

The proxy validates the token, checks the agent's budget and rate limits, decrypts the real credential, forwards the authenticated request, and returns the response. The real key never touches the agent's runtime.
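The validation steps above can be sketched as a single authorization check. Everything here, the record shape, the field names, the in-memory map, is illustrative rather than Keystore's real data model:

```typescript
type TokenRecord = {
  revoked: boolean;            // kill switch
  budgetRemainingUsd: number;  // per-agent budget
  requestsThisMinute: number;
  rpmLimit: number;            // per-token rate limit
};

// The checks a proxy like this runs before decrypting and forwarding:
// token validity, revocation, budget, then rate limit.
function authorize(
  tokens: Map<string, TokenRecord>,
  tokenId: string,
  estCostUsd: number
): { ok: boolean; reason?: string } {
  const rec = tokens.get(tokenId);
  if (!rec) return { ok: false, reason: "unknown token" };
  if (rec.revoked) return { ok: false, reason: "token revoked" };
  if (rec.budgetRemainingUsd < estCostUsd) return { ok: false, reason: "budget exhausted" };
  if (rec.requestsThisMinute >= rec.rpmLimit) return { ok: false, reason: "rate limited" };
  rec.requestsThisMinute += 1;
  rec.budgetRemainingUsd -= estCostUsd;
  return { ok: true }; // only now is the real credential decrypted and injected
}
```

Because every check happens before decryption, a denied request never brings the real key into memory at all.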
This is the same proxy-based credential injection pattern used by AWS SigV4 Proxy and HashiCorp's Vault Agent Injector, adapted for the specific needs of AI agents.
What You Get
Per-agent budgets. The average cost of a data breach is $4.88 million according to IBM's 2024 Cost of a Data Breach Report. But the more common AI-specific financial risk is runaway spend. A single recursive agent loop ran for 11 days unnoticed and cost $47,000. A contract analysis agent made 47,000 API calls in 6 hours and racked up $1,410. Keystore lets you set dollar-amount budgets per agent token. When the budget is exhausted, the token stops working.
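Budget enforcement amounts to metering each request's cost and refusing once the ledger is empty. A toy sketch, where the price table uses OpenAI's published gpt-4o per-million-token rates and the function names are hypothetical:

```typescript
// Illustrative price table: USD per million tokens.
const PRICE_PER_M_TOKENS: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
};

// Convert a request's token usage into dollars.
function requestCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICE_PER_M_TOKENS[model];
  if (!p) throw new Error(`no price configured for ${model}`);
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// Deduct from a per-agent budget; once exhausted, the token stops working.
function chargeAgent(budget: { remainingUsd: number }, costUsd: number): boolean {
  if (budget.remainingUsd < costUsd) return false;
  budget.remainingUsd -= costUsd;
  return true;
}
```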
Rate limits. Every provider enforces its own ceilings, and they vary widely by tier and model: OpenAI caps requests per minute by usage tier, while Anthropic and Google apply their own tiered request and token-per-minute limits. Keystore lets you set your own rate limits per token, so a misbehaving agent hits your guardrail before it hits the provider's.
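Per-token rate limiting is commonly implemented as a token bucket. Here is a minimal, illustrative version, not Keystore's actual implementation:

```typescript
// One bucket per agent token: capacity bounds bursts,
// refillPerSec sets the sustained request rate.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private refillPerSec: number, now = Date.now()) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the request may proceed, false if it should be rejected.
  tryAcquire(now = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}
```

A bucket with a small capacity and a modest refill rate lets normal agents burst briefly while cutting off a runaway loop within seconds.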
Kill switches. Revoke any agent token instantly. No need to rotate the underlying API key --- just cut off the agent.
Audit logs. Every proxied request is logged with the token ID, provider, endpoint, timestamp, and cost.
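An audit entry needs exactly those five fields. A minimal sketch of what the proxy might append on each request, with types and names that are illustrative only:

```typescript
type AuditEntry = {
  tokenId: string;
  provider: string;
  endpoint: string;
  timestamp: string; // ISO 8601
  costUsd: number;
};

const auditLog: AuditEntry[] = [];

// Stamp and append one entry per proxied request.
function recordRequest(entry: Omit<AuditEntry, "timestamp">): AuditEntry {
  const full: AuditEntry = { ...entry, timestamp: new Date().toISOString() };
  auditLog.push(full);
  return full;
}
```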
Two credential models. Bring Your Own Keys if you have existing provider accounts, or use the Keystore Marketplace for instant access without individual sign-ups.
Why Now
The OWASP Top 10 for Agentic Applications, published in December 2025, explicitly recommends short-lived credentials, just-in-time access, and treating agents as untrusted third parties. The industry is converging on the understanding that agents need a different security model than traditional services.
Meanwhile, agent capabilities are accelerating. GPT-5.2 is priced at $1.75 per million input tokens and $14 per million output tokens. Costs are dropping, adoption is rising, and the number of agents in production is growing exponentially. The credential management problem is only going to get worse.
Keystore is the infrastructure to make it manageable. One vault. One proxy. One token per agent. Full control.
Sign up at keystore.dev to get started.