# Framework Integration
Many AI frameworks -- LangChain, CrewAI, and others -- initialize provider clients by reading API keys from environment variables. Keystore's setupEnv method writes the correct proxy URLs and agent token into process.env, so these frameworks work without modification.
## How It Works
When you call setupEnv, Keystore sets provider-specific environment variables that point at the proxy instead of the real provider API:
| Provider | Variables Set |
|---|---|
| OpenAI | OPENAI_BASE_URL, OPENAI_API_KEY |
| Anthropic | ANTHROPIC_BASE_URL, ANTHROPIC_API_KEY |
| Neon | DATABASE_URL |
| Resend | RESEND_BASE_URL, RESEND_API_KEY |
| Vercel | VERCEL_API_URL, VERCEL_TOKEN |
| S3 | AWS_ENDPOINT_URL_S3 |
Frameworks that read these standard variables will automatically route requests through the Keystore proxy. The proxy resolves the real credentials from the vault.
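The effect on the environment can be sketched as follows. This is illustrative only: the proxy origin shown is a placeholder, not a real Keystore endpoint, and the helper below simulates what setupEnv does rather than calling the SDK.

```typescript
// Illustrative only: roughly what setupEnv(["openai"]) leaves in process.env.
// The proxy origin is a placeholder, not a real Keystore endpoint.
function simulateSetupEnv(agentToken: string): void {
  process.env.OPENAI_BASE_URL = "https://proxy.keystore.example/openai/v1";
  // The agent token stands in for the real key; the proxy
  // substitutes the vaulted credential server-side.
  process.env.OPENAI_API_KEY = agentToken;
}

simulateSetupEnv("ks_abc123...");
// An OpenAI client constructed after this point reads these variables
// and talks to the proxy instead of api.openai.com.
```

Because the agent token takes the place of the API key, the real provider credential never enters the agent's process.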
## LangChain
LangChain's ChatOpenAI and ChatAnthropic classes read OPENAI_API_KEY / OPENAI_BASE_URL and ANTHROPIC_API_KEY from the environment.
**TypeScript**

```typescript
import { Keystore } from "@keystore/sdk";
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

const ks = new Keystore({ agentToken: "ks_abc123..." });
ks.setupEnv(["openai", "anthropic"]);

// LangChain reads OPENAI_BASE_URL and OPENAI_API_KEY automatically.
const gpt = new ChatOpenAI({ model: "gpt-4o" });
const response = await gpt.invoke("What is the capital of France?");
console.log(response.content);

// Same for Anthropic.
const claude = new ChatAnthropic({ model: "claude-sonnet-4-20250514" });
const result = await claude.invoke("Explain monads in one sentence.");
console.log(result.content);
```

**Python**
```python
from envclaw import Keystore, setup_env
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

ks = Keystore(agent_token="ks_abc123...")
setup_env(ks, providers=["openai", "anthropic"])

# LangChain reads the env vars automatically.
gpt = ChatOpenAI(model="gpt-4o")
response = gpt.invoke("What is the capital of France?")
print(response.content)

claude = ChatAnthropic(model="claude-sonnet-4-20250514")
result = claude.invoke("Explain monads in one sentence.")
print(result.content)
```

### LangChain Chains and Agents
setupEnv works with any LangChain component that uses the underlying chat models:
```typescript
import { Keystore } from "@keystore/sdk";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const ks = new Keystore({ agentToken: "ks_abc123..." });
ks.setupEnv(["openai"]);

const prompt = PromptTemplate.fromTemplate("Summarize: {text}");
const model = new ChatOpenAI({ model: "gpt-4o" });
const chain = prompt.pipe(model).pipe(new StringOutputParser());
const summary = await chain.invoke({ text: "Keystore is a key vault for AI agents..." });
```

## CrewAI
CrewAI reads OPENAI_API_KEY for its default LLM. Set up the environment before creating your crew:
```python
from envclaw import Keystore, setup_env
from crewai import Agent, Task, Crew

ks = Keystore(agent_token="ks_abc123...")
setup_env(ks, providers=["openai"])

researcher = Agent(
    role="Researcher",
    goal="Find information about a topic",
    backstory="You are a research assistant.",
    llm="gpt-4o",
)

task = Task(
    description="Research the history of cryptographic key management.",
    expected_output="A brief summary of key management history.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```

To use Anthropic models with CrewAI, include the anthropic provider:
```python
setup_env(ks, providers=["openai", "anthropic"])

agent = Agent(
    role="Writer",
    goal="Write concise technical content",
    backstory="You are a technical writer.",
    llm="claude-sonnet-4-20250514",
)
```

## Alternative: interceptAll
If a framework does not read base URL environment variables (only API keys), use interceptAll instead. This patches globalThis.fetch at the network level, so all outbound requests to provider domains are rerouted.
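Conceptually, such an interceptor can be sketched as below. This is not the SDK's actual implementation; the proxy origin and the x-agent-token header name are assumptions made for illustration.

```typescript
// Map of provider hostnames to proxy bases. The proxy origin and header
// name below are illustrative assumptions, not Keystore's real values.
const PROVIDER_HOSTS: Record<string, string> = {
  "api.openai.com": "https://proxy.keystore.example/openai",
  "api.anthropic.com": "https://proxy.keystore.example/anthropic",
};

// Rewrite a provider URL so it targets the proxy; pass other URLs through.
function rewriteUrl(url: string): string {
  const u = new URL(url);
  const proxyBase = PROVIDER_HOSTS[u.hostname];
  if (!proxyBase) return url; // not a provider domain; leave untouched
  return proxyBase + u.pathname + u.search;
}

// Patch globalThis.fetch so every matching request is rerouted and
// carries the agent token for the proxy to resolve.
function interceptAllSketch(agentToken: string): void {
  const realFetch = globalThis.fetch;
  globalThis.fetch = (input: RequestInfo | URL, init?: RequestInit) => {
    const url =
      typeof input === "string" ? input :
      input instanceof URL ? input.toString() :
      input.url;
    const target = rewriteUrl(url);
    const headers = new Headers(init?.headers);
    if (target !== url) headers.set("x-agent-token", agentToken);
    return realFetch(target, { ...init, headers });
  };
}
```

Because the patch sits below every HTTP client built on fetch, the framework needs no awareness of the proxy at all.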
**TypeScript**

```typescript
import { Keystore } from "@keystore/sdk";

const ks = new Keystore({ agentToken: "ks_abc123..." });
ks.interceptAll(["openai", "anthropic"]);

// Any framework or SDK that makes fetch requests to OpenAI or Anthropic
// will be transparently routed through the proxy.
```

**Python**
```python
from envclaw import Keystore, intercept_all

ks = Keystore(agent_token="ks_abc123...")
intercept_all(ks, providers=["openai", "anthropic"])

# All HTTP requests to provider domains are now routed through Keystore.
```

## Multi-Provider Agents
Agents often need access to multiple providers. Pass all required providers to setupEnv:
```typescript
import { Keystore } from "@keystore/sdk";

const ks = new Keystore({ agentToken: "ks_abc123..." });
ks.setupEnv(["openai", "anthropic", "neon"]);

// Now available in process.env:
//   OPENAI_BASE_URL, OPENAI_API_KEY
//   ANTHROPIC_BASE_URL, ANTHROPIC_API_KEY
//   DATABASE_URL
```

This is particularly useful for agents that use an LLM for reasoning and a database for persistence.
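For instance, the persistence side of such an agent can consume the proxied DATABASE_URL directly. The sketch below uses a made-up connection string; most Postgres clients accept the URL as-is, and parsing it with the standard URL class is just one way to see what it carries.

```typescript
// Split a (hypothetical) proxied connection string into driver-style options.
// The connection string here is invented for illustration.
function parseDatabaseUrl(databaseUrl: string) {
  const u = new URL(databaseUrl);
  return {
    host: u.hostname,
    port: u.port ? Number(u.port) : 5432, // Postgres default port
    user: decodeURIComponent(u.username),
    database: u.pathname.replace(/^\//, ""),
  };
}

const cfg = parseDatabaseUrl("postgres://agent@db.proxy.example:5432/agents_db");
```

The agent's code never sees the real database password: the proxied URL carries only what the proxy needs to resolve the vaulted credential.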
## When to Use Which Approach
| Approach | Best For |
|---|---|
| setupEnv | Frameworks that read env vars (LangChain, CrewAI, most ORMs) |
| interceptAll | Frameworks that only read API keys, or when you cannot control client initialization |
| wrap | Direct SDK usage where you instantiate the client yourself |