
Framework Integration

Use Keystore with LangChain, CrewAI, and other AI frameworks that read provider credentials from environment variables.



Many AI frameworks -- LangChain, CrewAI, and others -- initialize provider clients by reading API keys from environment variables. Keystore's setupEnv method writes the correct proxy URLs and agent token into process.env, so these frameworks work without modification.

How It Works

When you call setupEnv, Keystore sets provider-specific environment variables that point at the proxy instead of the real provider API:

Provider     Variables Set
OpenAI       OPENAI_BASE_URL, OPENAI_API_KEY
Anthropic    ANTHROPIC_BASE_URL, ANTHROPIC_API_KEY
Neon         DATABASE_URL
Resend       RESEND_BASE_URL, RESEND_API_KEY
Vercel       VERCEL_API_URL, VERCEL_TOKEN
S3           AWS_ENDPOINT_URL_S3

Frameworks that read these standard variables will automatically route requests through the Keystore proxy. The proxy resolves the real credentials from the vault.
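Conceptually, calling setupEnv amounts to assignments like the following. This is an illustrative sketch, not the SDK's implementation: the proxy origin is a placeholder, and the real values come from your Keystore deployment.

```typescript
// Illustrative sketch of the effect of ks.setupEnv(["openai", "anthropic"]).
// The proxy origin below is a placeholder, not a real Keystore endpoint.
const agentToken = "ks_abc123...";
const proxyOrigin = "https://proxy.keystore.example"; // hypothetical

process.env.OPENAI_BASE_URL = `${proxyOrigin}/openai/v1`;
process.env.OPENAI_API_KEY = agentToken;
process.env.ANTHROPIC_BASE_URL = `${proxyOrigin}/anthropic`;
process.env.ANTHROPIC_API_KEY = agentToken;

// A framework that reads these variables now sends requests to the proxy,
// which swaps the agent token for the real provider key from the vault.
console.log(process.env.OPENAI_BASE_URL); // https://proxy.keystore.example/openai/v1
```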

LangChain

LangChain's ChatOpenAI and ChatAnthropic classes read OPENAI_API_KEY / OPENAI_BASE_URL and ANTHROPIC_API_KEY from the environment.

TypeScript

typescript
import { Keystore } from "@keystore/sdk";
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

const ks = new Keystore({ agentToken: "ks_abc123..." });
ks.setupEnv(["openai", "anthropic"]);

// LangChain reads OPENAI_BASE_URL and OPENAI_API_KEY automatically.
const gpt = new ChatOpenAI({ model: "gpt-4o" });
const response = await gpt.invoke("What is the capital of France?");
console.log(response.content);

// Same for Anthropic.
const claude = new ChatAnthropic({ model: "claude-sonnet-4-20250514" });
const result = await claude.invoke("Explain monads in one sentence.");
console.log(result.content);

Python

python
from envclaw import Keystore, setup_env
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

ks = Keystore(agent_token="ks_abc123...")
setup_env(ks, providers=["openai", "anthropic"])

# LangChain reads the env vars automatically.
gpt = ChatOpenAI(model="gpt-4o")
response = gpt.invoke("What is the capital of France?")
print(response.content)

claude = ChatAnthropic(model="claude-sonnet-4-20250514")
result = claude.invoke("Explain monads in one sentence.")
print(result.content)

LangChain Chains and Agents

setupEnv works with any LangChain component that uses the underlying chat models:

typescript
import { Keystore } from "@keystore/sdk";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const ks = new Keystore({ agentToken: "ks_abc123..." });
ks.setupEnv(["openai"]);

const prompt = PromptTemplate.fromTemplate("Summarize: {text}");
const model = new ChatOpenAI({ model: "gpt-4o" });

const chain = prompt.pipe(model).pipe(new StringOutputParser());
const summary = await chain.invoke({ text: "Keystore is a key vault for AI agents..." });
console.log(summary);

CrewAI

CrewAI reads OPENAI_API_KEY for its default LLM. Set up the environment before creating your crew:

python
from envclaw import Keystore, setup_env
from crewai import Agent, Task, Crew

ks = Keystore(agent_token="ks_abc123...")
setup_env(ks, providers=["openai"])

researcher = Agent(
    role="Researcher",
    goal="Find information about a topic",
    backstory="You are a research assistant.",
    llm="gpt-4o",
)

task = Task(
    description="Research the history of cryptographic key management.",
    expected_output="A brief summary of key management history.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)

To use Anthropic models with CrewAI, include the anthropic provider:

python
setup_env(ks, providers=["openai", "anthropic"])

agent = Agent(
    role="Writer",
    goal="Write concise technical content",
    backstory="You are a technical writer.",
    llm="claude-sonnet-4-20250514",
)

Alternative: interceptAll

If a framework does not read base URL environment variables (only API keys), use interceptAll instead. It patches globalThis.fetch, so every outbound request to a known provider domain is rerouted through the proxy regardless of how the client was constructed.

TypeScript

typescript
import { Keystore } from "@keystore/sdk";

const ks = new Keystore({ agentToken: "ks_abc123..." });
ks.interceptAll(["openai", "anthropic"]);

// Any framework or SDK that makes fetch requests to OpenAI or Anthropic
// will be transparently routed through the proxy.

Python

python
from envclaw import Keystore, intercept_all

ks = Keystore(agent_token="ks_abc123...")
intercept_all(ks, providers=["openai", "anthropic"])

# All HTTP requests to provider domains are now routed through Keystore.
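Under the hood, interception amounts to wrapping the global fetch. The sketch below is a simplified illustration: the provider host list, proxy origin, and rerouted URL scheme are assumptions, not Keystore's actual implementation.

```typescript
// Simplified sketch of fetch interception. Host list, proxy origin, and
// URL scheme are illustrative assumptions, not Keystore internals.
const PROVIDER_HOSTS = new Set(["api.openai.com", "api.anthropic.com"]);
const PROXY_ORIGIN = "https://proxy.keystore.example"; // hypothetical

function reroute(raw: string): string {
  const url = new URL(raw);
  if (!PROVIDER_HOSTS.has(url.hostname)) return raw; // leave other traffic alone
  // Keep the path and query; prefix with the original host so the proxy
  // knows which provider's credentials to resolve from the vault.
  return `${PROXY_ORIGIN}/${url.hostname}${url.pathname}${url.search}`;
}

// Patch the global fetch so every caller, framework or SDK, is rerouted.
const originalFetch = globalThis.fetch;
globalThis.fetch = ((input: any, init?: any) =>
  originalFetch(typeof input === "string" ? reroute(input) : input, init)) as typeof fetch;

console.log(reroute("https://api.openai.com/v1/chat/completions"));
// https://proxy.keystore.example/api.openai.com/v1/chat/completions
```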

Multi-Provider Agents

Agents often need access to multiple providers. Pass all required providers to setupEnv:

typescript
import { Keystore } from "@keystore/sdk";

const ks = new Keystore({ agentToken: "ks_abc123..." });
ks.setupEnv(["openai", "anthropic", "neon"]);

// Now available in process.env:
//   OPENAI_BASE_URL, OPENAI_API_KEY
//   ANTHROPIC_BASE_URL, ANTHROPIC_API_KEY
//   DATABASE_URL

This is particularly useful for agents that use an LLM for reasoning and a database for persistence.
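On the database side, a client needs nothing but DATABASE_URL. The connection string below is a stand-in for what setupEnv would write (the real value points at the Keystore proxy):

```typescript
// Stand-in for the DATABASE_URL that ks.setupEnv(["neon"]) would write;
// the hostname here is a placeholder for the Keystore proxy.
const databaseUrl =
  process.env.DATABASE_URL ?? "postgres://agent@proxy.keystore.example:5432/appdb";

// Any Postgres client or ORM accepts this URL directly, e.g.
//   new pg.Pool({ connectionString: databaseUrl })
// Parsing it shows the pieces such a client would extract:
const db = new URL(databaseUrl);
console.log(db.hostname, db.port, db.pathname);
```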

When to Use Which Approach

Approach        Best For
setupEnv        Frameworks that read env vars (LangChain, CrewAI, most ORMs)
interceptAll    Frameworks that only read API keys, or when you cannot control client initialization
wrap            Direct SDK usage where you instantiate the client yourself