Framework Integration: LangChain RAG Pipeline with setupEnv
Frameworks like LangChain, CrewAI, and AutoGPT read API keys from environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.). setupEnv() rewrites these env vars to point at the Keystore vault — zero changes to your framework code.
What you'll build
A RAG (Retrieval-Augmented Generation) pipeline using LangChain with OpenAI embeddings and Anthropic Claude for generation. All credentials are resolved from the vault.
Prerequisites
- A Keystore account with an agent token
- OpenAI and Anthropic keys in your vault
- Node.js 18+
Setup
Install dependencies
```bash
npm install @keystore/sdk @langchain/openai @langchain/anthropic langchain
```

Call setupEnv before any framework code
```typescript
import Keystore, { Providers } from "@keystore/sdk";

const ks = new Keystore({ agentToken: process.env.KS_TOKEN! });

// Rewrites OPENAI_BASE_URL, OPENAI_API_KEY, ANTHROPIC_BASE_URL, etc.
ks.setupEnv([Providers.OpenAI, Providers.Anthropic]);
```

After this call, process.env.OPENAI_BASE_URL points to vault.keystore.com/v1/openai and process.env.OPENAI_API_KEY is set to your agent token. LangChain reads these automatically.
Call setupEnv() before importing or initializing any framework modules that read env vars at import time.
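The pitfall here is that some modules capture env vars the moment they are imported, not when a client is constructed. A minimal dependency-free sketch of that behavior (the `frameworkConfig` object below is a stand-in for such a module, not real LangChain code):

```typescript
// Pretend a real key was already in the environment.
process.env.OPENAI_API_KEY = "original-key";

// Simulates a framework module that reads env at import time:
// the value is captured once, when this line runs.
const frameworkConfig = { apiKey: process.env.OPENAI_API_KEY };

// Rewriting the env var afterwards (as setupEnv() would) does NOT
// update the value the module already captured.
process.env.OPENAI_API_KEY = "ks_agent-token";

console.log(frameworkConfig.apiKey);      // still "original-key"
console.log(process.env.OPENAI_API_KEY);  // "ks_agent-token"
```

This is why setupEnv() must run first: anything imported before it may have already latched onto the old values.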
Build the RAG pipeline
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "langchain/document";

// LangChain reads OPENAI_BASE_URL and OPENAI_API_KEY from env
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

// LangChain reads ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY from env
const llm = new ChatAnthropic({
  model: "claude-sonnet-4-20250514",
  maxTokens: 512,
});
```

Both clients now route through Keystore without any explicit configuration.
Index documents and query
```typescript
// Sample documents
const docs = [
  new Document({
    pageContent: "Keystore encrypts all credentials with AES-256-GCM at rest.",
    metadata: { source: "security-docs" },
  }),
  new Document({
    pageContent: "Agent tokens use the ks_ prefix and are stored as SHA-256 hashes.",
    metadata: { source: "security-docs" },
  }),
  new Document({
    pageContent: "interceptAll() patches globalThis.fetch to route requests through the vault.",
    metadata: { source: "sdk-docs" },
  }),
];

// Create vector store with OpenAI embeddings (routed through Keystore)
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);

// Retrieve relevant documents
const query = "How does Keystore handle encryption?";
const relevantDocs = await vectorStore.similaritySearch(query, 2);

// Generate answer with Claude (routed through Keystore)
const context = relevantDocs.map((d) => d.pageContent).join("\n");
const response = await llm.invoke([
  {
    role: "system",
    content: `Answer based on this context:\n${context}`,
  },
  { role: "user", content: query },
]);

console.log(response.content);
```

What setupEnv() actually sets
For each provider, setupEnv() writes specific environment variables:
| Provider | Variables set |
|---|---|
| openai | OPENAI_BASE_URL, OPENAI_API_KEY |
| anthropic | ANTHROPIC_BASE_URL, ANTHROPIC_API_KEY |
| resend | RESEND_BASE_URL, RESEND_API_KEY |
| neon | DATABASE_URL |
| s3 | AWS_ENDPOINT_URL_S3 |
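If you want a startup sanity check that the expected variables landed, a small helper can compare the table above against process.env. This is a hypothetical helper, not part of the SDK, and the REQUIRED map simply restates the table:

```typescript
// Hypothetical helper: map each provider to the env vars setupEnv() sets.
const REQUIRED: Record<string, string[]> = {
  openai: ["OPENAI_BASE_URL", "OPENAI_API_KEY"],
  anthropic: ["ANTHROPIC_BASE_URL", "ANTHROPIC_API_KEY"],
  resend: ["RESEND_BASE_URL", "RESEND_API_KEY"],
  neon: ["DATABASE_URL"],
  s3: ["AWS_ENDPOINT_URL_S3"],
};

// Return the names of any expected vars that are missing or empty.
function missingVars(providers: string[]): string[] {
  return providers
    .flatMap((p) => REQUIRED[p] ?? [])
    .filter((name) => !process.env[name]);
}

// After ks.setupEnv([Providers.OpenAI]), missingVars(["openai"]) would be [].
console.log(missingVars(["openai"]));
```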
setupEnv() overwrites existing env vars. If you need the original values for something else, read them before calling setupEnv().
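A minimal sketch of snapshotting a value before it is overwritten (the second assignment simulates what setupEnv() does; no SDK calls involved):

```typescript
// Pretend a real key was already in the environment.
process.env.OPENAI_API_KEY = "sk-original";

// Snapshot anything you still need BEFORE setupEnv() runs.
const originalKey = process.env.OPENAI_API_KEY;

// ks.setupEnv([Providers.OpenAI]) would run here; simulate its overwrite:
process.env.OPENAI_API_KEY = "ks_agent-token";

console.log(originalKey);                 // "sk-original" — preserved
console.log(process.env.OPENAI_API_KEY);  // "ks_agent-token"
```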
Full example
```typescript
import Keystore, { Providers } from "@keystore/sdk";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "langchain/document";

// Initialize Keystore FIRST
const ks = new Keystore({ agentToken: process.env.KS_TOKEN! });
ks.setupEnv([Providers.OpenAI, Providers.Anthropic]);

// Now LangChain reads vault-pointed env vars automatically
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const llm = new ChatAnthropic({ model: "claude-sonnet-4-20250514", maxTokens: 512 });

async function main() {
  const docs = [
    new Document({ pageContent: "Keystore is a credential vault for AI agents." }),
    new Document({ pageContent: "Agents use scoped tokens instead of real API keys." }),
  ];

  const store = await MemoryVectorStore.fromDocuments(docs, embeddings);
  const results = await store.similaritySearch("What is Keystore?", 1);

  const answer = await llm.invoke([
    { role: "system", content: `Context: ${results[0].pageContent}` },
    { role: "user", content: "What is Keystore?" },
  ]);

  console.log(answer.content);
}

main();
```

Next steps
- Use interceptAll() if you prefer global fetch patching over env var rewriting
- Add database access with Neon Postgres
- Set up production controls with rate limits and webhooks
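For intuition on the interceptAll() alternative: per this guide, it patches globalThis.fetch to reroute provider requests through the vault. A conceptual, dependency-free sketch of the kind of URL rewrite involved (the function name and the exact vault path shape are illustrative assumptions, not the SDK's implementation):

```typescript
// Illustrative rewrite: send api.openai.com traffic to the vault's
// /v1/openai prefix, leaving other hosts untouched.
function rewriteToVault(rawUrl: string): string {
  const url = new URL(rawUrl);
  if (url.hostname === "api.openai.com") {
    // Assumed path shape: vault prefix replaces the provider's /v1 prefix.
    url.pathname = "/v1/openai" + url.pathname.replace(/^\/v1/, "");
    url.hostname = "vault.keystore.com";
  }
  return url.toString();
}

console.log(rewriteToVault("https://api.openai.com/v1/chat/completions"));
// → "https://vault.keystore.com/v1/openai/chat/completions"
```

interceptAll() would apply a rewrite like this inside a patched globalThis.fetch, so even libraries that ignore env vars still hit the vault.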