Framework Integration: LangChain RAG Pipeline with setupEnv

Intermediate · setupEnv · openai, anthropic · 3 min read

Frameworks like LangChain, CrewAI, and AutoGPT read API keys from environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.). setupEnv() rewrites these env vars to point at the Keystore vault — zero changes to your framework code.

What you'll build

A RAG (Retrieval-Augmented Generation) pipeline using LangChain with OpenAI embeddings and Anthropic Claude for generation. All credentials are resolved from the vault.

Prerequisites

  • A Keystore account with an agent token
  • OpenAI and Anthropic keys in your vault
  • Node.js 18+

Setup

1. Install dependencies

```bash
npm install @keystore/sdk @langchain/openai @langchain/anthropic langchain
```

2. Call setupEnv before any framework code

```typescript
import Keystore, { Providers } from "@keystore/sdk";

const ks = new Keystore({ agentToken: process.env.KS_TOKEN! });

// Rewrite OPENAI_BASE_URL, OPENAI_API_KEY, ANTHROPIC_BASE_URL, etc.
ks.setupEnv([Providers.OpenAI, Providers.Anthropic]);
```

After this call, process.env.OPENAI_BASE_URL points to vault.keystore.com/v1/openai and process.env.OPENAI_API_KEY is set to your agent token. LangChain reads these automatically.
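Conceptually, the rewrite is just a pair of env assignments per provider. A simplified sketch of the idea, not the SDK's actual implementation (the vault URL and token values here are illustrative):

```typescript
// Simplified model of what setupEnv() does for the OpenAI provider:
// point the base URL at the vault and swap the real key for the agent token.
function setupEnvSketch(agentToken: string): void {
  process.env.OPENAI_BASE_URL = "https://vault.keystore.com/v1/openai";
  process.env.OPENAI_API_KEY = agentToken;
}

setupEnvSketch("ks_example_token");
console.log(process.env.OPENAI_BASE_URL); // https://vault.keystore.com/v1/openai
```

Any client library that honors OPENAI_BASE_URL then talks to the vault without knowing it.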

Note: Call setupEnv() before importing or initializing any framework modules that read env vars at import time.
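In ESM, static imports are hoisted above all other statements, so a module that reads env vars at import time would load before setupEnv() runs. Dynamic import() avoids this; a dependency-free sketch of the ordering (node:path stands in for a framework module, and the setupEnv call is simulated with a plain assignment):

```typescript
// 1. Configure the environment first (stands in for ks.setupEnv(...))
process.env.OPENAI_BASE_URL = "https://vault.keystore.com/v1/openai";

// 2. A dynamic import evaluates after the line above, unlike a hoisted static
//    import, so import-time env reads in the loaded module see the vault URL.
const framework = await import("node:path");

console.log(process.env.OPENAI_BASE_URL); // https://vault.keystore.com/v1/openai
```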

3. Build the RAG pipeline

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "langchain/document";

// LangChain reads OPENAI_BASE_URL and OPENAI_API_KEY from env
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

// LangChain reads ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY from env
const llm = new ChatAnthropic({
  model: "claude-sonnet-4-20250514",
  maxTokens: 512,
});
```

Both clients are now routing through Keystore without any explicit configuration.

4. Index documents and query

```typescript
// Sample documents
const docs = [
  new Document({
    pageContent: "Keystore encrypts all credentials with AES-256-GCM at rest.",
    metadata: { source: "security-docs" },
  }),
  new Document({
    pageContent: "Agent tokens use the ks_ prefix and are stored as SHA-256 hashes.",
    metadata: { source: "security-docs" },
  }),
  new Document({
    pageContent: "interceptAll() patches globalThis.fetch to route requests through the vault.",
    metadata: { source: "sdk-docs" },
  }),
];

// Create vector store with OpenAI embeddings (routed through Keystore)
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);

// Retrieve relevant documents
const query = "How does Keystore handle encryption?";
const relevantDocs = await vectorStore.similaritySearch(query, 2);

// Generate answer with Claude (routed through Keystore)
const context = relevantDocs.map((d) => d.pageContent).join("\n");
const response = await llm.invoke([
  {
    role: "system",
    content: `Answer based on this context:\n${context}`,
  },
  { role: "user", content: query },
]);

console.log(response.content);
```
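Under the hood, MemoryVectorStore ranks documents by cosine similarity between the query embedding and each document embedding. A dependency-free sketch of that ranking step (the three-dimensional "embeddings" below are toy values, not real model output):

```typescript
// Cosine similarity between two embedding vectors
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy vectors: the query points in nearly the same direction as the first doc
const queryVec = [1, 0, 1];
const corpus = [
  { text: "encryption doc", vec: [0.9, 0.1, 0.8] },
  { text: "unrelated doc", vec: [0, 1, 0] },
];

// Sort descending by similarity to the query, like similaritySearch does
const ranked = [...corpus].sort(
  (x, y) => cosine(queryVec, y.vec) - cosine(queryVec, x.vec)
);
console.log(ranked[0].text); // encryption doc
```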

What setupEnv() actually sets

For each provider, setupEnv() writes specific environment variables:

| Provider  | Variables set                           |
| --------- | --------------------------------------- |
| openai    | OPENAI_BASE_URL, OPENAI_API_KEY         |
| anthropic | ANTHROPIC_BASE_URL, ANTHROPIC_API_KEY   |
| resend    | RESEND_BASE_URL, RESEND_API_KEY         |
| neon      | DATABASE_URL                            |
| s3        | AWS_ENDPOINT_URL_S3                     |
Warning: setupEnv() overwrites existing env vars. If you need the original values for something else, read them before calling setupEnv().
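Snapshotting before the overwrite is a one-liner. A minimal sketch in which the original key and the agent token are placeholder values, and the setupEnv() call is simulated with a plain assignment:

```typescript
// Illustrative starting state: a real key already present in the environment
process.env.OPENAI_API_KEY = "sk-original";

// Snapshot anything you still need BEFORE calling setupEnv()
const saved = { OPENAI_API_KEY: process.env.OPENAI_API_KEY };

// setupEnv() would overwrite it with the agent token, e.g.:
process.env.OPENAI_API_KEY = "ks_agent_token";

// The snapshot keeps the original value
console.log(saved.OPENAI_API_KEY); // sk-original
```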

Full example

```typescript
import Keystore, { Providers } from "@keystore/sdk";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "langchain/document";

// Initialize Keystore FIRST
const ks = new Keystore({ agentToken: process.env.KS_TOKEN! });
ks.setupEnv([Providers.OpenAI, Providers.Anthropic]);

// Now LangChain reads vault-pointed env vars automatically
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const llm = new ChatAnthropic({ model: "claude-sonnet-4-20250514", maxTokens: 512 });

async function main() {
  const docs = [
    new Document({ pageContent: "Keystore is a credential vault for AI agents." }),
    new Document({ pageContent: "Agents use scoped tokens instead of real API keys." }),
  ];

  const store = await MemoryVectorStore.fromDocuments(docs, embeddings);
  const results = await store.similaritySearch("What is Keystore?", 1);

  const answer = await llm.invoke([
    { role: "system", content: `Context: ${results[0].pageContent}` },
    { role: "user", content: "What is Keystore?" },
  ]);

  console.log(answer.content);
}

main();
```

Next steps