Ultra-fast LLM inference
Access Groq's LPU-powered inference for the fastest token generation available. Perfect for latency-sensitive agent workloads.
Use Groq through Keystore with zero code changes. Keys are resolved from the vault and injected at request time.
import Keystore from "@keystore/sdk";
import OpenAI from "openai";
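
// Authenticate the agent, then intercept outbound requests so keys are injected from the vault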
const ks = new Keystore({ agentToken: process.env.KS_TOKEN! });
ks.interceptAll();
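
// No apiKey is set here: Keystore resolves the Groq key and injects it at request time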
// Groq uses an OpenAI-compatible API
const groq = new OpenAI({
  baseURL: "https://api.groq.com/openai/v1",
});
const completion = await groq.chat.completions.create({
  model: "llama-3.1-70b-versatile",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
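For latency-sensitive agents, you can also stream tokens as they are generated instead of waiting for the full response. This is a minimal sketch against the same OpenAI-compatible endpoint; it assumes the same client and interception setup as above.

// Stream tokens as they arrive for lower perceived latency
const stream = await groq.chat.completions.create({
  model: "llama-3.1-70b-versatile",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});

for await (const chunk of stream) {
  // Each chunk carries an incremental delta of the assistant message
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}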
Request access and our concierge team will provision credentials for you, usually within 24 hours. No setup is required on your end.
Request Access