Overview
The Lava SDK includes pre-configured URLs for 26+ popular AI providers. These convenience URLs handle proper routing through Lava’s proxy while maintaining the native provider API format.
All provider integrations follow the same pattern:
- Generate a forward token
- Make the request to lava.providers.[provider] with the token
- Lava forwards to the actual provider and tracks usage
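The three steps above boil down to a plain fetch call with the forward token sent as a Bearer token. A minimal sketch of step 2 — `buildRequestInit` and `ChatBody` are illustrative names, not SDK exports:

```typescript
// Hypothetical helper illustrating the request shape every provider call shares:
// native provider body, Lava forward token in the Authorization header.
type ChatBody = {
  model: string;
  messages: { role: string; content: string }[];
};

function buildRequestInit(forwardToken: string, body: ChatBody) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${forwardToken}`
    },
    body: JSON.stringify(body)
  };
}

// Usage (assuming `lava` and `forwardToken` from the examples below):
// const response = await fetch(
//   lava.providers.openai + '/chat/completions',
//   buildRequestInit(forwardToken, {
//     model: 'gpt-4o-mini',
//     messages: [{ role: 'user', content: 'Hello!' }]
//   })
// );
```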
The SDK includes these provider URLs out of the box:
| Provider | SDK Property | API Compatibility |
|---|---|---|
| OpenAI | lava.providers.openai | OpenAI-native |
| Anthropic | lava.providers.anthropic | Anthropic Messages API |
| Google Gemini | lava.providers.google | Gemini-native |
| Google (OpenAI-compatible) | lava.providers.googleOpenaiCompatible | OpenAI format |
| Mistral | lava.providers.mistral | OpenAI-compatible |
| DeepSeek | lava.providers.deepseek | OpenAI-compatible |
| xAI (Grok) | lava.providers.xai | OpenAI-compatible |
| Groq | lava.providers.groq | OpenAI-compatible |
| Together AI | lava.providers.together | OpenAI-compatible |
| Cohere | lava.providers.cohere | OpenAI-compatible |
| Hyperbolic | lava.providers.hyperbolic | OpenAI-compatible |
| SambaNova | lava.providers.sambanova | OpenAI-compatible |
| DeepInfra | lava.providers.deepinfra | OpenAI-compatible |
| Cerebras | lava.providers.cerebras | OpenAI-compatible |
| Fireworks | lava.providers.fireworks | OpenAI-compatible |
| Nebius | lava.providers.nebius | OpenAI-compatible |
| Inference.net | lava.providers.inference | OpenAI-compatible |
| Novita | lava.providers.novita | OpenAI-compatible |
| ElevenLabs | lava.providers.elevenlabs | ElevenLabs-native |
| Vercel AI | lava.providers.vercel | OpenAI-compatible |
| Kluster | lava.providers.kluster | OpenAI-compatible |
| Parasail | lava.providers.parasail | OpenAI-compatible |
| Targon | lava.providers.targon | OpenAI-compatible |
| GMI Cloud | lava.providers.gmicloud | OpenAI-compatible |
| Chutes | lava.providers.chutes | OpenAI-compatible |
| Baseten | lava.providers.baseten | OpenAI-compatible |
Integration Examples
OpenAI
The most common provider integration using OpenAI’s chat completions API.
import { Lava } from '@lavapayments/nodejs';
// Validate required environment variables
const secretKey = process.env.LAVA_SECRET_KEY;
if (!secretKey) {
throw new Error(
'LAVA_SECRET_KEY environment variable is required. ' +
'Get your key from https://lavapayments.com/dashboard'
);
}
const lava = new Lava(secretKey, {
apiVersion: '2025-04-28.v1'
});
// Generate forward token
const connectionSecret = process.env.CONNECTION_SECRET;
const productSecret = process.env.PRODUCT_SECRET;
if (!connectionSecret || !productSecret) {
throw new Error('CONNECTION_SECRET and PRODUCT_SECRET are required');
}
const forwardToken = lava.generateForwardToken({
connection_secret: connectionSecret,
product_secret: productSecret
});
// Make request to OpenAI via Lava
const response = await fetch(lava.providers.openai + '/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`
},
body: JSON.stringify({
model: 'gpt-4o-mini',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Explain quantum computing in simple terms.' }
],
temperature: 0.7
})
});
const data = await response.json();
console.log(data.choices[0].message.content);
// Check usage from response body
const tokensUsed = data.usage.total_tokens;
const requestId = response.headers.get('x-lava-request-id');
console.log(`Used ${tokensUsed} tokens, request ID: ${requestId}`);
Supported Endpoints:
/chat/completions - Chat API
/completions - Legacy completions
/embeddings - Text embeddings
/images/generations - DALL-E image generation
/audio/speech - Text-to-speech
/audio/transcriptions - Whisper transcription
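The other endpoints reuse the same forward token and base URL; only the path and body change. A sketch for embeddings — the request body follows OpenAI's embeddings format, and `embeddingsBody` is an illustrative helper, not an SDK export:

```typescript
// Illustrative helper: build an OpenAI-format embeddings request body.
function embeddingsBody(input: string | string[], model = 'text-embedding-3-small') {
  return { model, input };
}

// Usage (assuming `lava` and `forwardToken` from the example above):
// const response = await fetch(lava.providers.openai + '/embeddings', {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//     'Authorization': `Bearer ${forwardToken}`
//   },
//   body: JSON.stringify(embeddingsBody('Explain quantum computing'))
// });
// const data = await response.json();
// console.log(data.data[0].embedding.length); // vector dimensionality
```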
Anthropic (Claude)
Anthropic uses its Messages API format, which differs from OpenAI's chat completions format.
// Assumes lava client is already initialized (see OpenAI example above)
const forwardToken = lava.generateForwardToken({
connection_secret: connectionSecret,
product_secret: productSecret
});
const response = await fetch(lava.providers.anthropic + '/messages', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`,
'anthropic-version': '2023-06-01'
},
body: JSON.stringify({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{ role: 'user', content: 'Explain quantum computing in simple terms.' }
]
})
});
const data = await response.json();
console.log(data.content[0].text);
Supported Endpoints:
/messages - Claude Messages API
/messages/stream - Streaming responses
Google Gemini
Google provides both native Gemini API and OpenAI-compatible endpoints.
Option 1: Native Gemini API
// Assumes lava client is already initialized (see OpenAI example above)
const forwardToken = lava.generateForwardToken({
connection_secret: connectionSecret,
product_secret: productSecret
});
const response = await fetch(
lava.providers.google + '/models/gemini-2.0-flash-exp:generateContent',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`
},
body: JSON.stringify({
contents: [
{
parts: [
{ text: 'Explain quantum computing in simple terms.' }
]
}
]
})
}
);
const data = await response.json();
console.log(data.candidates[0].content.parts[0].text);
Option 2: OpenAI-Compatible Format
const response = await fetch(
lava.providers.googleOpenaiCompatible + '/chat/completions',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`
},
body: JSON.stringify({
model: 'gemini-2.0-flash-exp',
messages: [
{ role: 'user', content: 'Explain quantum computing in simple terms.' }
]
})
}
);
const data = await response.json();
console.log(data.choices[0].message.content);
Use googleOpenaiCompatible if you’re already familiar with OpenAI’s API format. It makes switching between providers easier.
Custom Provider Integration
For providers not in the pre-configured list, use the forward endpoint directly:
// Assumes lava client is already initialized (see OpenAI example above)
const customProviderUrl = `${lava.baseUrl}forward?u=https://api.customprovider.com/v1`;
const forwardToken = lava.generateForwardToken({
connection_secret: connectionSecret,
product_secret: productSecret
});
const response = await fetch(customProviderUrl + '/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`
},
body: JSON.stringify({
// Provider-specific request format
})
});
The ?u= parameter tells Lava where to forward the request.
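Because the target URL travels in a query parameter, it is safest to URL-encode it, endpoint path included. A sketch — `buildForwardUrl` is an illustrative helper, not an SDK method:

```typescript
// Illustrative helper: build a Lava forward URL for an arbitrary provider.
// Encoding keeps slashes and any query characters in the target URL intact.
function buildForwardUrl(lavaBaseUrl: string, providerUrl: string): string {
  return `${lavaBaseUrl}forward?u=${encodeURIComponent(providerUrl)}`;
}

// Usage (assuming `lava` is initialized):
// const url = buildForwardUrl(
//   lava.baseUrl,
//   'https://api.customprovider.com/v1/chat/completions'
// );
// const response = await fetch(url, { /* method, headers, body as usual */ });
```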
Provider Authentication Modes
Managed Keys (Default)
Lava manages the AI provider API keys and handles billing through customer wallets:
const forwardToken = lava.generateForwardToken({
connection_secret: 'conn_customer_wallet',
product_secret: 'prod_pricing_config'
});
Billing: Customer’s wallet is charged
BYOK (Bring Your Own Key)
Use your own AI provider API keys for direct provider billing:
// Your actual provider API key
const providerKey = process.env.OPENAI_API_KEY;
if (!providerKey) {
throw new Error('OPENAI_API_KEY environment variable is required for BYOK mode');
}
const forwardToken = lava.generateForwardToken({
connection_secret: null,
product_secret: null,
provider_key: providerKey
});
Billing: You are charged directly by the AI provider
Use case: Development, testing, or usage tracking without wallet billing
BYOK mode still tracks usage through Lava for analytics, but does not charge customer wallets.
Streaming Responses
All providers support streaming responses. The streaming format depends on the provider:
OpenAI-Compatible Streaming
const response = await fetch(lava.providers.openai + '/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`
},
body: JSON.stringify({
model: 'gpt-4o-mini',
messages: [
{ role: 'user', content: 'Write a story about a robot.' }
],
stream: true // Enable streaming
})
});
const reader = response.body?.getReader();
const decoder = new TextDecoder();
try {
while (true) {
const { done, value } = await reader!.read();
if (done) break;
// stream: true keeps multi-byte characters intact across chunk boundaries
const chunk = decoder.decode(value, { stream: true });
const lines = chunk.split('\n').filter(line => line.trim() !== '');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;
const parsed = JSON.parse(data);
const content = parsed.choices[0]?.delta?.content;
if (content) {
process.stdout.write(content);
}
}
}
}
} finally {
// Always release the reader lock to prevent memory leaks
reader?.releaseLock();
}
Anthropic Streaming
const response = await fetch(lava.providers.anthropic + '/messages', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`,
'anthropic-version': '2023-06-01'
},
body: JSON.stringify({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{ role: 'user', content: 'Write a story about a robot.' }
],
stream: true
})
});
const reader = response.body?.getReader();
const decoder = new TextDecoder();
try {
while (true) {
const { done, value } = await reader!.read();
if (done) break;
// stream: true keeps multi-byte characters intact across chunk boundaries
const chunk = decoder.decode(value, { stream: true });
const lines = chunk.split('\n').filter(line => line.trim() !== '');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = JSON.parse(line.slice(6));
if (data.type === 'content_block_delta') {
process.stdout.write(data.delta.text);
}
}
}
}
} finally {
// Always release the reader lock to prevent memory leaks
reader?.releaseLock();
}
Request Metadata
Add custom metadata to track requests by feature, user, or any other dimension:
const response = await fetch(lava.providers.openai + '/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`,
'X-Lava-Metadata-Feature': 'chat',
'X-Lava-Metadata-User-ID': 'user_123',
'X-Lava-Metadata-Session-ID': 'session_abc',
'X-Lava-Metadata-Environment': 'production'
},
body: JSON.stringify({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }]
})
});
// Later, filter requests by metadata
const requests = await lava.requests.list({
metadata_filters: {
feature: 'chat',
environment: 'production'
}
});
Error Handling
Lava passes through provider errors while adding context:
try {
const response = await fetch(lava.providers.openai + '/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${forwardToken}`
},
body: JSON.stringify({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }]
})
});
if (!response.ok) {
const error = await response.json();
console.error('Request failed:', error);
// Check Lava-specific errors
if (response.status === 402) {
console.error('Insufficient wallet balance');
// Prompt user to add funds
} else if (response.status === 401) {
console.error('Invalid forward token');
} else {
// Provider-specific error
console.error('Provider error:', error);
}
} else {
// Only read the body here on success; a response body can be consumed once
const data = await response.json();
console.log(data.choices[0].message.content);
}
} catch (error) {
console.error('Network error:', error);
}
Common Error Codes
| Status | Meaning | Solution |
|---|---|---|
| 401 | Invalid forward token | Regenerate token with valid secrets |
| 402 | Insufficient wallet balance | User needs to add funds |
| 429 | Rate limit exceeded | Implement backoff and retry |
| 500 | Provider error | Check provider status page |
| 502 | Provider unavailable | Retry with exponential backoff |
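The retryable statuses (429 and 502) can be handled with a simple exponential-backoff wrapper. A sketch with illustrative delays and retry counts, not Lava-recommended values:

```typescript
// Statuses worth retrying, per the table above.
const RETRYABLE_STATUSES = new Set([429, 502]);

// Exponential backoff: 500ms, 1s, 2s, 4s, ...
function backoffMs(attempt: number, baseMs = 500): number {
  return baseMs * 2 ** attempt;
}

// Generic over anything with a numeric status, so it works with fetch Responses.
async function fetchWithRetry<T extends { status: number }>(
  doFetch: () => Promise<T>,
  maxRetries = 3,
  baseMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    const response = await doFetch();
    if (!RETRYABLE_STATUSES.has(response.status) || attempt >= maxRetries) {
      return response;
    }
    await new Promise(resolve => setTimeout(resolve, backoffMs(attempt, baseMs)));
  }
}

// Usage:
// const response = await fetchWithRetry(() =>
//   fetch(lava.providers.openai + '/chat/completions', requestInit)
// );
```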
Multi-Provider Routing
Route requests to different providers based on model or availability:
function getProviderForModel(model: string) {
if (model.startsWith('gpt-')) {
return lava.providers.openai;
} else if (model.startsWith('claude-')) {
return lava.providers.anthropic;
} else if (model.startsWith('gemini-')) {
return lava.providers.google;
} else if (model.startsWith('deepseek-')) {
return lava.providers.deepseek;
} else {
throw new Error(`Unknown model: ${model}`);
}
}
// Usage
const providerUrl = getProviderForModel('gpt-4o-mini');
const response = await fetch(providerUrl + '/chat/completions', {
// ... request config
});
Best Practices
1. Reuse Forward Tokens
Generate tokens per-session, not per-request:
// ✅ Good: Generate once per user session
const userToken = lava.generateForwardToken({
connection_secret: user.connectionSecret,
product_secret: productSecret // From validated env var
});
// Use for multiple requests
await makeRequest1(userToken);
await makeRequest2(userToken);
// ❌ Bad: Generate for every request
async function makeRequest() {
const token = lava.generateForwardToken({ /* ... */ });
// ...
}
2. Handle Streaming Properly
Always clean up streams:
const reader = response.body?.getReader();
try {
while (true) {
const { done, value } = await reader!.read();
if (done) break;
// Process chunk
}
} finally {
reader?.releaseLock();
}
3. Use Consistent Metadata
Establish a metadata schema across your application:
type MetadataSchema = {
feature: 'chat' | 'search' | 'code-gen';
user_id: string;
session_id: string;
environment: 'dev' | 'staging' | 'production';
};
// Apply consistently
headers: {
'X-Lava-Metadata-Feature': 'chat',
'X-Lava-Metadata-User-ID': userId,
'X-Lava-Metadata-Session-ID': sessionId,
'X-Lava-Metadata-Environment': 'production'
}
4. Monitor Usage
Track usage per provider for cost optimization:
const usage = await lava.usage.retrieve({
start: monthStart.toISOString()
});
// Analyze which providers are most expensive
const requests = await lava.requests.list({ limit: 1000 });
const byProvider = requests.data.reduce((acc, req) => {
acc[req.provider] = (acc[req.provider] || 0) + parseFloat(req.total_request_cost);
return acc;
}, {} as Record<string, number>);
console.log('Cost by provider:', byProvider);