Overview

Mistral AI provides state-of-the-art open and commercial large language models optimized for performance and efficiency, known for competitive pricing and strong performance across multiple benchmarks. Key Features:
  • Advanced reasoning and coding capabilities
  • Function calling and JSON mode support
  • Multilingual support (English, French, German, Spanish, Italian)
  • OpenAI-compatible API for easy integration
Official Documentation: Mistral AI Docs

Authentication

Mistral uses Bearer token authentication with the OpenAI-compatible endpoint format. Header:
Authorization: Bearer YOUR_MISTRAL_API_KEY
Lava Forward Token:
${LAVA_SECRET_KEY}.${CONNECTION_SECRET}.${PRODUCT_SECRET}
For BYOK (Bring Your Own Key):
${LAVA_SECRET_KEY}.${CONNECTION_SECRET}.${PRODUCT_SECRET}.${YOUR_MISTRAL_API_KEY}
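Both token formats above are plain dot-joined segments, with the provider key appended as an optional fourth segment for BYOK. A minimal sketch of composing them (buildForwardToken is a hypothetical helper, not part of any Lava SDK):

```javascript
// Hypothetical helper: compose a Lava forward token from its segments.
// Segment names follow the formats shown above.
function buildForwardToken(lavaSecretKey, connectionSecret, productSecret, providerKey) {
  const segments = [lavaSecretKey, connectionSecret, productSecret];
  if (providerKey) {
    segments.push(providerKey); // BYOK: your Mistral API key as the 4th segment
  }
  return segments.join('.');
}
```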

Supported Models

Model | Context | Description | Use Case
mistral-large-latest | 128K | Flagship model with top-tier reasoning | Complex analysis, coding, research
mistral-small-latest | 32K | Fast and cost-effective | General chat, simple tasks
codestral-latest | 32K | Specialized for code generation | Coding, debugging, documentation
Pricing: See Mistral Pricing for current rates.

Quick Start Example

// 1. Set up your environment variables
const LAVA_FORWARD_TOKEN = process.env.LAVA_FORWARD_TOKEN;

// 2. Define the Mistral endpoint
const MISTRAL_ENDPOINT = 'https://api.mistral.ai/v1/chat/completions';

// 3. Make the request through Lava
const response = await fetch(
  `https://api.lavapayments.com/v1/forward?u=${encodeURIComponent(MISTRAL_ENDPOINT)}`,
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${LAVA_FORWARD_TOKEN}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'mistral-large-latest',
      messages: [
        {
          role: 'user',
          content: 'Explain quantum computing in simple terms.'
        }
      ],
      temperature: 0.7,
      max_tokens: 500
    })
  }
);

// 4. Parse response and extract usage
const data = await response.json();
console.log('Response:', data.choices[0].message.content);

// 5. Track usage (from response body)
const usage = data.usage;
console.log('Tokens used:', usage.total_tokens);

// 6. Get Lava request ID (from headers)
const requestId = response.headers.get('x-lava-request-id');
console.log('Lava Request ID:', requestId);

Available Endpoints

Mistral AI supports the following OpenAI-compatible endpoints:

Endpoint | Method | Description
/v1/chat/completions | POST | Text generation with conversation context
/v1/embeddings | POST | Generate text embeddings
/v1/models | GET | List available models
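The embeddings endpoint can be called through the same forward URL pattern as the quick start. A sketch, assuming Mistral's mistral-embed model and the standard OpenAI-shaped embeddings response (forwardUrl and getEmbedding are hypothetical helpers):

```javascript
// Same forward URL pattern as the quick start example.
const FORWARD_BASE = 'https://api.lavapayments.com/v1/forward';

function forwardUrl(providerEndpoint) {
  return `${FORWARD_BASE}?u=${encodeURIComponent(providerEndpoint)}`;
}

// Hypothetical helper: fetch an embedding vector for one string.
async function getEmbedding(text, forwardToken) {
  const response = await fetch(forwardUrl('https://api.mistral.ai/v1/embeddings'), {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${forwardToken}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'mistral-embed', // Mistral's embedding model
      input: [text]
    })
  });
  const data = await response.json();
  // OpenAI-shaped response: embeddings live under data.data[*].embedding
  return data.data[0].embedding;
}
```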

Usage Tracking

Usage data is returned in the response body (OpenAI format):
{
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 150,
    "total_tokens": 170
  }
}
Location: data.usage
Format: Standard OpenAI usage object
Lava Tracking: Automatically tracked via the x-lava-request-id header

Features & Capabilities

Function Calling:
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          }
        }
      }
    }
  ],
  "tool_choice": "auto"
}
JSON Mode:
{
  "response_format": { "type": "json_object" }
}
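With JSON mode enabled, the model returns a JSON string in message.content that still needs parsing. A small sketch (parseJsonModeReply is a hypothetical helper):

```javascript
// Hypothetical helper: parse a JSON-mode reply. With response_format
// json_object, message.content is a JSON-encoded string.
function parseJsonModeReply(data) {
  return JSON.parse(data.choices[0].message.content);
}
```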
Streaming:
{
  "stream": true
}
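With stream: true the response body arrives as server-sent events: each line looks like `data: {...}` and the stream ends with `data: [DONE]`, as in the OpenAI streaming format. A sketch of consuming it (parseSseLine and streamChat are hypothetical helpers):

```javascript
// Hypothetical helper: extract the text delta from one SSE line,
// assuming the OpenAI-compatible streaming chunk shape.
function parseSseLine(line) {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice(6);
  if (payload === '[DONE]') return null; // end-of-stream sentinel
  const chunk = JSON.parse(payload);
  return chunk.choices[0].delta.content ?? '';
}

// Hypothetical helper: read a streamed fetch Response and emit text as it arrives.
async function streamChat(response, onText) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffered = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split('\n');
    buffered = lines.pop(); // keep any partial trailing line for the next chunk
    for (const line of lines) {
      const text = parseSseLine(line.trim());
      if (text) onText(text);
    }
  }
}
```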

BYOK Support

Status: ✅ Supported (managed keys + BYOK)

BYOK Implementation:
  • Append your Mistral API key to the forward token: ${TOKEN}.${YOUR_MISTRAL_KEY}
  • Lava tracks usage and billing while you maintain key control
  • No additional Lava API key costs (metering-only mode available)
Getting a Mistral API Key:
  1. Sign up at Mistral AI Console
  2. Navigate to API Keys section
  3. Create a new API key
  4. Use in Lava forward token (4th segment)

Best Practices

  1. Model Selection: Use mistral-large-latest for complex reasoning, mistral-small-latest for speed
  2. Temperature: 0.7 for creative tasks, 0.1-0.3 for factual/deterministic outputs
  3. Context Management: Mistral Large supports a 128K context window, making it a good fit for long documents
  4. Error Handling: Mistral returns OpenAI-compatible errors with descriptive messages
  5. Rate Limits: Monitor x-ratelimit-* headers in responses
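The error-handling and rate-limit advice above can be sketched as follows. Mistral's OpenAI-compatible errors carry a body shaped like { error: { message, ... } }; the specific x-ratelimit-remaining header name is an assumption, since this guide only guarantees the x-ratelimit-* family (apiErrorMessage and ensureOk are hypothetical helpers):

```javascript
// Hypothetical helper: turn an error response body into a readable message,
// assuming the OpenAI-compatible { error: { message } } shape.
function apiErrorMessage(status, body) {
  return body?.error?.message ?? `HTTP ${status}`;
}

// Hypothetical helper: validate a fetch Response from the forward endpoint.
async function ensureOk(response) {
  // x-ratelimit-remaining is an assumed member of the x-ratelimit-* family
  const remaining = response.headers.get('x-ratelimit-remaining');
  if (remaining !== null && Number(remaining) === 0) {
    console.warn('Rate limit exhausted; back off before retrying');
  }
  if (!response.ok) {
    const body = await response.json().catch(() => ({}));
    throw new Error(apiErrorMessage(response.status, body));
  }
  return response;
}
```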

Additional Resources