
MODEL_PROVIDERS


Configure LLM providers for your agents. Switch between models with zero code changes.

SUPPORTED_PROVIDERS

The framework is provider-agnostic. We support all major LLM providers through a unified interface. You can mix and match models from different providers within the same agent workflow.

OpenAI

gpt-4o, gpt-3.5-turbo

Anthropic

claude-3-opus, claude-3-sonnet

Mistral

mistral-large, mixtral-8x7b

Ollama

llama3, phi3
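As a sketch of how providers can be mixed within one workflow (a hypothetical pairing; both constructors are documented in the sections below):

```typescript
import { Agent, OpenAIProvider, OllamaProvider } from '@akios/core'

// A customer-facing agent backed by a hosted model...
const researcher = new Agent({
  name: 'Researcher',
  provider: new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  })
})

// ...and a cheap local agent for internal drafts, in the same workflow
const drafter = new Agent({
  name: 'Drafter',
  provider: new OllamaProvider({
    baseUrl: 'http://localhost:11434',
    model: 'llama3'
  })
})
```

Because both agents accept the same provider interface, swapping either model later is a one-line change.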

PROVIDER_DETAILS

OPENAI_(RECOMMENDED)

Best for general purpose reasoning, tool use, and JSON mode. Most reliable for complex agent workflows.

MODELS

  • gpt-4o - Latest flagship model
  • gpt-4-turbo - Fast and cost-effective
  • gpt-3.5-turbo - Budget option

FEATURES

  • Tool calling
  • JSON mode
  • 128k context
  • Vision support
openai-config.ts
import { Agent, OpenAIProvider } from '@akios/core'

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o',
  temperature: 0.7,
  maxTokens: 1000
})

// Use in agent
const agent = new Agent({
  name: 'OpenAI Assistant',
  provider,
  tools: [/* your tools */]
})

ANTHROPIC_(CLAUDE)

Excellent for writing, analysis, and long-document tasks thanks to its large context window. Strong safety alignment and reasoning capabilities.

MODELS

  • claude-3-opus - Most capable
  • claude-3-sonnet - Balanced
  • claude-3-haiku - Fast & cheap

FEATURES

  • Tool calling
  • 200k context
  • Vision support
  • Constitutional AI
anthropic-config.ts
import { AnthropicProvider } from '@akios/core'

const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'claude-3-sonnet-20240229',
  temperature: 0.7,
  maxTokens: 4096
})

MISTRAL

Open-source models with strong performance. Good for European data residency requirements.

MODELS

  • mistral-large - Most capable
  • mistral-medium - Balanced
  • mistral-small - Fast inference

FEATURES

  • Tool calling
  • 32k context
  • Open source
  • EU hosting
mistral-config.ts
import { MistralProvider } from '@akios/core'

const provider = new MistralProvider({
  apiKey: process.env.MISTRAL_API_KEY,
  model: 'mistral-large-latest'
})

LOCAL_MODELS_(OLLAMA)

Run models locally for privacy, cost savings, and offline operation. Best for development and testing.

TOOL_USE_LIMITATIONS

Smaller local models (7B-13B parameters) often struggle with reliable JSON tool calling. Use larger models or robust system prompts.
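One mitigation is to pin a strict, format-focused system prompt on the local provider. A sketch, with the caveat that the `systemPrompt` option shown here is an illustrative assumption rather than a documented parameter:

```typescript
import { OllamaProvider } from '@akios/core'

// Hypothetical: constrain a small local model toward valid tool-call JSON
const provider = new OllamaProvider({
  baseUrl: 'http://localhost:11434',
  model: 'llama3',
  // Low temperature reduces malformed JSON
  temperature: 0.1,
  // `systemPrompt` is illustrative; check your provider's actual options
  systemPrompt: [
    'When calling a tool, reply with ONLY a JSON object:',
    '{"tool": "<name>", "arguments": { ... }}',
    'Never add prose before or after the JSON.'
  ].join('\n')
})
```

If tool calls still come back malformed, prefer a larger model (70B-class) over further prompt tuning.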

SUPPORTED_MODELS

  • llama3 - Meta's latest
  • mistral - Fast inference
  • codellama - Code specialized
  • neural-chat - Intel optimized

REQUIREMENTS

  • 8GB+ RAM
  • Ollama installed

NOTES

  • No API keys needed
  • Slower inference than hosted APIs
setup-ollama.sh
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3

# Start Ollama service
ollama serve
ollama-config.ts
import { OllamaProvider } from '@akios/core'

const provider = new OllamaProvider({
  baseUrl: 'http://localhost:11434',
  model: 'llama3',
  // Lower temperature for more deterministic outputs
  temperature: 0.1
})

AZURE_OPENAI

For enterprise deployments requiring SLA guarantees, compliance certifications, and private networking.

BENEFITS

  • Enterprise SLAs
  • Enterprise security
  • Private networking
  • Azure integration

REQUIREMENTS

  • Azure subscription
  • OpenAI resource
  • Custom domain
  • Enterprise support
azure-config.ts
import { AzureOpenAIProvider } from '@akios/core'

const provider = new AzureOpenAIProvider({
  endpoint: process.env.AZURE_ENDPOINT,
  apiKey: process.env.AZURE_API_KEY,
  deploymentName: 'my-gpt4-deployment',
  apiVersion: '2023-12-01-preview'
})

PROVIDER_SWITCHING_&_FALLBACKS

AKIOS supports automatic provider switching for reliability and cost optimization.

fallback-config.ts
import { Agent, FallbackProvider, OpenAIProvider, AnthropicProvider } from '@akios/core'

// Configure multiple providers
const providers = [
  new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o'
  }),
  new AnthropicProvider({
    apiKey: process.env.ANTHROPIC_API_KEY,
    model: 'claude-3-sonnet-20240229'
  })
]

// Create fallback provider
const provider = new FallbackProvider({
  providers,
  // Try OpenAI first, fallback to Anthropic on failure
  strategy: 'priority',
  // Retry failed requests up to 3 times
  maxRetries: 3
})

const agent = new Agent({
  name: 'Reliable Assistant',
  provider,
  tools: [/* your tools */]
})

COST_OPTIMIZATION

Use the CostOptimizedProvider to automatically route requests to the cheapest available model that meets your requirements.
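A sketch of what that might look like. Only `CostOptimizedProvider` itself appears in this API; the option names below (`requirements` and its fields) are illustrative assumptions:

```typescript
import { CostOptimizedProvider, OpenAIProvider, AnthropicProvider } from '@akios/core'

// Hypothetical options: candidates listed cheapest-first, with requirement
// flags that a request must satisfy before a model is considered.
const provider = new CostOptimizedProvider({
  providers: [
    new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY, model: 'gpt-3.5-turbo' }),
    new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY, model: 'claude-3-haiku-20240307' }),
    new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' })
  ],
  // Illustrative requirement flags, not documented parameters
  requirements: { toolCalling: true, minContext: 32_000 }
})
```

Requests that need tool calling or a large context would then skip models that cannot satisfy them, falling through to the next cheapest candidate.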