Provider Configuration Setup

After deploying the application, you can configure LLM providers through the web interface.

Overview

Providers define which LLM backends (Ollama, Anthropic, OpenAI, Gemini, etc.) are available to the system. All provider configuration is stored in the PostgreSQL database.

Quick Start

1. Start the Application

docker-compose up -d

The web interface will be available at http://localhost:8081.
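Containers can take a few seconds to become reachable after docker-compose returns. The following is a minimal readiness-poll sketch using only the Python standard library; the URL and retry counts are assumptions you may want to adjust for your deployment:

```python
import time
import urllib.error
import urllib.request

def wait_for(url: str, attempts: int = 30, delay: float = 1.0) -> bool:
    """Poll `url` until the server answers any HTTP response, or give up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except urllib.error.HTTPError:
            return True  # the server is up, even if it answered with an error code
        except (urllib.error.URLError, OSError):
            pass  # nothing listening yet; wait and retry
        time.sleep(delay)
    return False

if __name__ == "__main__":
    print("ready" if wait_for("http://localhost:8081") else "timed out")
```

Any HTTP response (even 404) counts as "up" here, since the goal is only to confirm the web server is listening.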

2. Access Provider Settings

  1. Open http://localhost:8081 in your browser
  2. Navigate to the Settings or Providers section (usually in the dashboard)
  3. Click Add Provider or Configure Provider

3. Add LLM Providers

Fill in the provider form with your chosen provider’s information:

For Local Runtime

Ollama

For Cloud Runtime

Choose one or more of these options:

Anthropic Claude

OpenAI

Google Gemini

4. Save and Test

After entering provider details, click Save or Test Connection to verify the configuration works.

Supported Platforms

Platform        Local?  Requires API Key?  Notes
Ollama          Yes     No                 Local inference; requires Ollama running on the machine
Anthropic       No      Yes                Claude API from Anthropic
OpenAI          No      Yes                GPT models from OpenAI
Google Gemini   No      Yes                Gemini models from Google

Troubleshooting

“Provider connection failed”

  - Verify the backend is reachable: for Ollama, confirm the Ollama service is running; for cloud providers, check outbound network access.
  - Double-check the base URL and port in the provider configuration.

“API key is invalid”

  - Confirm the key was copied in full, with no leading or trailing whitespace.
  - Check in the provider’s dashboard that the key is still active and has not been revoked.

Model not found

  - Make sure the model name exactly matches the provider’s published identifier.
  - For Ollama, pull the model first (for example, ollama pull embeddinggemma).

Advanced: REST API (for Developers)

For programmatic provider configuration, you can use the REST API directly.

Create Provider via API

curl -X POST http://localhost:8080/providers \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "name": "claude-haiku",
    "platform": "anthropic",
    "model": "claude-haiku-4-5-20251001",
    "api_key": "sk-ant-...",
    "timeout": 120
  }'

List All Providers

curl http://localhost:8080/providers \
  -H "Authorization: Bearer YOUR_API_KEY"

Update Provider

curl -X PUT http://localhost:8080/providers/{id} \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"api_key": "sk-..."}'

Delete Provider

curl -X DELETE http://localhost:8080/providers/{id} \
  -H "Authorization: Bearer YOUR_API_KEY"

Assign Provider to Job

curl -X PUT http://localhost:8080/providers/jobs/frontal-cortex \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"provider_id": 1}'
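For scripting, the curl calls above can be wrapped in a small client. Below is a minimal sketch using only the Python standard library; the base URL, paths, and JSON fields simply mirror the curl examples, and error handling is omitted for brevity:

```python
import json
import urllib.request

class ProviderClient:
    """Thin wrapper around the provider endpoints shown in the curl examples."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _build(self, method: str, path: str, body=None) -> urllib.request.Request:
        # Build the request with the same headers the curl examples use.
        data = json.dumps(body).encode() if body is not None else None
        return urllib.request.Request(
            f"{self.base_url}{path}",
            data=data,
            method=method,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )

    def _request(self, method: str, path: str, body=None):
        with urllib.request.urlopen(self._build(method, path, body)) as resp:
            raw = resp.read()
            return json.loads(raw) if raw else None

    def create_provider(self, cfg: dict):
        return self._request("POST", "/providers", cfg)

    def list_providers(self):
        return self._request("GET", "/providers")

    def update_provider(self, provider_id, cfg: dict):
        return self._request("PUT", f"/providers/{provider_id}", cfg)

    def delete_provider(self, provider_id):
        return self._request("DELETE", f"/providers/{provider_id}")

    def assign_to_job(self, job: str, provider_id):
        return self._request("PUT", f"/providers/jobs/{job}",
                             {"provider_id": provider_id})
```

For example, ProviderClient("http://localhost:8080", "YOUR_API_KEY").list_providers() is equivalent to the list call above.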

Security Best Practices

Protecting API Keys

  - Never commit API keys to version control; enter them through the UI or inject them via environment variables.
  - Provider configuration, including API keys, lives in the PostgreSQL database, so restrict database access to the application itself.
  - Rotate keys periodically and delete providers whose keys are no longer needed.

Network Security

  - Do not expose ports 8080 and 8081 directly to the public internet; place them behind a reverse proxy with TLS.
  - Require the bearer token on every REST API call, as the examples above do.

Embedding Models

If you need to use embedding models (e.g., embeddinggemma from Ollama):

  1. Configure the embedding provider in the UI or via API
  2. Set the model name (e.g., embeddinggemma:latest)
  3. Set dimensions to match the model’s output (usually 768)
  4. Ensure the system is configured to use this provider for embeddings
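The checklist above can be captured in a small helper that validates the dimension setting before the provider is saved. This is a sketch only: the field names are assumptions modeled on the provider examples earlier, and the table of known models encodes just embeddinggemma's usual 768-dimension output:

```python
# Known embedding models and their output dimensions (assumed, extend as needed).
EXPECTED_DIMENSIONS = {"embeddinggemma:latest": 768}

def embedding_provider_config(model: str, dimensions: int) -> dict:
    """Build an embedding-provider payload, checking dimensions for known models."""
    expected = EXPECTED_DIMENSIONS.get(model)
    if expected is not None and dimensions != expected:
        raise ValueError(
            f"{model} outputs {expected}-dimensional vectors, got {dimensions}"
        )
    return {
        "name": "ollama-embeddings",
        "platform": "ollama",
        "model": model,
        "dimensions": dimensions,
    }
```

For example, embedding_provider_config("embeddinggemma:latest", 768) succeeds, while a mismatched dimension count raises before any misconfigured provider reaches the database.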