Provider Configuration
Configure summary model providers directly in the app: set up Ollama, OpenAI, Claude, Groq, OpenRouter, and custom endpoints, and manage API keys and models.
Last updated: March 25, 2026
TL;DR
Meetily lets you configure your AI providers directly in the app - no config files or terminal commands needed. Browse provider cards, enter API keys, set endpoint URLs, and select models from a dropdown. Supports 8 providers: Built-in AI, Hosted AI, Claude, OpenAI, Groq, Ollama, OpenRouter, and Custom (OpenAI-compatible). Find it in Meeting Details > Preferences > Summary Model.
Provider Configuration
Meetily supports 8 AI providers for generating meeting summaries. Each provider has different strengths - Ollama runs locally for privacy, Claude and OpenAI offer powerful cloud models, Groq delivers ultra-fast inference, OpenRouter gives access to models from multiple providers, and Custom connects to any OpenAI-compatible server you control. This page covers summary model providers. For transcription engine settings, see Retranscription.
Think of it like setting up email accounts - you add a provider, enter credentials, and pick a model. Once configured, Meetily uses that provider whenever you generate a summary.
Configure providers in Meeting Details > Preferences > Summary Model.
Quick Start
- Open any meeting from the sidebar
- Click the Preferences icon (gear) in the top-right toolbar
- Select the Summary Model tab
- Browse the provider cards - click one to select it
- Click the Settings icon (gear) on the provider card to configure it
- Enter your API key, endpoint URL, or model selection
- Click Save - you're ready to generate summaries
Want to use a different AI model?
→ Open Preferences > Summary Model
→ Click a provider card
→ Click Settings (gear icon)
→ Enter credentials and pick a model
→ Save - done!
Available Providers
Provider Cards
When you open the Summary Model tab, you see a grid of provider cards:
| Provider | Type | Configuration needed |
|---|---|---|
| Built-in AI | Local | None - works offline |
| Hosted AI | Cloud | None - pre-configured |
| Claude | Cloud | API key |
| OpenAI | Cloud | API key |
| Groq | Cloud | API key |
| Ollama | Local | Endpoint URL + model selection |
| OpenRouter | Cloud | API key |
| Custom | Any endpoint | Endpoint URL + model name + optional API key |
Click any card to select that provider. The selected card shows a checkmark and is highlighted in blue.
Provider Info
Hover over any provider card to see action buttons. Click the info icon (circle with "i") for a tooltip describing what the provider does and when to use it.
Configuring Each Provider
Built-in AI (Local)
Runs a small AI model directly inside Meetily. No setup required.
Setup:
- Click the Built-in AI provider card
- That's it - no configuration needed
The configuration modal shows an info box: "Offline AI runs locally, no API key required."
Best for quick summaries when you don't have Ollama or cloud access set up.
Hosted AI
A pre-configured cloud service provided by Meetily.
Setup:
- Click the Hosted AI provider card
- No configuration needed - it's pre-configured
Claude (Cloud)
Uses Anthropic's Claude models for high-quality, nuanced meeting summaries.
Setup:
- Click the Claude provider card
- Click Settings (gear)
- Enter your API Key in the password field:
- The field shows dots by default (like a password)
- Click the eye icon to reveal the key
- Click again to hide it
- Click Save
| Field | Required | Description |
|---|---|---|
| API Key | Yes | Your Anthropic API key |
Where to get an API key: Sign up at console.anthropic.com and create a key in the API Keys section.
OpenAI (Cloud)
Uses OpenAI's API for cloud-based summarization with models like GPT-4.
Setup:
- Click the OpenAI provider card
- Click Settings (gear)
- Enter your API Key in the password field:
- The field shows dots by default (like a password)
- Click the eye icon to reveal the key
- Click again to hide it
- Click Save
| Field | Required | Description |
|---|---|---|
| API Key | Yes | Your OpenAI API key (starts with sk-) |
Where to get an API key: Sign up at platform.openai.com and create a key in the API Keys section.
Groq (Cloud)
Ultra-fast inference powered by Groq's LPU hardware. Great for real-time summaries.
Setup:
- Click the Groq provider card
- Click Settings (gear)
- Enter your API Key in the password field
- Click Save
| Field | Required | Description |
|---|---|---|
| API Key | Yes | Your Groq API key |
Where to get an API key: Sign up at console.groq.com and create a key.
Ollama (Local)
Ollama runs large language models locally on your machine. No internet needed, no API key required.
Setup:
- Click the Ollama provider card to select it
- Click the Settings (gear) icon on the card
- The configuration modal opens with these fields:
| Field | Default | Description |
|---|---|---|
| Endpoint URL | http://localhost:11434 | Where your Ollama server is running |
| Model | (dropdown) | Select from available models on your server |
- If the endpoint is correct, click Refresh next to "Select Model"
- A spinner appears while loading models from your Ollama server
- Select a model from the dropdown (e.g., llama3.1, mistral)
- Click Save
Install Ollama First
You need Ollama installed and running before configuring it in Meetily. Install from ollama.com and pull a model (ollama pull llama3.1) before opening the configuration modal.
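If you want to confirm the server is up before opening the modal, Ollama lists its installed models over a small REST API at GET /api/tags. A minimal sketch, assuming the default endpoint:

```ts
// Check that Ollama is reachable and has models installed.
// Assumes the default endpoint http://localhost:11434; Ollama's REST API
// lists local models at GET /api/tags.
const res = await fetch("http://localhost:11434/api/tags");
if (!res.ok) throw new Error(`Ollama responded with ${res.status}`);
const { models } = (await res.json()) as { models: { name: string }[] };
console.log(models.map((m) => m.name)); // e.g. ["llama3.1:latest", "mistral:latest"]
```

If the list comes back empty, pull a model first, then click Refresh in Meetily.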
OpenRouter (Cloud)
A single API key to access models from OpenAI, Anthropic, Meta, and more. Great for flexibility.
Setup:
- Click the OpenRouter provider card
- Click Settings (gear)
- Enter your API Key in the password field
- Click Save
| Field | Required | Description |
|---|---|---|
| API Key | Yes | Your OpenRouter API key |
Where to get an API key: Sign up at openrouter.ai and create a key in your account settings.
Custom (OpenAI-Compatible)
Connect to any server that implements the OpenAI Chat Completions API - LM Studio, vLLM, text-generation-webui, or your own deployment.
Setup:
- Click the Custom provider card
- Click Settings (gear)
- Configure these fields:
| Field | Required | Example | Description |
|---|---|---|---|
| API Endpoint | Yes | http://localhost:1234/v1 | Base URL of your server (include /v1) |
| Model Name | Yes | llama-3.1-8b-instruct | Exact model identifier as the server knows it |
| API Key | No | lm-studio | Only if your server requires authentication |
- Click Save
Include /v1 in the URL
Meetily appends /chat/completions to your endpoint URL. Most servers expect the full path to be /v1/chat/completions, so include /v1 at the end of your URL.
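To see why the /v1 suffix matters, here is an illustrative sketch of the request that results from the example configuration above. The field values mirror the table; Meetily's internals may differ in detail:

```ts
// Illustrative request shape for a Custom provider. Values match the
// example configuration above; this is a sketch, not Meetily's real code.
const endpoint = "http://localhost:1234/v1";  // the API Endpoint field
const url = `${endpoint}/chat/completions`;   // Meetily appends this path

const res = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer lm-studio",        // only if your server needs a key
  },
  body: JSON.stringify({
    model: "llama-3.1-8b-instruct",           // the Model Name field, verbatim
    messages: [
      { role: "system", content: "Summarize the following meeting transcript." },
      { role: "user", content: "<transcript>" },
    ],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content); // the generated summary
```

Without /v1 in the endpoint field, the request would go to http://localhost:1234/chat/completions, which most servers reject with a 404.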
The Configuration Modal
When you click the Settings icon on a provider card, a configuration modal opens.
Modal Layout
| Section | What's there |
|---|---|
| Header | Provider name and icon |
| Fields | Provider-specific configuration inputs |
| Actions | Save and Close buttons |
API Key Field Behavior
For providers that need an API key (Claude, OpenAI, Groq, OpenRouter, Custom):
- Hidden by default - shows dots like a password field
- Eye icon - click to toggle visibility (show/hide the key)
- Autocomplete disabled - prevents browsers from auto-filling
- Placeholder text - shows sk-... for OpenAI
Model Dropdown (Ollama)
For Ollama, the model dropdown works differently from other providers:
- The dropdown is initially empty or shows the previously saved model
- Click Refresh to fetch available models from your Ollama server
- A spinner appears while loading: "Loading models..."
- Models appear in a scrollable dropdown list
- Select your preferred model
- The selection is saved when you click Save
If the Ollama server is unreachable, the refresh shows an error state.
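For reference, an unreachable server typically surfaces as a network error rather than an HTTP status. A sketch of that failure mode, using the same /api/tags call shown earlier (the error UI itself is Meetily's own):

```ts
// Demonstrates the failure mode behind the error state: a down server
// raises a network error instead of returning an HTTP response.
try {
  const res = await fetch("http://localhost:11434/api/tags", {
    signal: AbortSignal.timeout(3000), // don't wait forever on a dead server
  });
  console.log(res.ok ? "Ollama is up" : `Unexpected status: ${res.status}`);
} catch {
  console.error("Ollama unreachable - is `ollama serve` running?");
}
```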
Switching Providers
To switch your active provider:
- Open Preferences > Summary Model
- Click a different provider card
- The new provider is selected immediately (checkmark appears)
- If not yet configured, click Settings to set it up
- The next summary generation uses the new provider
Each provider's configuration is saved independently. Switching away from a provider and back restores all its settings - API keys, endpoints, and model selections are preserved.
How It Works
When you generate a summary, Meetily routes the request to your selected provider:
Transcript → Selected Provider → LLM Model → Summary
                   ↓
     ┌─────────────┼─────────────────────────┐
     ↓             ↓                         ↓
   Local       Cloud APIs                 Custom
(Built-in,  (Claude, OpenAI, Groq,    (any OpenAI-
  Ollama)    OpenRouter, Hosted)       compatible)

- Transcript is prepared: The full meeting transcript is formatted as a prompt
- Template applied: Your selected summary template defines the output structure
- Provider called: The request goes to your chosen provider's API endpoint
- Response parsed: The LLM's response is parsed and displayed as the AI summary
- Saved locally: The summary is stored in your local database
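In code terms, the flow reduces to one provider-agnostic call. The names below (Provider, generateSummary) are hypothetical - they illustrate the routing described above, not Meetily's actual internals:

```ts
// Conceptual sketch of the summary pipeline. The interface and function
// names are hypothetical; they map to the numbered steps above.
interface Provider {
  name: string;
  complete(prompt: string): Promise<string>; // wraps the provider's API call
}

async function generateSummary(
  transcript: string,
  template: string,
  provider: Provider,
): Promise<string> {
  // Steps 1-2: prepare the transcript and apply the summary template
  const prompt = `${template}\n\nTranscript:\n${transcript}`;
  // Step 3: route the request to whichever provider card is selected
  const response = await provider.complete(prompt);
  // Steps 4-5: the caller parses the response and saves it locally
  return response;
}
```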
Real-World Examples
Example 1: Setting up Ollama for fully local AI
"I want everything to run on my machine -no cloud, no internet, complete privacy."
Setup: Install Ollama → pull a model (ollama pull llama3.1) → open Meetily Preferences > Summary Model → select Ollama → Settings → verify endpoint is http://localhost:11434 → click Refresh → select llama3.1 → Save. All summaries now run locally.
Example 2: Adding OpenAI for higher quality summaries
"I want to use GPT-4 for the best possible meeting summaries."
Setup: Go to platform.openai.com and copy your API key → open Meetily Preferences > Summary Model → select OpenAI → Settings → paste your API key → Save. The next summary generation uses OpenAI's models.
Example 3: Connecting to LM Studio on your network
"I run LM Studio on a separate workstation with a powerful GPU. I want Meetily on my laptop to use it."
Setup: Start LM Studio's server on the workstation → open Meetily Preferences > Summary Model → select Custom → Settings → endpoint: http://192.168.1.50:1234/v1 → model: llama-3.1-8b-instruct → API key: leave empty → Save. Meetily sends transcripts to your workstation for summarization.
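Before saving, you can sanity-check the connection from the laptop. Most OpenAI-compatible servers, LM Studio included, list their loaded models at GET /v1/models; the address below matches the example:

```ts
// Sanity check from the laptop: list the models the workstation exposes.
// The IP and port match the example above; adjust them for your network.
const res = await fetch("http://192.168.1.50:1234/v1/models");
const { data } = (await res.json()) as { data: { id: string }[] };
console.log(data.map((m) => m.id)); // should include "llama-3.1-8b-instruct"
```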
Example 4: Quick summaries with Built-in AI
"I'm in a coffee shop with no internet and Ollama isn't installed. I just want a basic summary."
Setup: Open Preferences > Summary Model → select Built-in AI → no setup needed → generate a summary. The built-in model produces a basic summary instantly, right on your device.
Screenshots Guide
These are the key screens worth capturing for visual reference:
| # | What to capture | Why it helps |
|---|---|---|
| 1 | Provider cards grid - showing all available providers with one selected (checkmark) | Shows the provider selection interface |
| 2 | Ollama config modal - endpoint field, Refresh button, model dropdown | Shows the most common local provider setup |
| 3 | OpenAI config modal - API key field with eye icon | Shows cloud provider setup |
| 4 | Custom config modal - endpoint, model name, and API key fields | Shows the full custom configuration |
| 5 | Built-in AI info box - showing the "Offline AI runs locally" message | Shows the zero-config option |
| 6 | Model dropdown loading - spinner and "Loading models..." state | Shows what users see while models load |
| 7 | Provider info tooltip - info overlay describing a provider | Shows the inline help system |
| 8 | Selected vs unselected cards - side by side with checkmark highlight | Shows how selection works visually |