Provider Configuration

Configure summary model providers directly in the app: set up Ollama, OpenAI, Claude, Groq, OpenRouter, and custom endpoints, and manage API keys and models.

Last updated: March 25, 2026

TL;DR

Meetily lets you configure your AI providers directly in the app, with no config files or terminal commands needed. Browse provider cards, enter API keys, set endpoint URLs, and select models from a dropdown. Supports 8 providers: Built-in AI, Hosted AI, Claude, OpenAI, Groq, Ollama, OpenRouter, and Custom (OpenAI-compatible). Find it in Meeting Details > Preferences > Summary Model.

Provider Configuration

Meetily supports 8 AI providers for generating meeting summaries. Each provider has different strengths: Ollama runs locally for privacy, Claude and OpenAI offer powerful cloud models, Groq delivers ultra-fast inference, OpenRouter gives access to models from multiple providers, and Custom connects to any OpenAI-compatible server you control. This page covers summary model providers. For transcription engine settings, see Retranscription.

Think of it like setting up email accounts: you add a provider, enter credentials, and pick a model. Once configured, Meetily uses that provider whenever you generate a summary.

Configure providers in Meeting Details > Preferences > Summary Model.


Quick Start

  1. Open any meeting from the sidebar
  2. Click the Preferences icon (gear) in the top-right toolbar
  3. Select the Summary Model tab
  4. Browse the provider cards, then click one to select it
  5. Click the Settings icon (gear) on the provider card to configure it
  6. Enter your API key, endpoint URL, or model selection
  7. Click Save and you're ready to generate summaries

```
Want to use a different AI model?
  → Open Preferences > Summary Model
  → Click a provider card
  → Click Settings (gear icon)
  → Enter credentials and pick a model
  → Save, done!
```

Available Providers

Provider Cards

When you open the Summary Model tab, you see a grid of provider cards:

| Provider | Type | Configuration needed |
|---|---|---|
| Built-in AI | Local | None (works offline) |
| Hosted AI | Cloud | None (pre-configured) |
| Claude | Cloud | API key |
| OpenAI | Cloud | API key |
| Groq | Cloud | API key |
| Ollama | Local | Endpoint URL + model selection |
| OpenRouter | Cloud | API key |
| Custom | Any endpoint | Endpoint URL + model name + optional API key |

Click any card to select that provider. The selected card shows a checkmark and is highlighted in blue.

Provider Info

Hover over any provider card to see action buttons. Click the info icon (circle with "i") for a tooltip describing what the provider does and when to use it.


Configuring Each Provider

Built-in AI (Local)

Runs a small AI model directly inside Meetily. No setup required.

Setup:

  1. Click the Built-in AI provider card
  2. That's it; no configuration needed

The configuration modal shows an info box: "Offline AI runs locally, no API key required."

Best for quick summaries when you don't have Ollama or cloud access set up.

Hosted AI

A pre-configured cloud service provided by Meetily.

Setup:

  1. Click the Hosted AI provider card
  2. No configuration needed; it's pre-configured

Claude (Cloud)

Uses Anthropic's Claude models for high-quality, nuanced meeting summaries.

Setup:

  1. Click the Claude provider card
  2. Click Settings (gear)
  3. Enter your API Key in the password field:
    • The field shows dots by default (like a password)
    • Click the eye icon to reveal the key
    • Click again to hide it
  4. Click Save

| Field | Required | Description |
|---|---|---|
| API Key | Yes | Your Anthropic API key |

Where to get an API key: Sign up at console.anthropic.com and create a key in the API Keys section.

OpenAI (Cloud)

Uses OpenAI's API for cloud-based summarization with models like GPT-4.

Setup:

  1. Click the OpenAI provider card
  2. Click Settings (gear)
  3. Enter your API Key in the password field:
    • The field shows dots by default (like a password)
    • Click the eye icon to reveal the key
    • Click again to hide it
  4. Click Save

| Field | Required | Description |
|---|---|---|
| API Key | Yes | Your OpenAI API key (starts with sk-) |

Where to get an API key: Sign up at platform.openai.com and create a key in the API Keys section.
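
Optionally, you can sanity-check the key before pasting it into Meetily. The snippet below is a generic check against OpenAI's public /v1/models endpoint, run on its own outside Meetily; it simply confirms the key authenticates:

```python
# Optional sanity check for an OpenAI API key, run outside Meetily.
import requests

API_KEY = "sk-your-key-here"  # placeholder: substitute your real key

resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()  # raises on a 401 if the key is invalid
print("Key works; models available:", len(resp.json()["data"]))
```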

Groq (Cloud)

Ultra-fast inference powered by Groq's LPU hardware. Great for real-time summaries.

Setup:

  1. Click the Groq provider card
  2. Click Settings (gear)
  3. Enter your API Key in the password field
  4. Click Save

| Field | Required | Description |
|---|---|---|
| API Key | Yes | Your Groq API key |

Where to get an API key: Sign up at console.groq.com and create a key.

Ollama (Local)

Ollama runs large language models locally on your machine. No internet needed, no API key required.

Setup:

  1. Click the Ollama provider card to select it
  2. Click the Settings (gear) icon on the card
  3. The configuration modal opens with these fields:

| Field | Default | Description |
|---|---|---|
| Endpoint URL | http://localhost:11434 | Where your Ollama server is running |
| Model | (dropdown) | Select from available models on your server |

  4. If the endpoint is correct, click Refresh next to "Select Model"
  5. A spinner appears while loading models from your Ollama server
  6. Select a model from the dropdown (e.g., llama3.1, mistral)
  7. Click Save

Install Ollama First

You need Ollama installed and running before configuring it in Meetily. Install from ollama.com and pull a model (ollama pull llama3.1) before opening the configuration modal.
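
If you want to confirm the server is reachable before clicking Refresh, you can list the pulled models yourself. This is a quick standalone check against Ollama's standard /api/tags endpoint:

```python
# List the models pulled on a local Ollama server via its /api/tags endpoint.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()  # fails if Ollama is not running on this endpoint
for model in resp.json()["models"]:
    print(model["name"])  # e.g. "llama3.1:latest"
```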

OpenRouter (Cloud)

A single API key to access models from OpenAI, Anthropic, Meta, and more. Great for flexibility.

Setup:

  1. Click the OpenRouter provider card
  2. Click Settings (gear)
  3. Enter your API Key in the password field
  4. Click Save

| Field | Required | Description |
|---|---|---|
| API Key | Yes | Your OpenRouter API key |

Where to get an API key: Sign up at openrouter.ai and create a key in your account settings.

Custom (OpenAI-Compatible)

Connect to any server that implements the OpenAI Chat Completions API: LM Studio, vLLM, text-generation-webui, or your own deployment.

Setup:

  1. Click the Custom provider card
  2. Click Settings (gear)
  3. Configure these fields:

| Field | Required | Example | Description |
|---|---|---|---|
| API Endpoint | Yes | http://localhost:1234/v1 | Base URL of your server (include /v1) |
| Model Name | Yes | llama-3.1-8b-instruct | Exact model identifier as the server knows it |
| API Key | No | lm-studio | Only if your server requires authentication |

  4. Click Save

Include /v1 in the URL

Meetily appends /chat/completions to your endpoint URL. Most servers expect the full path to be /v1/chat/completions, so include /v1 at the end of your URL.
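
To confirm a server is wired up correctly before configuring it in Meetily, you can send a test request to that same path yourself. The endpoint and model name below are illustrative (a local LM Studio default); substitute your own values:

```python
# Smoke-test an OpenAI-compatible server at the path Meetily will call.
import requests

BASE_URL = "http://localhost:1234/v1"   # note the /v1 suffix
MODEL = "llama-3.1-8b-instruct"         # the exact name your server expects

resp = requests.post(
    f"{BASE_URL}/chat/completions",     # full path: /v1/chat/completions
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Reply with one word."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```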


The Configuration Modal

When you click the Settings icon on a provider card, a configuration modal opens.

| Section | What's there |
|---|---|
| Header | Provider name and icon |
| Fields | Provider-specific configuration inputs |
| Actions | Save and Close buttons |

API Key Field Behavior

For providers that need an API key (Claude, OpenAI, Groq, OpenRouter, Custom):

  • Hidden by default: shows dots like a password field
  • Eye icon: click to toggle visibility (show/hide the key)
  • Autocomplete disabled: prevents browsers from auto-filling
  • Placeholder text: shows sk-... for OpenAI

Model Dropdown (Ollama)

For Ollama, the model dropdown works differently from other providers:

  1. The dropdown is initially empty or shows the previously saved model
  2. Click Refresh to fetch available models from your Ollama server
  3. A spinner appears while loading: "Loading models..."
  4. Models appear in a scrollable dropdown list
  5. Select your preferred model
  6. The selection is saved when you click Save

If the Ollama server is unreachable, the refresh shows an error state.


Switching Providers

To switch your active provider:

  1. Open Preferences > Summary Model
  2. Click a different provider card
  3. The new provider is selected immediately (checkmark appears)
  4. If not yet configured, click Settings to set it up
  5. The next summary generation uses the new provider

Each provider's configuration is saved independently. Switching away from a provider and back restores all its settings: API keys, endpoints, and model selections are preserved.
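
Conceptually, you can picture this as one settings record per provider plus a pointer to the active one; switching only moves the pointer. A rough illustration of that idea (not Meetily's actual storage format; all field names are hypothetical):

```python
# Illustrative model of per-provider settings; not Meetily's real schema.
settings = {
    "active": "ollama",
    "providers": {
        "ollama": {"endpoint": "http://localhost:11434", "model": "llama3.1"},
        "openai": {"api_key": "sk-your-key-here"},  # kept even while inactive
    },
}

def switch_provider(name: str) -> None:
    """Change the active provider without touching stored credentials."""
    settings["active"] = name

switch_provider("openai")  # Ollama's endpoint and model stay saved
```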


How It Works

When you generate a summary, Meetily routes the request to your selected provider:

```
Transcript → Selected Provider → LLM Model → Summary

     ┌────────────┼────────────────────────────┐
     ↓            ↓                            ↓
  Local        Cloud APIs                  Custom
  (Built-in,   (Claude, OpenAI, Groq,      (any OpenAI-
   Ollama)      OpenRouter, Hosted)         compatible)
```

  1. Transcript is prepared: The full meeting transcript is formatted as a prompt
  2. Template applied: Your selected summary template defines the output structure
  3. Provider called: The request goes to your chosen provider's API endpoint
  4. Response parsed: The LLM's response is parsed and displayed as the AI summary
  5. Saved locally: The summary is stored in your local database
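
Put together, the flow looks roughly like the sketch below. This is a simplified illustration, not Meetily's actual code; it shows the call against an OpenAI-compatible endpoint for concreteness, and every name in it is hypothetical:

```python
# Simplified sketch of the five steps above; all names are illustrative.
import requests

def generate_summary(transcript: str, template: str, provider: dict) -> str:
    # Steps 1-2: format the transcript and template into a prompt
    prompt = f"{template}\n\nTranscript:\n{transcript}"

    # Step 3: route the request to the selected provider's endpoint
    resp = requests.post(
        f"{provider['endpoint']}/chat/completions",
        headers={"Authorization": f"Bearer {provider.get('api_key', '')}"},
        json={
            "model": provider["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()

    # Step 4: parse the response into the summary text
    summary = resp.json()["choices"][0]["message"]["content"]

    # Step 5: the caller saves the returned summary to the local database
    return summary
```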

Real-World Examples

Example 1: Setting up Ollama for fully local AI

"I want everything to run on my machine -no cloud, no internet, complete privacy."

Setup: Install Ollama → pull a model (ollama pull llama3.1) → open Meetily Preferences > Summary Model → select Ollama → click Settings → verify the endpoint is http://localhost:11434 → click Refresh → select llama3.1 → Save. All summaries now run locally.

Example 2: Adding OpenAI for higher quality summaries

"I want to use GPT-4 for the best possible meeting summaries."

Setup: Go to platform.openai.com and copy your API key → open Meetily Preferences > Summary Model → select OpenAI → click Settings → paste your API key → Save. The next summary generation uses OpenAI's models.

Example 3: Connecting to LM Studio on your network

"I run LM Studio on a separate workstation with a powerful GPU. I want Meetily on my laptop to use it."

Setup: Start LM Studio's server on the workstation → open Meetily Preferences > Summary Model → select Custom → click Settings → endpoint: http://192.168.1.50:1234/v1 → model: llama-3.1-8b-instruct → API key: leave empty → Save. Meetily sends transcripts to your workstation for summarization.

Example 4: Quick summaries with Built-in AI

"I'm in a coffee shop with no internet and Ollama isn't installed. I just want a basic summary."

Setup: Open Preferences > Summary Model → select Built-in AI → no setup needed → generate a summary. The built-in model produces a basic summary instantly, right on your device.


Screenshots Guide

These are the key screens worth capturing for visual reference:

| # | What to capture | Why it helps |
|---|---|---|
| 1 | Provider cards grid: all available providers with one selected (checkmark) | Shows the provider selection interface |
| 2 | Ollama config modal: endpoint field, Refresh button, model dropdown | Shows the most common local provider setup |
| 3 | OpenAI config modal: API key field with eye icon | Shows cloud provider setup |
| 4 | Custom config modal: endpoint, model name, and API key fields | Shows the full custom configuration |
| 5 | Built-in AI info box: the "Offline AI runs locally" message | Shows the zero-config option |
| 6 | Model dropdown loading: spinner and "Loading models..." state | Shows what users see while models load |
| 7 | Provider info tooltip: info overlay describing a provider | Shows the inline help system |
| 8 | Selected vs unselected cards: side by side with checkmark highlight | Shows how selection works visually |

Frequently Asked Questions

Which provider should I choose?
If privacy matters most, start with Ollama (local, free) or Built-in AI (no setup). If you want the best quality summaries and don't mind cloud, use Claude or OpenAI. For the fastest inference, try Groq. For access to many models through one key, use OpenRouter. If you run your own AI server, use Custom (OpenAI-Compatible).

Do I need an internet connection?
Only for cloud providers (Claude, OpenAI, Groq, OpenRouter, Hosted AI). Local providers (Ollama, Built-in AI) work completely offline after initial setup. Custom depends on whether your server is local or remote.

Where are my API keys stored?
API keys are stored locally on your machine in the app's database. They are never sent to Meetily's servers, only to the specific provider you configured. Keys are hidden by default in the UI.

Does the selected provider apply to all meetings?
The selected provider applies globally to all meetings. However, when retranscribing a specific meeting, you can choose a different provider for that one operation without changing your default.

What happens if Ollama isn't running?
The Refresh button will fail to load models, and summary generation will show an error. Make sure Ollama is running (check with 'ollama list' in the terminal) before trying to generate summaries.

Do I need a Pro license to use these providers?
No. All 8 providers are available without a Pro license. Select any provider card and configure it to get started.

Are my settings kept when I switch providers?
Yes. Each provider's settings are saved independently. Switch between providers freely; your API keys, endpoints, and model selections are preserved and restored when you switch back.

Why isn't my custom endpoint working?
The most common issue is a missing '/v1' at the end of the URL. Meetily appends '/chat/completions' to your URL, so the full path must be '/v1/chat/completions'. Also check that the server is running and the port is correct.

How do I see which models my Ollama server has?
Open the Ollama configuration modal (click Settings on the Ollama card) and click the Refresh button next to the model dropdown. Meetily queries your Ollama server for available models and populates the dropdown.

Which model should I pick for the best summaries?
For Ollama, models with 7B+ parameters work best; Llama 3.1 8B Instruct is a popular choice. Smaller models (3B and below) may produce shorter summaries. For OpenAI, GPT-4o gives the best results. For Claude, Sonnet 4.5 balances quality and speed. Groq is fastest with Llama 3.3 70B.

Ready to get started?

Download Meetily and start transcribing your meetings locally with full privacy.

Have questions? Join our GitHub community.