
Ollama data retention policy

Ollama runs LLMs entirely on your local machine, so there is no remote retention period, no training on your prompts, and no policy URL beyond the open-source repo.

Quick policy snapshot

  • Default retention: None (runs locally)
  • Zero data retention available: Yes
  • Trains on API customer data by default: No

Ollama is a local runtime, not a service

Ollama is an open-source runtime that downloads pre-trained large language models and serves them through an HTTP endpoint bound to localhost. There is no Ollama-operated inference cloud, so the "data retention" framing that applies to OpenAI, Anthropic, or Google does not apply in the same shape.

When an application like Meetily calls Ollama for a summary, the request goes to a process on the same machine. No prompts or completions cross the network boundary unless the surrounding application explicitly forwards them.
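
For illustration, a minimal local request looks like the sketch below. It assumes a default install listening on 127.0.0.1:11434 and a model already pulled (here `llama3.1`, an arbitrary example name); nothing in this exchange leaves the loopback interface.

```sh
# Minimal local inference call against a default Ollama install.
# Assumes: Ollama is running on 127.0.0.1:11434 and "llama3.1" has already been pulled.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Summarize in one sentence: the team agreed to ship the beta on Friday.",
  "stream": false
}'
```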

Retention, training, and ZDR

  • Retention: None by default. Ollama itself does not persist prompts or completions to disk beyond the running context.
  • Training: Ollama does not train models. It serves models that were trained elsewhere by their respective publishers (Meta, Mistral AI, Alibaba, etc.).
  • Zero data retention: Achieved by construction - inference is local, so nothing is transmitted off the device.

What still leaves your machine

Running Ollama does not automatically prevent every cloud round-trip. Things that may still touch the network include:

  • The initial model download (`ollama pull <model>`), which fetches weights from registry.ollama.ai over HTTPS (see the sketch after this list).
  • Application-level telemetry from whatever calls Ollama (your responsibility to audit).
  • Any external integrations the surrounding app makes (cloud storage backups, CRM sync, etc.).
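
A rough way to sanity-check this split on a default install (model name is illustrative; the inspection commands are assumed to be available on Linux/macOS): the pull is the only step that needs the network, after which the server should be listening on loopback only.

```sh
# One-time network step: fetch model weights from registry.ollama.ai over HTTPS.
ollama pull llama3.1

# Afterwards, confirm the Ollama server listens on loopback only (the default).
ss -ltnp | grep 11434          # Linux
lsof -iTCP:11434 -sTCP:LISTEN  # macOS
```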

Why this is the preferred Meetily path

Meetily's transcription path is local-by-default. Pairing it with Ollama for summaries means the entire pipeline - audio capture, transcription, summary - stays on the device. This is the simplest answer to "does the summarization step have a retention policy I need to read?" because the answer becomes "no, there is no remote service in the loop."

For organizations subject to data-residency or processor-disclosure obligations, this path removes a class of compliance questions entirely. For everyone else, it is the lowest-friction way to get a private summary without managing API keys or reading vendor policies.

Last verified: May 11, 2026. Policy source: Ollama policy.

Frequently asked questions

Does Ollama retain my prompts or outputs?
No. Ollama is a local runtime - prompts, completions, and model weights stay on the machine running Ollama. There is no remote service in the loop, so there is no retention period to configure.
Does Ollama train models on my data?
No. Ollama does not train any models. It is a runtime that downloads pre-trained open-source models (Llama, Mistral, Qwen, etc.) from public sources and serves them locally. No prompts are sent anywhere by default.
Is zero data retention available with Ollama?
Yes, by construction. Because inference happens locally, no prompts or completions are transmitted off-device, which is the strongest form of zero data retention available.
Where is data stored when I use Ollama?
Model weights live in Ollama's local model directory on your machine. Prompts and completions are not persisted by Ollama itself beyond the running context - you control any logging in the application that calls Ollama.
How do I delete data from Ollama?
There is no remote dataset to delete. To remove models locally, run `ollama rm <model>`. Application-level logs (e.g., in Meetily) are managed by that application's own retention controls.
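
As a small illustration of local cleanup (the model name is an arbitrary example):

```sh
# Show models currently stored in the local model directory.
ollama list

# Remove a model's weights from disk.
ollama rm llama3.1
```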
Is Ollama compliant with HIPAA, GDPR, or SOC 2?
Ollama itself is a local runtime, not a service, so compliance applies to the deployment around it (your machine, your network, your application). Running summaries entirely on-device with Ollama eliminates many cross-border and processor-disclosure requirements that arise with cloud LLM APIs.
How does Meetily handle Ollama?
Meetily transcription is always 100% local. When you choose Ollama as your summary provider in `/choose-summary-model`, summaries also run on-device against a model you control. Transcripts and summaries never leave your machine on this path.
What if I want zero retention but already use a cloud LLM?
Switch your summary model to Ollama for fully-local summarization. Meetily's transcription path is unchanged - only the summarization endpoint moves from a cloud API to a local Ollama endpoint.
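
As a sketch of what that switch amounts to under the hood (default install and an illustrative model name assumed; Meetily's own setting is changed via `/choose-summary-model`): Ollama also exposes an OpenAI-compatible endpoint, so a client that previously posted to a cloud chat-completions URL can post to the local server instead.

```sh
# OpenAI-compatible chat endpoint served by a local Ollama install (no API key needed).
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Summarize this meeting transcript: ..."}]
  }'
```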

Run summaries locally with Meetily + Ollama

Meetily transcription is always 100% local. Pair it with Ollama and your meeting summaries never leave your machine either. No retention, no training, no policy to read.