Azure OpenAI's training default
Microsoft's Azure OpenAI data privacy documentation is unusually clear-cut for a cloud LLM service. Customer prompts, completions, embeddings, and uploaded training data are:
- Not available to other Azure customers.
- Not available to OpenAI or other Azure Direct Model providers.
- Not used by model providers to improve their models or services.
- Not used to train any generative AI foundation models without your permission or instruction.
- Not used to improve Microsoft or third-party products or services without your permission or instruction.
Fine-tuned Azure OpenAI models are available exclusively for the customer that created them. The inference models are stateless, so prompts and completions are not stored in the model.
When Meetily users select an Azure OpenAI deployment as their summary provider via BYOK (bring your own key), requests go to the Azure-hosted endpoint under your own Azure subscription, and the defaults described above apply.
Retention
Azure OpenAI runs an abuse-monitoring system that may sample a subset of prompts and completions for review when its automated systems flag potentially abusive content. Sampled data is stored in a per-resource abuse-monitoring data store, isolated by customer, with a 30-day retention window. Human reviewers (authorized Microsoft employees, located in the EEA for EEA-deployed resources) access this data only under strict request-ID-based queries with just-in-time approval.
Stateful features (Responses API, Assistants Threads, Stored completions, file uploads, fine-tuning) carry their own retention semantics that you opt into when you use those features. Data stored for those features lives at rest in the Foundry resource in your Azure tenant, in the same geography as the resource, encrypted with AES-256 and optionally with a customer-managed key.
Zero data retention (modified abuse monitoring)
Azure OpenAI offers a documented path to disable abuse-monitoring data storage entirely, called modified abuse monitoring. Eligible customers apply through a Microsoft form, and approval is granted for sensitive use cases where the 30-day monitoring store would be incompatible with the customer's compliance posture.
After approval:
- Prompts and completions are not stored for human review.
- Automated review may still run at request time without storing the data.
- You can verify the off state via the Azure portal JSON view or Azure CLI: the ContentLogging capability appears as false only when monitoring storage is disabled.
This is one of the most explicit and customer-verifiable ZDR paths in the cloud LLM market.
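The CLI check can be scripted rather than eyeballed in the portal. A minimal sketch, assuming the JSON shape returned by `az cognitiveservices account show` (a `properties.capabilities` array of name/value string pairs; verify the exact shape against your API version):

```python
import json

def content_logging_enabled(account_json: dict) -> bool:
    """Return True if abuse-monitoring storage (ContentLogging) is on.

    Assumes the capability-list shape returned by
    `az cognitiveservices account show`; capability values are strings
    in the ARM JSON.
    """
    capabilities = account_json.get("properties", {}).get("capabilities", [])
    for cap in capabilities:
        if cap.get("name") == "ContentLogging":
            return cap.get("value", "").lower() != "false"
    # Capability absent: assume the default (monitoring storage enabled).
    return True

# Trimmed sample response for a resource with modified abuse monitoring:
sample = json.loads("""
{
  "properties": {
    "capabilities": [
      {"name": "ContentLogging", "value": "false"}
    ]
  }
}
""")
print(content_logging_enabled(sample))  # prints False -> storage disabled
```

In practice you would feed the output of `az cognitiveservices account show --name <resource> --resource-group <rg>` into this check as part of a compliance audit script.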
Geographic processing
Azure OpenAI offers three deployment types with distinct processing-location semantics:
- Standard: Prompts and completions processed within the customer-specified geography.
- DataZone (US or EU): Processed within the named data zone; data at rest stays in the customer-designated geography.
- Global: Processed in any geography where the model is deployed; data at rest stays in the customer-designated geography.
For EU customers needing strict EU residency, Standard or DataZone EU deployments keep both processing and storage within Europe, and reviewers for any abuse-monitoring traffic are EEA-based.
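The deployment types above correspond to SKU names chosen at deployment time. A minimal sketch of encoding the residency trade-off (the SKU names `Standard`, `DataZoneStandard`, and `GlobalStandard` follow Azure's naming; the selection policy itself is illustrative, not Microsoft's):

```python
# Processing scope per Azure OpenAI deployment SKU. Data at rest always
# stays in the customer-designated geography; only where prompts and
# completions are *processed* differs between the three.
PROCESSING_SCOPE = {
    "Standard": "customer-specified geography",
    "DataZoneStandard": "named data zone (US or EU)",
    "GlobalStandard": "any geography where the model is deployed",
}

def pick_sku(strict_eu_residency: bool, allow_global_processing: bool) -> str:
    """Illustrative policy: prefer the widest processing scope permitted."""
    if strict_eu_residency:
        # Standard or DataZone EU keep processing inside Europe.
        return "DataZoneStandard"
    if allow_global_processing:
        return "GlobalStandard"
    return "Standard"

sku = pick_sku(strict_eu_residency=True, allow_global_processing=True)
print(sku, "->", PROCESSING_SCOPE[sku])
# DataZoneStandard -> named data zone (US or EU)
```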
How Meetily uses Azure OpenAI
Meetily sends Azure OpenAI requests with your own API key to your Azure-hosted endpoint. Your subscription's deployment type, region, and abuse-monitoring configuration apply. The transcript text is sent over TLS to your Azure OpenAI endpoint for summarization, and the response is returned to Meetily and stored locally on your device. Audio is never transmitted to Azure at any point.
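What that request looks like on the wire can be sketched without sending anything. A minimal sketch of building the chat-completions URL and headers for an Azure OpenAI deployment (the resource name, deployment name, and API version below are placeholders; the path follows Azure OpenAI's `/openai/deployments/{deployment}/chat/completions` convention):

```python
def azure_chat_url(resource: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI chat-completions endpoint URL.

    The request goes over TLS to the customer's own endpoint; the
    `api-key` header carries the customer's key (BYOK).
    """
    return (
        f"https://{resource}.openai.azure.com"
        f"/openai/deployments/{deployment}/chat/completions"
        f"?api-version={api_version}"
    )

# Placeholder names -- substitute your own resource and deployment.
url = azure_chat_url("my-resource", "gpt-4o-summaries", "2024-06-01")
headers = {"api-key": "<your-key>", "Content-Type": "application/json"}
body = {
    "messages": [
        {"role": "system", "content": "Summarize the meeting transcript."},
        {"role": "user", "content": "<transcript text>"},
    ]
}
print(url)
# https://my-resource.openai.azure.com/openai/deployments/gpt-4o-summaries/chat/completions?api-version=2024-06-01
```

Only the transcript text ever appears in `body`; there is no audio field in this request, which is what "audio is never transmitted" means in practice.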
For Meetily users in regulated industries (healthcare, legal, financial services), the combination of Azure OpenAI's modified abuse monitoring plus Meetily's local transcription is one of the strongest cloud-summary paths available without going fully local. For zero retention by construction, switch to local Ollama and keep the entire pipeline on-device.