
TL;DR
Are AI meeting assistants safe? Not all of them. In 2025, Otter.ai faced a class action lawsuit for recording without consent, Fireflies.ai was sued for biometric data collection, and Chapman University banned Read AI over security concerns. Cloud meeting tools pose real privacy risks: data stored on third-party servers, AI training on your conversations, and recordings without proper consent. For truly safe meeting transcription, local processing (like Meetily) keeps your data on your device.
Over 62% of organizations now use AI meeting assistants. The market has exploded to $3.5 billion in 2025 and is projected to reach $34 billion by 2035.
But as adoption grows, so do the lawsuits, university bans, and privacy scandals.
If you're using cloud-based meeting AI, you need to understand the risks. And if you're evaluating options, this guide will help you separate the safe tools from the problematic ones.
The Wake-Up Call: 2025's AI Meeting Assistant Lawsuits
Otter.ai Class Action (August 2025)
In August 2025, Justin Brewer filed a class action lawsuit against Otter.ai in California federal court. The allegations are serious:
"The company's AI-powered transcription service... by default does not ask meeting attendees for permission to record and fails to alert participants that recordings are shared with Otter to improve its artificial intelligence systems."
Key allegations from the complaint:
- Otter's "OtterPilot" auto-joins meetings without consent from all participants
- Meeting data is used to train Otter's AI models without explicit permission
- The service outsources consent obligations to customers rather than obtaining consent directly
- Violations of the Electronic Communications Privacy Act (ECPA), Computer Fraud and Abuse Act (CFAA), and California Invasion of Privacy Act (CIPA)
The case (Brewer v. Otter.ai Inc., Case No. 5:25-cv-06911) is still pending, with Otter's response deadline extended to November 2025.
California Is a Two-Party Consent State
California law requires consent from ALL parties before recording a conversation. Otter's approach of seeking consent only from the meeting host, not from all participants, directly conflicts with this requirement.
Fireflies.ai Lawsuit (December 2025)
Just months later, Fireflies.ai faced its own legal challenge. Illinois resident Katelin Cruz filed a complaint alleging:
"Fireflies.AI's meeting assistant records, analyzes, transcribes, and stores the unique vocal characteristics (i.e., 'voiceprints') of every meeting participant... including people who never created a Fireflies account, never agreed to its terms of service, and never gave written consent."
The lawsuit alleges violations of Illinois' Biometric Information Privacy Act (BIPA):
- Collection of biometric voice data without consent
- No published policy on data retention or destruction
- Users report the bot continues joining meetings even after they delete their accounts
Chapman University Bans Read AI (August 2025)
Academic institutions are taking action too. In August 2025, Chapman University's Information Systems department issued a security notice:
"After investigation, the use of Read AI is prohibited due to security, privacy, and institutional data risks."
The university cited concerns about:
- Data leaving institutional control
- Lack of adequate data protection
- Security vulnerabilities in free AI tools
This wasn't an isolated decision. Organizations across healthcare, education, and government are increasingly restricting cloud-based meeting AI.
Why Cloud Meeting AI Poses Privacy Risks
The lawsuits above aren't random; they reflect systemic problems with how cloud meeting tools handle your data.
1. Your Conversations Train Their AI
Most cloud meeting tools use your transcripts to improve their models. From Otter.ai's own documentation:
"Otter uses a proprietary method to de-identify user data before training our models."
Even "de-identified" data creates risks. The Otter.ai lawsuit alleges(opens in new tab) that participants weren't informed their conversations would train AI-a significant consent violation.
2. Data Stored on Third-Party Servers
When you use cloud meeting AI, your recordings live on someone else's servers:
| Provider | Data Location | Your Control |
|---|---|---|
| Otter.ai | AWS (US) | Limited |
| Fireflies | Cloud (US) | Limited |
| Read AI | Cloud | Limited |
| Meetily | Your device | Complete |
This matters because:
- 29% of AI breaches stem from third-party SaaS platforms
- The average cost of an AI-related breach is $5.72 million
- Organizations with unmonitored AI face breach costs $670,000 higher than those with controls
3. Auto-Join Without Consent
A particularly concerning pattern: cloud meeting tools that automatically join meetings without explicit permission from all participants.
From user complaints about Fireflies:
"By default, Fireflies joins every calendar event and shares meeting notes with all attendees... Trying to shut it up/switch it off is like trying to remove a deer tick from your leg."
This isn't just annoying; it potentially violates wiretapping laws in the 12 states that require all-party consent:
- California
- Florida
- Illinois
- Maryland
- Massachusetts
- Michigan
- Montana
- New Hampshire
- Pennsylvania
- Washington
- Connecticut (civil liability)
- Delaware
4. Shadow AI Is Exploding
Gartner predicts 40% of data breaches will involve "shadow AI" by 2027: unauthorized AI tools used by employees without IT approval.
Meeting AI is particularly vulnerable because:
- Employees install browser extensions without oversight
- Calendar integrations automatically activate
- IT may not know which meetings are being recorded
40% of organizations have already experienced an AI-related privacy incident, and 97% of breached organizations lacked adequate AI access controls.
Cloud vs. Local: A Security Comparison
| Feature | Local AI (Meetily, recommended) | Cloud AI (Otter, Fireflies) |
|---|---|---|
| Data location (where recordings are stored) | Your device only | Third-party servers |
| AI training (your data used to improve models) | Never | Yes (de-identified) |
| Consent required (who must approve recording) | You decide | Host only (often) |
| Breach exposure (risk if provider is hacked) | None (no cloud) | High |
| Offline capability (works without internet) | Full | None |
| Compliance control (HIPAA/GDPR by design) | Complete | Limited |
| Audit transparency (can verify what happens to data) | Open source | Proprietary |
Who Should Avoid Cloud Meeting AI?
Based on the legal landscape and compliance requirements, these organizations should think twice about cloud-based meeting transcription:
Healthcare Organizations (HIPAA)
Meeting recordings often contain Protected Health Information (PHI). While Otter.ai achieved HIPAA compliance in July 2025 and Fireflies launched "Fireflies for Healthcare", these require:
- Enterprise plans ($25-35+/user/month)
- Business Associate Agreements (BAAs)
- Trust in third-party data handling
Local processing eliminates third-party risk entirely: no BAA is needed when PHI never leaves your device.
Legal Firms (Privilege Concerns)
Legal experts warn that allowing meeting AI vendors to access transcripts could waive attorney-client privilege:
"Allowing note-taker vendors to access or use transcripts for their own purposes could provide grounds for a waiver of privilege, which is problematic in the event of a legal dispute."
Financial Services (SOX, PCI, SEC)
Financial regulations require strict data controls and auditable records of where information flows, and compliance teams are actively researching how AI meeting assistants handle data security and regulatory requirements.
Cloud tools create audit complexity that local processing avoids.
Education (FERPA, Institutional Policies)
Following Chapman University's lead, many institutions are implementing AI tool review processes. Cloud meeting AI often fails these reviews.
European Organizations (GDPR)
Fireflies processes data on US servers, creating GDPR transfer issues. The GDPR requires adequate protection for data leaving the EU, and US cloud services face ongoing legal uncertainty on that point.
GDPR Tip: Consent Isn't Always Enough
Even with consent, GDPR's data minimization principle may conflict with meeting AI that records, stores, and analyzes everything. Local processing with user-controlled retention is the safest approach.
How to Evaluate Meeting AI Security
Before choosing a meeting assistant, ask these questions:
1. Where Does Data Go?
- Best: Stays on your device (local processing)
- Acceptable: Your company's cloud (self-hosted)
- Risky: Vendor's cloud servers
2. Is It Bot-Free?
Visible meeting bots create friction and often trigger the consent issues at the heart of current lawsuits. Bot-free options:
- Meetily - Captures system audio locally
- Granola - Mac-native, bot-free (but cloud processing)
- Platform-native - Zoom AI, Microsoft Copilot (no external bot)
3. What's the Consent Model?
- Does it require consent from ALL participants?
- Can it auto-join without explicit approval?
- What happens when someone says "don't record"?
4. Is the Code Auditable?
Open source tools let you verify exactly what happens to your data. Proprietary tools require trust.
| Tool | Open Source | Auditable |
|---|---|---|
| Meetily | Yes (MIT) | Full |
| Otter.ai | No | No |
| Fireflies | No | No |
| Whisper (model) | Yes | Full |
5. What's the Data Retention Policy?
The Fireflies lawsuit specifically cites lack of published retention/destruction policies as a BIPA violation.
The Local-First Alternative
If the risks above concern you, local AI processing solves them architecturally: not through policy promises, but through technical design.
How local meeting AI works:
- Audio captured from your device's system audio
- Transcription runs on your CPU/GPU using models like OpenAI Whisper (see the sketch after this list)
- Summaries generated locally (or with your chosen API)
- All data stays on your machine
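To make the transcription step concrete, here is a minimal sketch of fully local transcription using the open-source openai-whisper package. The file name meeting_audio.wav and the "base" model size are illustrative assumptions for the example, not a description of Meetily's actual pipeline.

```python
# Minimal local-transcription sketch.
# Assumes: pip install openai-whisper, with ffmpeg available on PATH.
# "meeting_audio.wav" and the "base" model size are illustrative choices only.
import whisper


def transcribe_locally(audio_path: str) -> str:
    # Model weights download once; inference then runs entirely on your own CPU/GPU.
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"]


if __name__ == "__main__":
    transcript = transcribe_locally("meeting_audio.wav")
    # The transcript exists only in local memory and local files
    # unless you explicitly choose to send it somewhere.
    print(transcript)
```

Nothing in this flow touches a third-party server, which is exactly the property the privacy benefits below depend on.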
What this means for privacy:
- No server breach can expose your meetings (data isn't there)
- No consent ambiguity (you control the recording)
- No AI training on your conversations (data never leaves)
- Complete GDPR/HIPAA compliance (no third-party processors)
Try Local Meeting AI
Meetily is free, open source, and processes everything on your device. No cloud, no bots, no compromise.
Download Free
Best Practices for Any Meeting AI
Whether you choose cloud or local, follow these guidelines:
For Organizations
- Create an approved tools list - Don't let employees install random meeting AI
- Review privacy policies - Especially data training and retention clauses
- Establish consent protocols - How do you notify participants?
- Consider jurisdiction - 12 states require all-party consent
- Audit regularly - What AI tools have calendar access? (one way to check is sketched below)
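One hypothetical way to tackle that last audit item: in Google Workspace, the Admin SDK Directory API can list the OAuth tokens a user has granted to third-party apps, which lets you flag anything holding Calendar scopes. The sketch below assumes google-api-python-client is installed and that admin_creds holds admin credentials authorized for the directory user-security scope; Microsoft 365 offers comparable reporting through the Graph API.

```python
# Hypothetical audit sketch: list third-party apps a user has granted Calendar access to.
# Assumes google-api-python-client is installed and admin_creds are Google Workspace
# admin credentials authorized for the admin.directory.user.security scope.
from googleapiclient.discovery import build


def apps_with_calendar_access(admin_creds, user_email: str) -> list[str]:
    directory = build("admin", "directory_v1", credentials=admin_creds)
    # tokens().list returns every OAuth token the user has granted, with its scopes.
    tokens = directory.tokens().list(userKey=user_email).execute().get("items", [])
    return [
        token.get("displayText", token.get("clientId", "unknown app"))
        for token in tokens
        if any("calendar" in scope.lower() for scope in token.get("scopes", []))
    ]
```

Run it for each user, or wire it into a scheduled job, and you have a first-pass inventory of which meeting AI tools are quietly watching your organization's calendars.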
For Individuals
- Read the privacy policy - Especially the AI training section
- Check auto-join settings - Disable if possible
- Announce recordings - Even if the tool doesn't
- Review data retention - Delete recordings you don't need
- Consider local options - For sensitive conversations
For Meetings with External Parties
- Always disclose AI recording - Legal and ethical requirement
- Offer to disable for sensitive topics - Build trust
- Don't rely on bot detection - Some tools don't make bots obvious
- Get explicit consent - Verbal isn't enough in some states
Key Takeaways
1. Cloud meeting AI faces serious legal challenges: Otter.ai and Fireflies.ai both face 2025 lawsuits over consent and data practices
2. Chapman University banned Read AI over privacy concerns; more institutional bans are likely
3. 12 US states require all-party consent for recording, creating legal risk for auto-join meeting bots
4. 40% of organizations have experienced AI-related privacy incidents; 97% lacked adequate controls
5. Local AI processing (like Meetily) eliminates third-party data exposure by keeping everything on your device
6. For HIPAA, GDPR, and legal compliance, local processing is architecturally safer than cloud promises
7. Always disclose AI recording to meeting participants; it's both legally and ethically required
Conclusion
The AI meeting assistant market is at an inflection point. Rapid adoption (62% of organizations) has outpaced privacy protections, leading to the lawsuits and bans of 2025.
The pattern is clear:
- Cloud tools that auto-join meetings are legally vulnerable
- Data used for AI training without explicit consent violates privacy expectations
- Institutions are starting to ban tools that can't prove adequate data protection
If your meetings contain sensitive information (client discussions, health data, legal strategy, financial details), cloud meeting AI creates unnecessary risk.
Local processing isn't just a privacy preference; it's becoming a compliance requirement.
Tools like Meetily that keep data on your device solve these problems architecturally. No cloud breach can expose your meetings. No ambiguous consent model. No third-party AI training on your conversations.
The lawsuits will be decided in court. But you don't have to wait to protect your organization's most sensitive conversations.
For more on meeting privacy, see our guides on GDPR-compliant meeting transcription and HIPAA meeting requirements.
Privacy-First Meeting AI
Meetily is free, open source, and keeps your meeting data where it belongs: on your device.
Download Free


