As artificial intelligence reshapes the legal landscape, New York’s recent regulation of AI-generated performers in advertising signals a broader reckoning with synthetic content. But while legislators debate disclosure requirements for digital avatars, a more immediate threat to legal practice is emerging from an unexpected source: AI-powered smart glasses that can silently record, transcribe, and analyze every conversation in their field of view.
The Invisible Recording Device
Modern AI smart glasses represent a fundamental shift in how recording technology intersects with professional practice. Unlike traditional cameras or smartphones that require deliberate activation, devices like Meta’s Ray-Bans and the controversial Halo X glasses can passively capture and transcribe conversations throughout the day, creating permanent, searchable records of discussions that participants may never know were captured. Many of these devices lack obvious recording indicators, featuring only tiny LED lights easily missed in normal interaction or, in some cases, no external warning at all.
This technological evolution arrives at a critical moment. On December 11, 2025, Governor Kathy Hochul signed legislation requiring advertisers to conspicuously disclose when AI-generated synthetic performers appear in commercials, effective June 2026. The law, strongly supported by SAG-AFTRA, imposes civil penalties starting at one thousand dollars for violations. Yet the same day, a White House Executive Order sought to minimize AI regulation at the state level, creating immediate tension between state action and federal policy goals.
These parallel developments underscore a central tension: while synthetic content in advertising requires disclosure, the real-time capture of authentic human interaction by AI devices operates in a legal gray zone where decades-old wiretapping statutes struggle to keep pace with technology.
The Two-Party Consent Crisis
At the heart of the smart glasses dilemma lies a patchwork of state recording laws that creates a compliance nightmare for legal professionals. Twelve states, including California, Florida, Illinois, Maryland, Massachusetts, Connecticut, Montana, New Hampshire, Pennsylvania, and Washington, require all parties to consent to audio recording of confidential communications. In these jurisdictions, recording a conversation without explicit permission from every participant can constitute a felony, carrying criminal penalties including potential jail time as well as significant civil liability.
The stakes are particularly high in professional settings. Consider an attorney wearing AI glasses that automatically transcribe client meetings without explicit consent from all parties. Even in one-party consent states, recording individuals who reasonably expect privacy can breach privacy and surveillance laws. The passive nature of AI transcription amplifies this risk exponentially: unlike traditional recording that requires deliberate action, these devices can continuously capture conversations, transforming casual hallway discussions, privileged attorney-client communications, and confidential settlement negotiations into permanent digital records.
Recent legal analysis highlights the acute dangers for specific scenarios. Sales representatives wearing AI glasses that automatically transcribe client meetings, managers using glasses with AI note-taking features during performance reviews or disciplinary meetings, medical professionals recording patient consultations, and OSHA inspectors using AI glasses to record workplace inspections without proper protocols all face substantial legal exposure. The federal Occupational Safety and Health Administration announced expanded deployment of AI-equipped smart glasses to safety inspectors in 2025, raising immediate privacy concerns about capturing employee conversations and activities without knowledge or consent.
Attorney-Client Privilege Under Siege
For the legal profession, AI note-taking capabilities present an existential threat to attorney-client privilege, the bedrock principle protecting confidential communications between lawyers and their clients. The privilege requires communications to remain confidential and limited to privileged persons. When AI systems enter the equation, fundamental questions arise: Does inputting privileged information into an AI platform constitute disclosure to a third party? Can privilege survive when AI providers retain, process, or potentially train their models on confidential legal communications?
The American Bar Association’s Formal Opinion 512, issued in July 2024, acknowledges that existing professional conduct rules must govern AI use, but provides limited specific guidance for the unique challenges posed by wearable AI devices. Multiple state bar associations have issued their own guidance, creating a complex compliance landscape that varies by jurisdiction.
The risk extends beyond recording. AI transcripts generated without consent or awareness create profound data minimization concerns, particularly as regulators increasingly emphasize limiting collection of personal information. Enterprise AI notetakers and transcription services often retain meeting recordings in their own cloud environments and use outputs to train their AI engines. Unless lawyers purchase licensed applications with appropriate data processing agreements, they risk exposing confidential client information to third parties, potentially destroying privilege entirely.
Courts have not yet directly addressed whether AI presence in privileged communications waives privilege, but analogous cases involving workplace email systems provide instructive precedent. When employees used company email to communicate with personal attorneys, courts found that placing messages in systems where third parties had access was equivalent to filing the information in company records, destroying any reasonable expectation of privacy. The parallels to AI systems with third-party access are unmistakable.
Compliance Strategies for the AI Era
Despite these formidable challenges, legal and compliance professionals can navigate the AI smart glasses landscape through proactive measures. Organizations should implement clear policies specifying when and where AI glasses with recording capabilities may be worn, with particular attention to sensitive locations like courtrooms, legislative buildings, attorney conference rooms, and areas where confidential business information is discussed.
Obtaining explicit verbal or written consent from all parties before activating recording features is non-negotiable. Consent banners of the kind used on video calls do not suffice for glasses: participants must affirmatively acknowledge both the recording and the AI transcription taking place. Geofencing or technical controls that automatically disable recording features in prohibited areas offer technological solutions to human compliance failures.
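At its simplest, such a geofencing control is a distance check against a list of prohibited zones. The sketch below is purely illustrative: the zone coordinates and function names are assumptions for demonstration, not any vendor's actual device API.

```python
import math

# Hypothetical prohibited zones: (name, latitude, longitude, radius in meters).
# Coordinates here are illustrative placeholders, not real policy data.
PROHIBITED_ZONES = [
    ("courthouse", 40.7143, -74.0027, 150.0),
    ("client_conference_rooms", 40.7410, -73.9897, 75.0),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def recording_allowed(lat, lon):
    """Return True only if the device is outside every prohibited zone."""
    for _name, zlat, zlon, radius in PROHIBITED_ZONES:
        if haversine_m(lat, lon, zlat, zlon) <= radius:
            return False
    return True
```

A production control would layer additional safeguards on top of this check, such as failing closed when location data is unavailable, but the core logic is a policy table plus a distance test.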
For legal professionals specifically, the requirements are more stringent. Attorneys must review AI provider terms of service to ensure inputs and outputs will not be used for training purposes and that data access will be strictly limited. Disabling conversation history and opting out of model improvement programs should be standard practice. Organizations should treat AI-generated summaries with the same confidentiality protections as any other privileged communication, restricting access to employees with genuine need-to-know and never sharing outputs with third parties.
Training becomes crucial. Employees must understand state-specific wiretapping laws, particularly when traveling or conducting interstate communications. The fact that a practice is legal in their home state offers no protection when recording participants located in all-party consent jurisdictions. The strictest applicable law always governs.
Conclusion
As New York moves forward with synthetic performer disclosure requirements, the gap between regulating artificial content and governing real-time AI capture of authentic human interaction grows wider. The advertising law acknowledges that synthetic performers can undermine the public’s ability to distinguish fact from fiction, yet the unregulated deployment of AI smart glasses threatens something more fundamental: the reasonable expectation of private conversation itself.
The legal profession stands at a crossroads. Technology offers genuine benefits: hands-free information access, enhanced productivity, and innovative ways to serve clients. But convenience cannot supersede the imperative of protecting confidentiality, privilege, and fundamental privacy rights. Until comprehensive federal standards emerge, and given the current administration’s preference for minimal regulation, such standards may be distant, professionals must navigate a complex patchwork of state laws with extreme caution.
AI smart glasses represent transformative technology with legitimate business value. Their adoption in legal settings, however, demands extraordinary care. The hidden legal minefield of two-party consent means that even inadvertent violations carry criminal penalties, civil liability, and the potential destruction of attorney-client privilege. In an era where synthetic performers require disclosure but authentic human recordings proceed in the shadows, the legal profession must lead by example in balancing innovation with the timeless obligation to protect confidential communications.
Ishwarya Dhube is a third-year BBA LLB student who combines academic rigor with practical experience gained through multiple legal internships. Her work spans various areas of law, allowing her to develop a comprehensive understanding of legal practice. Ishwarya specializes in legal writing and analysis, bringing both business acumen and hands-on legal experience to her work.
* Views are personal







