TL;DR: A new federal class action lawsuit accuses Google of quietly switching on its Gemini AI “Smart features” across Gmail, Chat, and Meet around October 10, 2025, so the company could read, analyze, and store years of private messages and attachments by default.
The lawsuit says users never gave informed consent, even as Google’s settings screen still framed this AI access as something people had to choose to turn on. The case describes a system that turns intimate details of daily life (finances, health, religion, politics, children’s activities) into corporate data for Google’s own profit.
Keep reading for how this happened, who it affects, and what it reveals about power, profit, and privacy in today’s economy.
AI Turned Everyday Email Into a Data Mine
The lawsuit lays out a simple core claim.
Around October 10, 2025, Google flipped a switch and turned on Gemini-based “Smart features” for every user’s Gmail, Chat, and Meet accounts. According to the complaint, this change allowed Google’s AI to scan, track, and store the entire history of users’ emails, attachments, chats, and video conversations without their knowledge or consent. The setting sat buried in “Data privacy,” but the toggle still read like an invitation: “When you turn this setting on, you agree…” even though the company had already activated it.
The scale is enormous. The lawsuit cites estimates that Gmail has about 1.8 billion users worldwide in 2025, including over 130 million Americans. Many of those Americans rely on Gmail, Chat, and Meet as daily infrastructure for work, family life, and basic communication.
The named plaintiff (the person filing the lawsuit), an Illinois resident, describes years of use of these password-protected services. The complaint says his account reveals financial records, employment information, religious and political activity, medical information, the identities of family and friends, and detailed habits such as shopping, exercise, social life, and involvement in his children’s activities.
Gemini gained access to all of this once the company switched the setting on.
Timeline of the Alleged Misconduct
| Date / Period | Event | Impact on Users (as alleged) |
|---|---|---|
| 2012 | Google announces a policy to combine information from different products and treat each person as “a single user across all our products.” | Creates a foundation for unified data profiles that can merge information from multiple services into one commercial picture of each person. |
| 2018 | A vulnerability in Google+ exposes personal account information. Reporting says the company delayed disclosure to protect its public image and later shut down Google+. | Shows a past example where user data was exposed, and disclosure lagged behind the problem. |
| Oct. 10, 2025 (on or about) | Google allegedly turns on “Smart features” across Gmail, Chat, and Meet accounts without notice. | Gemini AI starts scanning and analyzing emails, attachments, chats, and meetings by default, according to the legal complaint. |
| Nov. 5, 2025 | A tech journalist publicly describes Gemini in Gmail as “downright creepy,” noting its ability to infer intimate personal details from years of email history after being granted access. | Offers an early public glimpse of how deeply Gemini can reach into personal histories once access is enabled. |
| Nov. 11, 2025 | A federal class action is filed in the Northern District of California on behalf of U.S. users whose Gmail, Chat, and Meet communications were allegedly tracked by Gemini. | Millions of users are proposed as class members whose private messages were turned into AI-readable data streams. |
This system converts everyday communication into raw material for corporate advantage. The harm sits at the level of trust: people believed they controlled whether AI scanned their inboxes, while the alleged configuration reversed that control behind the scenes.
How Google’s AI Access Works and Why Users Feel Betrayed
Google’s own interface becomes a key piece of evidence. The “Smart features” control appears under “Data privacy” and describes a choice: if you turn this setting on, you agree to let Gmail, Chat, and Meet use your email, chat, and video content to personalize your experience. The lawsuit says that, as of October 10, 2025, the setting was already on for users, even while the language continued to suggest a voluntary opt-in.
The complaint describes Gemini’s reach through a tech journalist’s experience. Once given access to sixteen years of emails, Gemini could answer questions about the writer’s first crush, specific friendships from 2009 and 2010, favorite video games, and even personality flaws inferred from years of correspondence. The article quoted in the legal complaint ends with a clear reaction: the writer chose to remove Gemini’s access to personal email after seeing how deep the AI could go.
Now apply that level of insight to over 130 million Americans and 1.8 billion users worldwide. The lawsuit argues that the system gives Google the ability to reconstruct each person’s financial life, health concerns, religious and political activity, contacts, habits, and family routines through automated analysis. Users believe they are writing to a doctor, a teacher, a union organizer, a therapist, a partner. The complaint says Gemini turns each of those exchanges into data points in a vast behavioral model.
The legal filing stresses the emotional and social stakes. It points to widely shared public expectations: large majorities of adults in national polls want control over who gets information about them, what is collected, and whether companies watch or listen without permission. Many people regularly clear cookies and browser histories or take other steps to avoid tracking. The lawsuit uses this research to show that users treat privacy protections as real safeguards, not decorative settings.
Profit-Maximization Over Privacy: Data as a Core Business Model
The complaint describes Google as a “sophisticated” company whose business model centers on commercializing personal data gathered from internet-connected tools and services. Gemini AI extends this model into the most intimate corners of users’ lives.
In neoliberal capitalism, revenue growth depends on finding new sources of extraction. For a tech platform, that means more data, finer profiling, and deeper integration into daily routines. Email, chat, and video become sources of behavioral insight. Gemini’s ability to reconstruct personal histories from inbox archives turns each old message into a new asset.
The lawsuit alleges that Gemini’s unauthorized access lets Google conduct “unlimited analysis” of private communications and cross-reference them with existing user profiles. That analysis creates “monetizable insights” into social, professional, and other relationships.
This is monetizing harm. The harm is the loss of control over intimate communication. The revenue comes from using that loss as a data stream to refine advertising, product design, and predictive models. The more exposed the user, the richer the profile. The more intense the surveillance, the more valuable the dataset.
Under a profit-first system, corporate leaders receive strong incentives to treat privacy as a variable cost. If clandestine data capture produces significant commercial advantage and legal accountability lags, the rational business calculation tilts toward risk. The complaint frames Gemini’s deployment as exactly that kind of move.
Regulatory Loopholes, Corporate Ethics, and Neoliberal Capitalism
The lawsuit leans on a stack of laws: the California Invasion of Privacy Act, the California Computer Data Access and Fraud Act, the federal Stored Communications Act, California’s constitutional right to privacy, and the common-law doctrine of intrusion upon seclusion. Together, these provisions aim to guard confidential communications and limit unauthorized computer access.
The need to invoke so many overlapping laws illustrates a key feature of neoliberal capitalism. Legislators pass privacy rules. Courts recognize privacy torts. Yet large technology firms still enjoy wide practical freedom to design systems that test the limits of consent and authorization. Enforcement arrives case by case, years after product decisions. During that window, companies can collect enormous troves of data.
The lawsuit calls Google’s behavior “egregious breaches of social norms,” “highly offensive,” and “malicious, oppressive, and willful,” claiming the company intentionally intruded into private affairs through AI tracking and did so in conscious disregard of users’ privacy rights.
This alleged pattern reflects a broader structural reality. Neoliberal policy favors light regulation, self-policing, and market solutions. In such a system, corporations wield the resources to hire lawyers, lobbyists, and engineers who can map each legal border and push up against it. Regulators and courts respond slowly, often after damage spreads across millions of people. The legal system functions as late-stage clean-up, not real-time guardrail.
Economic Fallout and Public Costs of Data Exploitation
The complaint focuses on privacy harms rather than line-item financial losses, yet the economic dimension is clear. When a company with hundreds of millions of users builds massive behavioral profiles, the value flows to the firm. The exposure risk flows to the public.
The lawsuit describes the danger of data breaches and security vulnerabilities when companies aggregate sensitive information. It points to the 2018 Google+ incident, where a vulnerability exposed personal account information to outsiders. Reporting described in the complaint says the company delayed disclosure for months and only admitted the problem after the media discovered it.
This history matters. When Gemini channels years of private messages into centralized storage and analysis, the attack surface grows. A successful breach would not just expose usernames or basic demographics. It could surface medical conversations, financial details, political organizing, religious activities, and children’s schedules. Cleaning up such an incident would require public resources—law enforcement, regulators, courts—alongside personal costs for victims who must repair credit, shift routines, and live with heightened risk.
Neoliberal capitalism routinely transfers this kind of systemic risk onto individuals. Profit stays privatized. Fallout spreads outward.
Community and Family Impact: When Every Message Becomes Data
The plaintiff’s account illustrates how deeply Gemini’s access can reach inside households. According to the complaint, an AI system switched on without consent can learn:
- Financial information and records
- Employment information and records
- Religious affiliations and activities
- Political affiliations and activities
- Medical care and records
- Identities of family, friends, and other contacts
- Social, shopping, eating, and exercise habits
- The extent of involvement in children’s activities and the nature of those activities
This is not abstract “metadata.” These are the building blocks of moral, emotional, and civic life. Parents try to shield their children’s communications. Workers rely on private channels to navigate job searches or workplace conflicts. Patients use email to coordinate care. Organizers build campaigns through digital tools.
The complaint argues that users took specific steps to protect this information. They used password-protected accounts, two-factor authentication, and platform privacy settings. They relied on Google’s statements that they could adjust data privacy to control whether AI accessed Gmail, Chat, and Meet content. When a corporation overrides those settings, it erodes trust in digital infrastructure itself.
This erosion hits marginalized people first. Those facing discrimination, political retaliation, or surveillance already experience higher stakes around privacy. A system that quietly folds their communications into corporate AI experimentation amplifies existing inequality and deepens wealth disparity over time.
Legal Minimalism and the PR Machine
The complaint paints a picture of a corporation that embraces legal minimalism. It points to earlier public statements by Google’s Chief Privacy Officer, who told the U.S. Senate that users trust the company to keep personal information “confidential and under their control.” The lawsuit contrasts that language with the alleged reality of Gemini’s secret switch-on.
It also highlights Google’s messaging about privacy controls. In its policies, the company says users can adjust their privacy settings to control what data it collects and how it uses that information. The Gemini toggle appears to embody that promise. The lawsuit claims the opposite happened in practice: users who believed they were in control had their setting flipped on behind their backs.
This is legal minimalism at work. The company maintains polished privacy policies and testimony. It offers a visible setting labeled as a choice. Then, according to the lawsuit, it treats that choice as a branding layer while engineering the product so that data capture proceeds automatically.
In late-stage capitalism, this pattern is common. Corporations comply with the form of transparency—blog posts, dashboards, toggles—while undermining the substance of meaningful consent.
How Capitalism Exploits Delay and Complexity
The lawsuit describes conduct that began on or about October 10, 2025 and continued “to the present” at the time of filing in November. That timeline shows an alleged month-long period in which AI tracking ran by default before a lawsuit challenged it. For many users, the feature may still be active unless they have discovered the setting and turned it off on their own.
Time functions as a resource. A company can implement a privacy-invasive feature, harvest data while it operates, and then fight over legality in court later. The complaint seeks injunctions, destruction of data, and damages. Any eventual resolution will arrive long after the initial switch.
Corporate complexity adds another shield. Users rely on lengthy terms of service and privacy policies that reference artificial intelligence in broad language. The company’s description of AI centers on benign examples like translation and spam detection. The lawsuit argues that this framing does not match the depth of Gemini’s access to private communications.
In modern-day neoliberal systems, delay and complexity are strategic assets that let powerful corporations accumulate even more power. They allow profit from contested practices in the present while shifting accountability into a slow, technical future.
💡 Explore Corporate Misconduct by Category
Corporations harm people every day — from wage theft to pollution. Learn more by exploring key areas of injustice.
- 💀 Product Safety Violations — When companies risk lives for profit.
- 🌿 Environmental Violations — Pollution, ecological collapse, and unchecked greed.
- 💼 Labor Exploitation — Wage theft, worker abuse, and unsafe conditions.
- 🛡️ Data Breaches & Privacy Abuses — Misuse and mishandling of personal information.
- 💵 Financial Fraud & Corruption — Lies, scams, and executive impunity.
NOTE:
This website is facing massive headwinds in trying to procure lawsuits relating to corporate misconduct. We are being slammed by a quadruple whammy:
- The Trump regime's reversal of the laws & regulations meant to protect us means victims are no longer filing lawsuits over conduct that was previously illegal.
- Donald Trump's defunding of regulatory agencies has sharply decreased the frequency of enforcement actions. What's more, the quality of those enforcement actions has also plummeted.
- The GOP's insistence on cutting healthcare funding for millions of Americans in order to give their billionaire donors additional tax cuts has recently shut the government down. This government shutdown has further impaired the already-defunded agencies' ability to crack down on evil-doers. Donald Trump has since threatened to make these agency shutdowns permanent on account of them being "democrat agencies".
- My access to the LexisNexis legal research platform got revoked. This isn't related to Trump or anything, but it still hurt as I'm being forced to scrounge around public sources to find legal documents now. Sadge.
All four of these factors are severely limiting my ability to access stories of corporate misconduct.
Due to this, I have temporarily decreased the number of articles published each day from 5 down to 3, and I will also be publishing articles from previous years, as I was fortunate enough to download a butt load of EPA documents back in 2022 and 2023 to make YouTube videos with. This also means that you'll be seeing many more environmental violation stories going forward :3
Thank you for your attention to this matter,
Aleeia (owner and publisher of www.evilcorporations.com)
Also, can we talk about how ICE has a $170 billion annual budget, while the EPA (which protects the air we breathe and the water we drink) barely clocks $4 billion? Just something to think about.