FTC Charges Content At Scale AI With False Accuracy Claims
The company allegedly marketed an AI detector as 98.3% accurate when its real performance on marketing content was barely better than a coin toss, misleading customers who paid $49 a month for the service.
Content At Scale AI sold an AI Content Detector claiming 98.3% accuracy in identifying AI-generated text. The FTC alleges this claim was false and unsubstantiated. The company used a publicly available AI model trained on academic texts and marketed it for detecting blog posts and marketing content without proper testing. For non-academic content, the tool’s accuracy was allegedly just 53.2%, barely better than random chance. Students, writers, and marketers who relied on the tool faced potential wrongful accusations and rejected work while paying $49 a month for the service.
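To put the alleged gap in concrete terms, the sketch below compares how many documents each accuracy level would misclassify. It is a minimal back-of-the-envelope illustration: the batch of 1,000 documents split evenly between human and AI text is a hypothetical assumption, while the accuracy percentages are the figures cited in the complaint.

```python
# Back-of-the-envelope comparison of misclassification counts.
# Assumption: a hypothetical batch of 1,000 documents, evenly split between
# human-written and AI-generated text. Percentages are those cited in the complaint.
n_docs = 1000

scenarios = {
    "Advertised accuracy (98.3%)": 0.983,
    "Alleged actual accuracy (53.2%)": 0.532,
    "Coin-toss baseline (50.0%)": 0.500,
}

for label, accuracy in scenarios.items():
    misclassified = round(n_docs * (1 - accuracy))
    print(f"{label}: ~{misclassified} of {n_docs} documents misclassified")

# Advertised accuracy (98.3%): ~17 of 1000 documents misclassified
# Alleged actual accuracy (53.2%): ~468 of 1000 documents misclassified
# Coin-toss baseline (50.0%): ~500 of 1000 documents misclassified
```

At 53.2% accuracy, roughly 468 of every 1,000 judgments on such a mix would be wrong, which is what the FTC means when it says the tool performed barely better than a coin toss.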
This case shows how claims about complex AI technology can mislead consumers when companies prioritize profit over product validation.
The Allegations: A Breakdown
| 01 | Content At Scale AI advertised its AI Content Detector as 98.3% accurate at identifying AI-generated text from ChatGPT, GPT-4, Claude, and Bard. The FTC alleges this claim was false and unsubstantiated for the marketing and plain language content the tool was sold to analyze. | high |
| 02 | The company used a publicly available AI model called RoBERTa-academic-detector, developed by Norwegian students for detecting machine-generated academic text. Content At Scale AI did not create, train, or fine-tune this model themselves. | high |
| 03 | The company claimed its detector was trained on blog posts, Wikipedia entries, and essays. In reality, the underlying AI model was trained exclusively on abstracts of scholarly articles, not the content types advertised. | high |
| 04 | Content At Scale AI did not conduct its own testing to verify the 98.3% accuracy claim for marketing and plain language text. They relied on test results from the original model developers, which measured performance on academic content only. | high |
| 05 | When tested on non-academic AI-generated content, the tool correctly identified AI text only 53.2% of the time. This performance is barely better than flipping a coin. | high |
| 06 | The company charged $49 per month for unlimited use of the AI Content Detector and premium features from September 2023 through August 2024. Customers paid for a service based on false performance claims. | high |
| 07 | Content At Scale AI marketed the tool to advertisers, marketers, students, and content creators who needed to distinguish human writing from AI-generated content. These users faced severe potential consequences including wrongful cheating accusations and rejected work. | high |
| 08 | The company attempted to explain away false positives by claiming the tool would flag robotic-sounding human writing as a feature to help users write more naturally. This framing obscured the detector’s fundamental inaccuracy. | medium |
| 01 | Content At Scale AI allegedly made false accuracy claims beginning in November 2022. The FTC did not bring its complaint until 2025, allowing the allegedly deceptive practices to continue for more than two years. | high |
| 02 | The rapid development of AI tools outpaced regulatory frameworks. No pre-market validation requirements existed to verify the company’s 98.3% accuracy claim before consumers were exposed to it. | medium |
| 03 | The complexity of AI technology created information asymmetry that benefited the company. Average consumers and businesses lacked the technical expertise to independently verify the advertised performance metrics. | medium |
| 04 | The FTC operates reactively in emerging technology sectors, addressing harms after consumers have already been affected rather than preventing deceptive claims before market entry. | medium |
| 05 | No clear regulatory standards existed defining what constitutes adequate substantiation for AI performance claims. Companies could make bold assertions about complex technology with minimal accountability until formal complaints were filed. | medium |
| 06 | The complaint does not detail actions against specific individuals within Content At Scale AI who made decisions leading to the deceptive claims. Corporate liability without executive accountability can limit deterrent effects. | medium |
| 01 | Content At Scale AI capitalized on widespread anxiety about distinguishing AI-generated content from human writing. The company exploited this fear by offering a solution with grossly exaggerated capabilities. | high |
| 02 | The company chose to use a free, publicly available AI model and market it with unverified claims rather than invest in developing or properly testing their own model for the advertised use cases. | high |
| 03 | Content At Scale AI launched a $49 monthly subscription service without conducting independent testing to verify the tool worked for marketing and plain language content. This prioritized rapid revenue generation over product integrity. | high |
| 04 | The 98.3% accuracy figure served as a powerful marketing tool to attract customers in a competitive market. The company allegedly inflated this number despite knowing or having reason to know it did not reflect actual performance. | high |
| 05 | The company misrepresented the training data for its AI model, claiming it was trained on blog posts and Wikipedia when it was actually trained on academic abstracts. This deception made the product appear suitable for uses it was not designed for. | high |
| 06 | Content At Scale AI profited from every subscription sold under false premises while users who trusted these claims faced negative consequences including wrongful accusations, rejected work, and wasted time. | high |
| 07 | The company used marketing language, such as ‘Our AI checker now achieves higher levels of accuracy’, that suggested deeper involvement in model development than it actually had, even though it had not created or trained the underlying model. | medium |
| 01 | Customers who subscribed to the $49 monthly service from September 2023 through August 2024 paid for a tool whose actual accuracy was substantially lower than advertised. These users suffered direct financial losses. | high |
| 02 | Marketers and advertisers who relied on the AI Content Detector to refine written marketing content wasted time and resources making edits based on inaccurate feedback from a tool that performed at chance levels. | medium |
| 03 | Content flagged incorrectly as AI-generated could have impacted users’ search engine rankings, academic grading, and reader perceptions. Users faced negative economic consequences when acting on unreliable output. | medium |
| 04 | Freelance writers and small marketing agencies whose content was unjustly rejected based on false positives from the detector faced damage to their livelihoods and professional reputations. | medium |
| 05 | The deceptive practices eroded marketplace trust in AI detection tools generally. This erosion of trust can stifle adoption of legitimate and beneficial AI tools, harming the broader technology ecosystem. | low |
| 06 | The FTC expended public resources to investigate and prosecute this case, a taxpayer cost incurred to address harm caused by the company’s allegedly deceptive practices. | low |
| 01 | Students who submitted legitimate human-written work faced the risk of being wrongly accused of cheating based on false positives from an inaccurate AI detector. These accusations could result in academic penalties and damaged educational records. | high |
| 02 | Journalists could have their articles rejected for publication when the faulty detector incorrectly flagged human-written content as AI-generated. This undermined professional credibility and career advancement. | high |
| 03 | Content creators in academic and professional communities who depend on authentic work saw their integrity questioned by a tool that performed no better than random chance. This eroded trust within these specialized communities. | medium |
| 04 | Educational institutions and businesses that adopted the AI Content Detector made flawed policy decisions based on a tool with grossly misrepresented capabilities. This led to unjust enforcement actions against innocent individuals. | medium |
| 05 | The widespread availability of false AI detection claims contributed to an environment of misinformation about AI capabilities. This made it harder for communities to navigate and adapt responsibly to generative AI technologies. | medium |
| 06 | Writers and marketers targeted by Content At Scale AI formed a community of users who believed they were protecting themselves from AI-related risks. Instead, they were deceived into paying for a tool that created new risks through inaccurate assessments. | medium |
| 01 | The FTC complaint represents an initial step toward accountability, but the extended timeline from November 2022 to 2025 allowed Content At Scale AI to profit from allegedly false claims for over two years before formal action. | high |
| 02 | Consent order settlements in similar cases often do not require admission of wrongdoing. This allows companies to avoid formally acknowledging deceptive practices while resolving legal challenges. | medium |
| 03 | Financial penalties in technology cases can be viewed by larger companies as merely a cost of doing business rather than a significant deterrent. The potential profits from deceptive practices may outweigh eventual fines. | medium |
| 04 | The regulatory time lag inherent in investigative and legal processes benefits companies engaging in misconduct. They continue to profit while regulatory wheels slowly turn, with consumers harmed throughout the delay. | medium |
| 05 | The complaint names Workado, LLC as the respondent but does not detail actions against specific individuals responsible for the deceptive claims. Personal accountability for executives is absent, limiting the deterrent effect. | medium |
| 06 | Monetary relief in such cases may not fully compensate all affected users for financial losses, reputational damage, or academic consequences. The harm done often exceeds what settlements can remedy. | medium |
| 07 | The system penalizes the corporate entity while individuals who orchestrated the misconduct face few personal consequences. This structural weakness allows executives to make decisions that prioritize profit over accuracy with limited personal risk. | medium |
| 01 | Content At Scale AI prominently displayed ‘98% accuracy’ and ‘98.3% accuracy’ claims on its website and in Google Ads and YouTube videos. The FTC alleges this figure significantly overstated actual performance. | high |
| 02 | The company claimed its AI Detector was ‘Trained on blog posts, Wikipedia, essays, and more’ when the underlying model was actually trained on abstracts of scholarly articles. This created a false impression of suitability for advertised uses. | high |
| 03 | Marketing language like ‘Our AI checker now achieves higher levels of accuracy’ and ‘Our AI Detector can predict’ implied the company was more deeply involved in developing the model than it actually was; in reality, it used a publicly available model without modification. | high |
| 04 | The company marketed itself as ‘one of the most trusted’ detectors that ‘goes deeper than a generic AI content detector.’ These trust-building phrases were not substantiated by evidence of superior performance. | medium |
| 05 | Content At Scale AI attempted to explain away false positives by framing them as a helpful feature. The marketing claimed the tool would flag robotic-sounding human writing to help users write more naturally, obscuring fundamental accuracy problems. | medium |
| 06 | The company used confident, positive language throughout its marketing materials to create a narrative of a highly effective tool. This masked the alleged underlying deficiencies in the detector’s actual capabilities. | medium |
| 07 | By presenting a simple, impressive accuracy number, Content At Scale AI exploited the technical complexity of AI that makes it difficult for average users to scrutinize performance claims. The black box nature of the technology served as a shield. | medium |
| 01 | Content At Scale AI generated revenue from $49 monthly subscriptions by leveraging impressive but allegedly false claims of 98.3% accuracy. The company transferred wealth from customers to itself based on misleading information. | high |
| 02 | The company chose to use a free, publicly available AI model and market it without proper testing. This strategy minimized costs while maximizing revenue through bold marketing claims rather than genuine innovation. | high |
| 03 | Individuals and entities behind Content At Scale AI stood to gain financially from every subscription sold under false premises. Meanwhile, users faced negative consequences including wrongful accusations and rejected work. | medium |
| 04 | The wealth generated by the company through alleged deceptive practices does not reflect value provided to customers. Instead, it represents value extracted under false pretenses from students, writers, and marketers. | medium |
| 05 | Content At Scale AI profited by exploiting anxieties about AI-generated content in education and marketing. The company monetized fears about content authenticity and SEO penalties with a tool that allegedly failed to deliver its core promise. | medium |
| 06 | The pressure to generate profit incentivized the company to cut corners on product validation. This approach prioritizes immediate financial gain over ethical conduct and consumer protection, reflecting broader patterns in unregulated capitalism. | medium |
| 01 | Content At Scale AI allegedly began advertising false accuracy claims in November 2022. The FTC did not bring its complaint until 2025, creating a window of more than two years in which the company operated without regulatory intervention. | high |
| 02 | The paid subscription model launched in September 2023 and continued until August 2024. Each month the $49 subscription remained active represented continued revenue generated under what the FTC deems false pretenses. | high |
| 03 | During the extended period before regulatory action, consumers and businesses made decisions based on false claims. Users potentially suffered wrongful accusations, rejected work, and wasted subscription fees throughout this delay. | medium |
| 04 | The speed of business innovation and marketing in the technology sector outpaced regulatory response. Investigations require time for evidence gathering and legal processes have inherent delays that benefit companies engaged in questionable practices. | medium |
| 05 | Even if eventually penalized, profits accumulated during the period of non-compliance can be substantial. For some companies, these profits may outweigh eventual fines or settlements, creating an incentive structure that rewards delay. | medium |
| 06 | The longer the enforcement delay, the more entrenched a product becomes in the market. The extended timeline made it harder to unwind the effects of misleading information that had spread to thousands of users. | medium |
| 01 | Content At Scale AI allegedly marketed an AI Content Detector with 98.3% accuracy when its real performance on marketing content was 53.2%, barely better than random chance. This represents a fundamental breach of trust with customers. | high |
| 02 | The case illustrates systemic failures in how modern economies allow misleading claims about complex technology to proliferate. Companies can exploit information asymmetries when selling sophisticated AI tools to non-expert consumers. | high |
| 03 | Students faced wrongful cheating accusations, writers had work rejected, and all paying subscribers allegedly received a defective product. The human cost of false AI detection extends beyond financial losses to reputational and academic harm. | high |
| 04 | The extended timeline from the start of alleged false advertising in 2022 to regulatory action in 2025 shows how delays benefit companies engaged in deception. Reactive enforcement allows harm to accumulate before intervention occurs. | medium |
| 05 | This case underscores the urgent need for greater transparency in AI marketing, rigorous substantiation requirements for performance claims, and stronger regulatory oversight to prevent deceptive profit-seeking in emerging technology sectors. | medium |
| 06 | The alleged deception by Content At Scale AI is not an isolated incident but reflects systemic pressures in unregulated AI markets. When profit motives are prioritized over product validation, consumers become vulnerable to sophisticated technological fraud. | medium |
Direct Quotes from the Legal Record
“Respondent has claimed that its AI Content Detector could predict with 98.3% accuracy whether text was generated by AI technologies like ChatGPT, GPT4, Claude, or Bard.”
💡 This is the central claim on which the entire FTC case is built; the agency alleges it deceived thousands of customers.
“When evaluating a mix of human-created and AI-generated content, the model correctly distinguished AI from human content at a rate substantially lower than 98.3%. For non-academic, AI-generated content, the tool was found to be accurate around half the time, performing barely better than a coin toss.”
💡 This shows the dramatic gap between what was promised and what the tool actually delivered to customers.
“The AI model used by Respondent was not created, trained, or fine-tuned by them; instead, it was a publicly available model called ‘RoBERTa-academic-detector,’ developed by students in Norway for detecting machine-generated academic text.”
💡 The company took a free, publicly available tool and marketed it with false claims rather than developing their own validated solution.
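To illustrate how little engineering can sit behind such a product, the sketch below shows the general pattern of wrapping an off-the-shelf classifier with the Hugging Face transformers library. It is a hypothetical, minimal example: the model identifier is a placeholder, not a verified reference to the exact checkpoint named in the complaint, and the output labels depend on whichever model is loaded.

```python
# Minimal sketch: wrapping a publicly available text classifier with the
# Hugging Face `transformers` library. The model ID below is a placeholder
# (assumption), standing in for any off-the-shelf RoBERTa-based detector.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="some-org/roberta-academic-detector",  # placeholder model ID
)

sample_text = "Search engine optimization remains a central part of any content strategy."
result = detector(sample_text)[0]

# The label set and scores depend on the loaded model, e.g.:
# {'label': 'machine-generated', 'score': 0.97}
print(result)
```

Validating such a wrapper on the kinds of marketing and plain language text customers actually submit is precisely the step the FTC alleges the company skipped.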
“The developers of this model trained it on abstracts of scholarly articles, not the ‘blog posts and Wikipedia entries’ that Content At Scale AI claimed its detector was trained on.”
💡 This shows the tool’s training data did not match what the company advertised, making the detector ill-suited to the uses it was marketed for.
“Content At Scale AI allegedly did not conduct its own testing to verify the 98.3% accuracy claim for the type of content its users—primarily advertisers, marketers, and students—would typically submit, which is marketing and other plain language text.”
💡 The company charged money for a product without verifying it worked for the purposes it was sold for.
“AI-detected writing can influence search engine rankings, academic grading, and reader perceptions… such as a student wrongly accused of cheating or a journalist’s article rejected for publication.”
💡 This shows the real human cost of false AI detection, extending far beyond financial losses to academic and professional harm.
“From at least November 2022 for the free tool, with paid subscriptions from September 2023… $49 per month for unlimited use of the AI Content Detector and other premium features.”
💡 Customers paid substantial ongoing fees for a service that allegedly performed no better than chance on the content they needed analyzed.
“Our AI checker is one of the most trusted and goes deeper than a generic AI content detector.”
💡 The company used trust-building language to market a tool whose performance was allegedly grossly misrepresented.
“In fact, it’s so good, it will even flag human-written text as AI if it sounds robotic. (Let’s face it not all of us are great writers. In this case, the AI Detector will help you write more naturally, with better readability!)”
💡 This shows how the company attempted to frame a fundamental flaw as a helpful feature, obscuring the detector’s inaccuracy.
“For detecting AI-generated non-academic text… the tool was accurate around half the time… merely 53.2% of the time.”
💡 This specific figure shows the detector performed barely better than random chance for its marketed use cases.
“Proposed Respondent neither admits nor denies any of the allegations in the Complaint, except as specifically stated in the Decision and Order.”
💡 This standard consent order language allows the company to resolve legal challenges without formally acknowledging wrongdoing.
“The Proposed Respondent is Workado, LLC, formerly known as Content At Scale AI, an Arizona limited liability company with its principal office or place of business at 15333 N. Pima Road, Suite 260, Scottsdale, Arizona 85260.”
💡 This identifies the specific corporate entity responsible for the alleged deceptive practices.
Frequently Asked Questions
The FTC’s website hosts a press release announcing an order that requires Workado to back up its artificial intelligence detection claims, including the 98% accuracy figure: https://www.ftc.gov/news-events/news/press-releases/2025/04/ftc-order-requires-workado-back-artificial-intelligence-detection-claims