Meta Platforms, the multinational conglomerate behind Facebook and Instagram, is facing a significant new legal challenge in its home state of California. Santa Clara County filed a lawsuit earlier this week alleging that the corporation knowingly profits from fraudulent advertising on its platforms. The suit claims that scam ads generate approximately $7 billion in annual revenue for the company, which continues to dominate the global social media landscape.
This litigation adds to a growing mountain of legal pressure on the tech giant.
According to Santa Clara County officials, Meta "facilitates and monetizes" deception through its current advertisement moderation protocols. The lawsuit follows a landmark ruling in March that found the company intentionally harmed young users with addictive design features. Meta's financial performance remains robust, with revenues exceeding $200 billion in 2025, yet its ethical practices are increasingly under fire. The Consumer Federation of America has raised similar concerns, stating that Meta's handling of fraudulent actors violates essential consumer protection laws.
The core of the allegation rests on Meta's internal thresholds for banning suspicious marketers. A 2025 investigation by Reuters revealed that the company's system bans advertisers only when it is at least 95 percent certain that fraud is being committed. For suspected scammers who fall below this certainty threshold, Meta reportedly charges a premium fee to allow the advertisements to remain active. According to the legal filing, this policy indicates a preference for revenue over consumer safety.
Meta's sophisticated artificial intelligence tools are accused of actively targeting "vulnerable consumers" with highly deceptive content. The reported scams include fraudulent financial products, cryptocurrency schemes, and celebrity impersonations designed to solicit money. Other ads promote purported cures for incurable diseases or ineffective nutritional supplements. The lawsuit argues that the company's AI helps these bad actors identify and reach the individuals most likely to fall for such tactics.
