A tablet screen displays a portrait of Jeffrey Epstein next to a U.S. Department of Justice website page titled Epstein Library, February 11, 2026.
Véronique Tournier | AFP | Getty Images
A victim of notorious sexual predator Jeffrey Epstein filed a class action lawsuit against the Trump administration and Google on behalf of herself and other survivors, alleging that they wrongly disclosed and published personal information about the victims.
The suit, filed Thursday in U.S. District Court for the Northern District of California, where Google is headquartered, claims the Justice Department “outed” nearly 100 Epstein survivors between late 2025 and early 2026, and that even after the government admitted its error and withdrew the information, “online entities like Google continue to republish it, refusing victims’ pleas to remove it.”
With regard to Google, the lawsuit states that the company’s main search engine and its artificial intelligence summary feature, called AI Mode, were responsible for publishing victims’ personal information.
“Survivors now face renewed trauma,” the lawsuit says. “Strangers call them, email them, threaten their physical safety, and accuse them of conspiring with Epstein, when in fact they are Epstein’s victims.”
The complaint was filed by an Epstein victim who used the pseudonym Jane Doe.
After months of pressure, the DOJ earlier this year released more than 3 million additional pages of documents related to Epstein, including images and videos. Epstein killed himself in a New York City jail in August 2019, weeks after being arrested on federal child sex trafficking charges.
In taking on Google, the plaintiffs are testing the limits of a key safety net for internet companies and social media sites. Section 230 of the Communications Decency Act regulates internet speech and has long allowed major U.S. platforms to avoid liability for content displayed on their websites and apps.
With the explosion of AI-generated content and new controversies over the publication of non-consensual sexual images, including so-called deepfake porn, internet giants face a new challenge in defending that legal shield. Earlier this month, Google was sued in a wrongful death case by the father of a 36-year-old man who alleged that the company’s Gemini chatbot convinced his son to attempt a “mass casualty attack” and ultimately die by suicide.
The lawsuit from Epstein’s survivors accuses Google of “knowingly” promoting harassment through its design by hosting information about victims, and says its AI Mode feature “is not a neutral search index.” The complaint comes after jury verdicts this week, against Meta and in a case involving Google’s YouTube, that concluded online platforms are failing to adequately police their sites for content that could harm real lives.
New Mexico Attorney General Raul Torrez, who led his state’s case against Meta, told CNBC this week that “there is a clear possibility that these cases will lead Congress to re-examine Section 230 and, if not eliminate it, then dramatically modify it.”
The latest lawsuit claims that Google’s AI-generated content reveals personal information about victims, saying AI Mode responded to queries seeking such details.
The complaint alleges that the government has failed in the past to force tech platforms to remove content, allowing victims’ information to be exposed.
“As part of this response, generated repeatedly across multiple platforms and on different devices, Google’s AI Mode included Plaintiff’s full name, displayed her full email address, and generated a hypertext link that allowed anyone to send an email directly to Plaintiff with the click of a button,” the lawsuit says.
Representatives for Google and the Trump administration did not immediately respond to requests for comment.
— CNBC’s Dan Mangan and Jonathan Vanian contributed to this report.
