Instagram Pushes Antisemitic Videos to Hundreds of Millions of Users, Report Finds
Silhouettes of mobile users are seen next to a screen projection of the Instagram logo in this picture illustration taken March 28, 2018. Photo: REUTERS/Dado Ruvic/Illustration
Instagram actively recommends bigoted content to its users, according to newly published research from a leading antisemitism watchdog group.
The revelation followed two high-profile losses this week in lawsuits that charged billionaire Mark Zuckerberg’s Meta, which owns Instagram, with failing to protect children on its social media platforms.
On Wednesday, the Combat Antisemitism Movement (CAM) published new findings from its Antisemitism Research Center (ARC). The report, “Engineered Exposure: How Antisemitic Content Is Pushed and Amplified to Millions Across Instagram,” tracked 100 antisemitic posts over a 96-hour period that Instagram pushed directly into users’ feeds through its own recommendation system.
CAM’s researchers found that these posts drew 5.3 million likes and 3.8 million shares, engagement that analysts estimate reached as many as 280 million users.
“Among the most disturbing findings is that the ARC researchers identified AI-generated ‘rabbi’ personas that were fabricated to push antisemitic tropes while projecting false religious authority,” CAM said in a statement announcing the report.
One bogus rabbi account CAM uncovered had collected more than 1.4 million followers. The report described how an account called Rabbi Goldman “pushes antisemitic conspiracy theories, including allegations of Jewish control of the global financial system, to a large audience, with some videos getting more than five million views.”
ARC identified 11 other fake rabbis, bringing the total followers for such accounts up to 2.1 million. According to the researchers, “each presents a distinct persona and voice, yet all promote narratives portraying Jews as obsessed with money, playing to classical antisemitic stereotypes.”
The report also documented extensive content linking Jews with occult themes, including references to demons, Satan, 666, Moloch, freemasonry, the Illuminati, and especially the ancient Canaanite storm god Baal. The slander that Jews secretly worship a deity who demanded child sacrifice and rivaled the God of Israel in the Bible has surfaced elsewhere on social media. Far-right podcaster Candace Owens has claimed that the Star of David has “ALWAYS [sic] been associated with Canaanite cults and Baal worship.”
A key feature of the new research is that the investigators did not search for hateful content; they relied solely on “the standard use of Instagram over four days, via content actively suggested by the platform’s recommendation systems.”
“This distinction demonstrates that exposure to these narratives does not require users to seek out extremist material,” the researchers explained. “Instead, the platform itself can act as a vector, introducing and amplifying such content through its own distribution mechanisms.”
Using examples of the content analyzed, the researchers showed how conspiracy theories escalate into calls for violence. One video discussed in the report blamed “the Rothschilds” and central banks for causing all global crises, including wars, diseases, and 9/11. The video then “escalates into explicit eliminationist rhetoric, calling for their eradication as a solution. It uses the Rothschild family as a proxy for Jews and frames them as a singular, malevolent force controlling world events.”
CAM CEO Sacha Roytman said the report provided evidence “of a broad systemic failure on the part of Instagram and Meta.”
“When a platform actively recommends content that dehumanizes Jews to mass audiences, we are no longer talking about a simple oversight or a mistake in the algorithmic design. We are talking about infrastructure that normalizes hatred at scale that must be addressed immediately,” he added.
As for what might have motivated Zuckerberg to allow such a proliferation of hate, the report noted in its introduction that Meta had been “generating substantial advertising revenue from engagement with the content in question.”
In 2025, Meta’s revenue reached $200.966 billion, an increase of 22.17 percent from 2024, when revenue hit $164.501 billion, a 21.94 percent increase from 2023’s $134.9 billion, which in turn had grown 15.69 percent from 2022.
Bloomberg currently ranks Zuckerberg as the fifth wealthiest person on the planet, with an estimated net worth of $211 billion. Earlier this month, he purchased a $170 million mansion in South Florida’s Indian Creek, noted as the most expensive sale in Miami-Dade County and listed as “the largest residence ever created on Miami’s most exclusive island.”
On Wednesday, Meta laid off 700 employees, largely those affiliated with the failed Reality Labs division, which burned through $80 billion in pursuit of creating the virtual reality platform Horizon Worlds. The platform will shut down on June 15.
Roytman said that Meta “must take a hard look at how its algorithms are promoting antisemitic content and put real, transparent safeguards in place to stop it.”
Meta may now have additional motivation to strengthen safety protocols on its platforms following back-to-back decisions in a pair of lawsuits this week which, legal analysts suspect, may have opened the floodgates for thousands of similar cases around the country.
On Tuesday in Santa Fe, jurors found Meta liable and ordered it to pay $375 million for failing to prevent minors’ exposure to harmful sexual content, including online solicitations, human trafficking, and explicit imagery.
“Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew,” New Mexico Attorney General Raúl Torrez said in a statement following the verdict. “Today the jury joined families, educators, and child safety experts in saying enough is enough.”
Torrez vowed to go after Meta for more money and force changes to the platforms.
“New Mexico is proud to be the first state to hold Meta accountable in court for misleading parents, enabling child exploitation, and harming kids,” Torrez said. “In the next phase of this legal proceeding, we will seek additional financial penalties and court-mandated changes to Meta’s platforms that offer stronger protections for children.”
On Wednesday in Los Angeles, jurors found Meta and Alphabet (parent company of YouTube) liable for the addictive qualities of their platforms, which exacerbated a young woman’s mental health problems, and awarded her $3 million in damages plus $3 million more in punitive damages.
Omri Ben-Shahar, a law professor at the University of Chicago, told the Wall Street Journal that “what is new is the addiction element.” He warned “that could create a very broad liability. The notion of addiction, there is something very abstract about it.”
Meta and Alphabet both plan to appeal the ruling. Alphabet spokesman José Castañeda sought to distance the company from Meta (to which jurors assigned the greater share of liability, apportioning the penalty 70-30), saying “this case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site.”
Previous legal challenges to social media and online video companies for failing to prevent exposure to harmful content have usually failed due to longstanding legal interpretations of Section 230 of the 1996 Communications Decency Act, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This statute has prevented plaintiffs from suing a website’s host the way they would an individual committing slander or a publisher engaged in libel. The legal innovation that allowed these cases to succeed was the lawyers’ decision to focus not on the content itself but on the design of products intended to hold users captive, glued to their phones for hours.
“They knew,” said Mark Lanier, the lawyer for the 20-year-old plaintiff in the addiction case. “They targeted the children.”