X will demonetize users who post AI-generated videos of war (but not other kinds of disinformation)
On Tuesday Nikita Bier, head of product at X, fka Twitter, announced that users who post AI-generated videos of an armed conflict without disclosing they are AI-generated will be suspended from the site’s revenue-sharing program for 90 days, with repeat offenses leading to a permanent suspension from the program:
Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program.
During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies,…
— Nikita Bier (@nikitabier) March 3, 2026
As Wired reported, X has been filled with disinformation since the United States and Israel first started bombing Iran. While there are AI-generated videos in the mix, that’s not all. According to Wired:
In some cases, alleged video footage of the attack shared in posts on X are actually months or years old. In several posts, video footage of apparent attacks have been attributed to incorrect locations. A number of images shared on X appear to be altered or generated with AI. Other posts attempt to pass off video game footage as scenes from the conflict.
Bier’s post, notably, says nothing about what happens when users post misleading but non-AI videos, like the video game footage, without disclosing the source. It’s also worth noting that X’s own AI tool, Grok, is hilariously bad at identifying AI-generated videos:
here, in response to someone asking for verification, is Grok claiming that an obviously fake video is not fake — “real civilian footage” — we are in such deep shit https://t.co/9We8rZ8UQu
— Ilya Lozovsky (@ichbinilya) March 3, 2026
Verifying information takes hard, human work. For a look behind the scenes — and to keep up to date with what is and isn’t real — BBC Verify has created a dedicated feed for updates on the war in Iran.