X Will Suspend Creators Who Fail to Add AI Labels to Videos Depicting 'Armed Conflict,' War Content
X wants to properly label AI-generated video content depicting war and conflict.

X says it will suspend creators who fail to properly label AI-generated videos posted on the platform, particularly those depicting "armed conflict" or war-related content.
The policy change aims to ensure AI-generated content is clearly labeled and to curb the spread of misinformation on the platform.
X to Suspend Creators Failing to Add AI Labels to Videos
X's head of product, Nikita Bier, shared a post outlining the latest change to the platform's content moderation policy: creators who fail to label AI-generated videos will be suspended.
According to Bier, the suspension applies to AI-generated videos depicting armed conflict or other war-related content. The executive said the platform wants to "maintain authenticity of content on Timeline," especially amid ongoing global conflicts.
"During times of war, it is critical that people have access to authentic information on the ground. With today's AI technologies, it is trivial to create content that can mislead people," said Bier.
Creators who post AI-generated videos without proper labeling will face a 90-day suspension for a first offense; subsequent violations will bring heavier penalties, including possible removal from the program.
Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program.
— Nikita Bier (@nikitabier) March 3, 2026
Only Applies to Creators Under the Revenue Program
Bier explained that the policy only applies to creators enrolled in the "Creator Revenue Sharing" program, meaning creators who monetize the content they post on X.
The executive said AI-generated content can be flagged in several ways, such as a post receiving a Community Note clarifying that it contains AI-generated elements. Bier added that X will also actively monitor posts and work to detect metadata left behind by generative AI tools.
X faced widespread complaints earlier this year over Grok AI's deepfake scandal, when the tool allowed users to generate AI images without restrictions. The fiasco saw explicit, sexualized images of many users spread across the platform.
Originally published on Tech Times