
The eSafety commissioner has warned Elon Musk's X about the availability and accessibility of child abuse material on the platform.

X's Grok came under fire earlier this year for generating illegal deepfake images that sexually depicted women and children.

eSafety Says Child Abuse Material is 'Particularly Systemic' on X

The revelations were made in a letter that Guardian Australia was able to obtain under freedom of information laws.

In the letter, eSafety's general manager of regulatory operations, Heidi Snell, said that "the availability of CSEM [child sexual exploitation material] continues to appear particularly systemic on X".

"eSafety has not identified CSEM to be as readily accessible on any other mainstream service," Snell added.

While the letter acknowledges X's efforts to reduce the amount of such material on the platform, Snell said it remained prevalent, as seen in hashtags advertising CSEM.

"We are concerned that apparently innocuous hashtags appear to be coopted to advertise CSEM, particularly when used together," Snell said.

The Guardian notes in its report that it was unable to retrieve X's response to eSafety's letter.

Grok's Deepfake Scandal

X was embroiled in a scandal back in January when Grok began generating sexualised content.

At that time, eSafety released a statement, saying that it "remains concerned about the use of the generative AI system Grok on X to generate content that may sexualise or exploit people, particularly children."

While eSafety acknowledged that it had received only a small number of reports on the issue, it had already written to X regarding the matter.

"Additional mandatory codes will commence on 9 March 2026, which create new obligations for AI services, among others, to limit children's access to sexually explicit content, as well as violent material and themes related to self-harm and suicide," eSafety added in its statement.