A man is silhouetted against a video screen displaying a Facebook logo as he poses with a Samsung Galaxy S4 smartphone in this photo illustration taken in the central Bosnian town of Zenica, August 14, 2013. Reuters/Dado Ruvic

Facebook is working on a mechanism that automatically detects offensive content in its live video service. The effort is part of the company's broader push to use artificial intelligence to monitor content, said Joaquin Candela, Facebook's director of applied machine learning.

Reuters reported in June that Facebook was already working on automation to flag extremist video content. The company is now testing the same approach in Live Video, its video streaming service, but its policy and product teams have yet to determine how to use AI responsibly.

Yann LeCun, Facebook’s director of artificial intelligence, told the Wall Street Journal that the technology to filter content either exists or can be developed; the question is how to use it sensibly. He also said Facebook is already using artificial intelligence to monitor content, with an algorithm designed to flag violence and nudity on the company's platforms.

In the past, Facebook relied on user reports to remove offensive content. Its employees would check reported posts against company policies, and decisions requiring policy changes were escalated to top executives.

An automatic filtering mechanism is now a pressing need for the social networking giant, which has found itself embroiled in a string of content controversies. It faced international backlash over the spread of fake news during the highly charged US election this year, and its removal of an iconic Vietnam War photo on nudity grounds also proved to be a headache.

Facebook also said it is using automation to identify duplicate reports of the same content and to route flagged posts to the appropriate subject matter experts. However, determining whether a post is hateful requires context, the company added.