FACEBOOK reported on Aug 15, 2018, that its efforts to deal with misinformation, fake news, and hate speech in Burma (Myanmar) have been slow and inadequate:
“The ethnic violence in Burma is horrific and we have been too slow to prevent misinformation and hate on Facebook.”
It cited some technical issues and other reasons why it failed to decisively address misinformation in Burma:
“The rate at which bad content is reported in Burmese, whether it’s hate speech or misinformation, is low. This is due to challenges with our reporting tools, technical issues with font display and a lack of familiarity with our policies.”
During his appearance before the United States Senate in April 2018, Facebook Chief Executive Officer Mark Zuckerberg boasted about his company’s progress in curbing the spread of hate speech in countries like Burma.
But six civil society groups signed a letter disputing Zuckerberg’s claim while highlighting the “inherent flaws” in Facebook’s ability to respond to emergencies. Zuckerberg was quick to apologise and vowed to do more to stop groups from using Facebook to promote religious violence and discrimination in Burma.
Facebook usage has surged in Burma over the past several years, but this has also led to the widespread dissemination of fake news, hate speech, and other forms of misinformation targeting the country’s Muslim minority, especially the stateless Rohingya population.
Hardline Buddhist groups were accused of fomenting hatred and bigotry against the Rohingya, which led to violent clashes, the displacement of Muslim residents in Rakhine State, and the intensification of online persecution against minorities.
The government of Burma refuses to recognise the Rohingya as one of the country’s ethnic groups and considers them illegal immigrants.
Even before Zuckerberg’s testimony in the United States Senate, United Nations officials blamed Facebook for its failure to prevent hate speech in Burma.
Marzuki Darusman, chairperson of the Independent International Fact-Finding Mission on Burma, reported on March 12:
“[H]ate speech and incitement to violence on social media is rampant, particularly on Facebook. To a large extent, it goes unchecked.”
Yanghee Lee, the Special Rapporteur on human rights in Burma, told members of the 37th session of the Human Rights Council:
“[T]he level of hate speech, particularly on social media, has a stifling impact on asserting sensitive and unpopular views.”
In its Aug 15 update, Facebook listed some of the measures it has taken to address the problem:
“In the second quarter of 2018, we proactively identified about 52 percent of the content we removed for hate speech in Burma.
“As of this June, we had over 60 Burma language experts reviewing content and we will have at least 100 by the end of this year.
“We proactively identified posts that indicated a threat of credible violence in Burma. We removed the posts and flagged them to civil society groups to ensure that they were aware of potential violence.”
The Facebook update was issued a day after Reuters published a special feature about the ‘meager’ resources allotted by the tech company to resolve complaints relating to hate speech in Burma. Reuters also identified around 1,000 examples of hate speech that could still be accessed on Facebook during the first week of August.
Now that Facebook recognises the link between online hate speech and the violence inflicted on Burma’s minority groups, it remains to be seen to what extent the company’s actions will halt the dissemination of hateful content.
This, however, should embolden civil society groups and other human rights advocates to place greater pressure on Facebook and other digital platforms to prevent the publication and broadcasting of misinformation in Burma and around the world.
This article first appeared on Global Voices.