The Daunting Task of Content Moderation
From Facebook to YouTube, social media platforms are routinely inundated with inappropriate and fabricated content. Fraudulent claims seem endless, and the monitoring of content, overwhelming. And although the big-tech companies are working on innovative ways to address these concerns, misinformation continues to spread like a cancer throughout the internet.
To meet this challenge head on, Facebook, in particular, has been stepping up its content moderation efforts.
But separating fact from fiction is a lot more complicated than it sounds. Content moderators must review hours of online videos, audio, images, and text, and then decide the fate of each piece of content — ignore, restrict, flag, or delete? To say that content moderators face a daunting task is quite the understatement. And while this topic clearly deserves continued and increased exposure, we’re taking a closer look at what content moderation means for the language services industry.
Content Moderation for a Growing Global Audience
Moderating multilingual content for a global audience poses unique challenges. How do companies that serve clients on a global scale ensure that their published content is free of any culturally offensive or misleading information without compromising authenticity? It’s already stressful enough for companies to decide what to publish on their own website, let alone deciding what information to share on their various social media platforms. And since the majority of big tech companies are US-based, content moderators are tasked with the added responsibility of weighing the free-speech values enshrined in the United States’ First Amendment.
Moderating incoming content for a growing global audience while preserving an individual’s right to self-expression is no easy feat. Facebook is perhaps the American poster child for a company caught in this conundrum between censorship and freedom of expression1:
“People who don’t like the way that something was cut…will kind of argue that…it did not reflect the true intent or was misinformation. But we exist in a society…where we value and cherish free expression.” — Mark Zuckerberg
The internet has certainly made it much easier for the world to connect, but in so doing, it has complicated the very nature of communication. While no one would dispute that the World Wide Web is a convenient way to conduct remote business (email, social media platforms, websites, video calls, and more), it is also a speedway for miscommunication and misinformation, whether intentional or not.
Cultural Sensitivity Training
So, what is the answer? How do big tech companies serving a global audience effectively manage all of the incoming multilingual content? And how do companies ensure that their published content is free from offensive or misleading information at home and overseas? Some say it boils down to ongoing multilingual and multicultural training. Content moderators well-versed in cultural differences and linguistic nuances are in a better position to determine the appropriateness of published content. After all, each global region has its own unique history (along with its own set of norms and customs), and it is this unique experience that dictates a culture’s sense of right and wrong. However, “where” the content is consumed also plays a significant role.
The same content consumed by a culture within the United States might be perceived very differently when consumed by the same culture in another part of the globe. A content moderator must therefore develop an intimate understanding of how in-country cultural and political norms differ from the norms within the greater diaspora. And although this is highly complex, it can be accomplished with careful planning.
One of the most important steps companies can take during the linguist selection process involves the careful screening of candidates. When companies hire moderators who have been away from their countries of origin for less than three years, they increase the likelihood that these professionals have maintained a very intimate understanding of the norms and values of their country of birth. It is also important for companies to develop a coherent and culturally appropriate set of standards, and apply these standards across the board. Each moderator’s approach toward content monitoring should then align with the geopolitical and cultural guidelines used by the respective client marketing teams.
Since Facebook is currently the largest social media platform in the world (with YouTube nipping at its heels), we thought we’d examine how this social media giant handles content moderation for its global audience.
Facebook as a Case Study
According to the Internet World Atlas, as of June 2017, Facebook enjoys a presence (and in some regions, a formidable one) in virtually every corner of the globe.
Facebook welcomes over 2 billion people around the world and receives upwards of 4 petabytes (PB) of data each day. This vast amount of global traffic equates to increased responsibility, but is Facebook doing enough to monitor its content? Some don’t think so. In order to address this growing concern and satisfy the naysayers, Facebook has developed its Community Standards (strict guidelines that define what content is, and is not, permissible).
But there seems to be a linguistic roadblock when it comes to these standards. Since they only set the parameters for permissible content, the standards only work if Facebook users can read and understand them. The idea would then be for users to flag inappropriate content they come across. Seems reasonable, except for the fact that the standards are only offered in a fraction of the languages2 that Facebook supports.
Complicating matters even further, the human content moderators employed by Facebook are currently unable to effectively monitor content in the more than 100 languages the platform now supports. And although Facebook has adopted a machine-learning model to work alongside human moderators (and thereby streamline the overall content moderation process), the AI used to detect inappropriate content is only capable of processing two or three dozen of the languages Facebook supports. How, then, can Facebook claim to effectively monitor its content if its human moderators and algorithms cannot understand a large number of Facebook’s supported languages? Something seems amiss.
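To make the hybrid pipeline described above concrete, here is a minimal sketch of how content might be routed between automated and human review based on language coverage. The language codes and the `AI_SUPPORTED` set are hypothetical placeholders, not Facebook's actual configuration.

```python
# Hypothetical routing step in a hybrid moderation pipeline: content in
# languages the automated classifier supports goes to the model first;
# everything else is queued for human review.

AI_SUPPORTED = {"en", "es", "fr"}  # illustrative subset, not a real list

def route(content: str, lang: str) -> str:
    """Decide which review path a piece of content takes."""
    return "automated-check" if lang in AI_SUPPORTED else "human-review-queue"

print(route("Hello world", "en"))    # automated-check
print(route("Habari za leo", "sw"))  # human-review-queue
```

The gap the article describes shows up directly in a design like this: every language outside the supported set falls back to a human queue that may itself lack coverage.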
The Role of Artificial Intelligence in Multilingual Content Moderation — Is it Falling Short?
Most social media platforms now employ a variety of ways to monitor their online content. By and large, human moderators follow company guidelines that apply a type of binary decision process. In other words, most companies train moderators to keep or flag content based on the presence or absence of specific elements. Arguably, then, algorithms could be built to follow these same processes, but how reliable is this technology? Although automation can help to detect inappropriate content where contextual knowledge is less of an issue, there are limitations that companies need to keep in mind.
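The binary keep-or-flag process described above can be sketched in a few lines. The blocklist terms below are hypothetical placeholders, not any platform's actual guidelines, and a real system would use far richer signals than keyword matching.

```python
# A minimal sketch of a binary keep-or-flag decision based on the presence
# or absence of specific terms. BLOCKLIST is a hypothetical example.

BLOCKLIST = {"scam-term", "banned-phrase"}

def moderate(text: str) -> str:
    """Return 'flag' if any blocklisted term appears, else 'keep'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return "flag" if words & BLOCKLIST else "keep"

print(moderate("This is a harmless post"))           # keep
print(moderate("Click here for a scam-term offer"))  # flag
```

This is exactly the kind of rule that is easy to automate and easy to dupe: a trivial misspelling slips past it, which is why the article argues human judgment is still needed.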
Should AI be used to moderate content for language services?
Technology has definitely come a long way (speech recognition, optical character recognition, image classification, natural language processing, etc.), but it is still nowhere near capable of replacing a human content moderator. Not only can AI be easily duped, but machines lack the ability to apply subjectivity and are unable to consider societal context before making a decision to keep or remove content.
Content moderation plays a uniquely critical role for businesses with a growing multilingual and multicultural client base. If inappropriate content in the source language sneaks past the algorithms on any of the company’s social media platforms, it could find its way into the machine translation (MT) data. If that happens, inappropriate content will be published not only in the source language but in one or more target languages as well; did someone say cleanup on aisle five?
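The leak into MT data described above is usually addressed by screening segments before they enter the training corpus. Below is a hypothetical sketch; the flagged terms, example sentences, and the simple substring check all stand in for a real moderation pass.

```python
# Hypothetical sketch: screening source-language segments before they enter
# an MT training corpus, so flagged content is not propagated into target
# languages. FLAGGED_TERMS is an illustrative placeholder.

FLAGGED_TERMS = {"fabricated-claim"}

def is_clean(segment: str) -> bool:
    """True if the source segment contains no flagged terms."""
    return not any(term in segment.lower() for term in FLAGGED_TERMS)

corpus = [
    ("The weather is nice.", "Il fait beau."),
    ("Read this fabricated-claim now!", "Lisez cette fausse affirmation !"),
]

# Keep only pairs whose source side passes the moderation check.
clean_corpus = [(src, tgt) for src, tgt in corpus if is_clean(src)]
print(len(clean_corpus))  # 1
```

Filtering on the source side before training is cheaper than cleaning up translated output in every target language afterward.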
We are living in a steadily growing global market. Now more than ever before, a company’s content needs to be translated, its message needs to be interpreted, and its brand and voice, transcreated. In order to meet this challenge, companies are beginning to invest in multilingual content moderation for all of their social media platforms. In fact, more and more businesses are now relying on the expertise of language services providers to handle their multilingual content monitoring.
Multilingual Content Moderation
From doctored online magazine covers3 that work to discredit reputable companies (and mislead the general public about critical matters) to attempts at sensationalizing horrific incidents, there will always be forces of evil that require serious internet policing. But as companies turn to AI to solve this challenge, they must also be warned of machine algorithm failures. If companies rely too heavily on AI for their content moderation needs, they risk significant controversy and stand to damage business relationships.
So, how do companies reconcile the growing demand for multilingual content with responsible content moderation? It really comes down to investing in a professional team of human multilingual content moderators coupled with advanced technology. Akorbi is uniquely positioned to offer exactly that: professional multilingual content moderators backed by sophisticated AI.
Let’s face it, content moderation is a necessary evil, but one that we cannot afford to ignore. From political interference to discrimination, violent rhetoric, and many other online scandals, companies need to take significant measures when it comes to monitoring their content — and this is where Akorbi takes center stage.
Akorbi’s subject matter experts (SMEs) help tailor solutions that meet our clients’ unique needs. With Akorbi’s robust team of content moderators coupled with our advanced technology, you’ll be one step ahead of the game, catching inappropriate content before it even hits the internet. Protect your brand and voice, and strengthen your reputation with Akorbi’s multilingual content moderation best practices. Give us a call today for a free consultation.
Created in partnership with GIM Writing Services.
REFERENCE LIST:
1. Newton, C. (2019, June 28). Facebook's Supreme Court for content moderation is coming into focus. Retrieved from https://www.theverge.com/interface/2019/6/28/18761357/facebook-independent-oversight-board-report-zuckerberg
2. Fick, M. (2019, April 23). Facebook's flood of languages leave it struggling to monitor content. Retrieved from https://www.reuters.com/article/us-facebook-languages-insight/facebooks-flood-of-languages-leave-it-struggling-to-monitor-content-idUSKCN1RZ0DW
3. Walsh, B. (2013, June 6). Sorry, a TIME Magazine Cover Did Not Predict a Coming Ice Age. Retrieved from http://science.time.com/2013/06/06/sorry-a-time-magazine-cover-did-not-predict-a-coming-ice-age/