Moderation in Moderation: Exploring the Ethics Around Social Media Moderation
Some content will always have to be moderated. No one, for instance, would argue against a marketplace like eBay moderating what it allows users to put up for sale, particularly when listings breach laws or widely accepted social norms (e.g. human or organ trafficking).
Most content moderation, however, is not so cut and dried. Moderators are still left to deal with content that is more ambiguous, treading the fine line between protecting free speech and keeping offensive, potentially harmful content off a platform.
The Dilemma: What happens when we don’t moderate?
In 2017, 71% of internet users were on social networks, and that number is only expected to grow. With 2.2 billion users, Facebook alone has more members than any single country has people. The power these platforms wield in deciding what content is removed or approved is therefore significant. While most users are completely unaware that their feeds are moderated or that content is ever removed, many would agree that removing offensive and/or illegal content (e.g. child pornography or trafficking) is necessary.
But what about when the content that remains is untrue or misleading and begins to go viral? While some “conspiracy theories” may be innocuous, much misinformation has the potential not only to influence elections but to shape socio-political events.
In the much-discussed case of the Rohingya Muslims of Myanmar, for instance, Facebook has been blamed for fuelling rage against this minority population (rage that has resulted in mass violence, abuse and a refugee crisis) after allowing “posts that range from recirculated news articles coming from pro-government outlets, to misrepresented or faked photos and anti-Rohingya cartoons” (CNN) to remain on the site.
Nor is this an isolated incident. Facebook has also been blamed for spikes in violence across the developing world over the past four years, including riots and mob executions, all linked to posts by religious and political extremists that were never taken down.
Although Facebook does have written terms and guidelines around posting, most users have never read them and don’t understand the process for removing content. That leaves human moderators with the task of determining what, within the guidelines they’ve been given, can or cannot be posted. The human moderators who work for Facebook, Google, and Twitter monitor up to 30 different categories of content, with separate teams acting as “experts” in each. In the documentary The Cleaners, for instance, the terrorism moderation experts must memorize 37 terrorist organizations, including their names and flags.
Power and Choice: What is okay to post and what is not? Who decides?
Free speech has long been a cornerstone of the tech world, so the question of how far moderation should go remains hotly debated. Historically, social platforms have allowed users to post whatever they want, but given the reach of these platforms today, that openness has had far more influence on the world than we realized, particularly in the last few years.
These platforms are therefore now being pushed to take more responsibility, or be held accountable, for what gets published. Just recently, Facebook announced that it would remove misinformation that could cause harm. Apple, Google and Spotify, meanwhile, have all removed content from Alex Jones, labelled by the New York Times “the most notorious internet conspiracy theorist”, and his website Infowars. “We have a broader responsibility to not just reduce that type of content but remove it,” said Tessa Lyons, a Facebook product manager. But what are the parameters of that responsibility?
Most sites leave users to report offensive or harmful posts, but even when users do, effective moderation suffers because moderators don’t always have the context to judge whether a post is harmful. In 2016, Facebook drew worldwide criticism after removing the historic “Napalm Girl” photograph, which shows a naked nine-year-old girl fleeing a napalm attack that left severe burns on her back and arms during the Vietnam War. Facebook’s reason for the removal lay in its Community Standards; it released a notice stating that “any photographs of people displaying fully nude genitalia or buttocks, or fully nude female breast, will be removed”, and noted that in some countries any picture of a naked child qualifies as child pornography.
Among other things, the site was accused of censoring the war and abusing its unprecedented power. Forced to relent, Facebook stated that after reviewing how it had applied its Community Standards in the case, it recognized “the history and global importance of this image in documenting a particular moment in time.” Ignoring context when moderating “nudity” had previously forced Facebook to defend itself and ultimately clarify this same policy with regard to breastfeeding. “It is very hard to consistently make the right call on every photo that may or may not contain nudity that is reported to us,” said a spokesperson at the time, “particularly when there are billions of photos and pieces of content being shared on Facebook every day.”
And many of those billions of posts fall into the notoriously ambiguous danger zone that is politics. Choice and conflict are hallmarks of democracy, but what happens when those choices aren’t informed by fact? The 2016 U.S. presidential election saw Facebook once again embroiled in controversy, taken to task over how disinformation posted on its site may have influenced the result.
On both Facebook and Twitter, “fake news” is said to spread much faster than the truth. A recent study from the MIT Sloan School of Management noted that many of the false news stories spread on Twitter evoked strong emotional responses. Deb Roy, a professor at the university and co-author of the study, asks a poignant question: "How do you get a few billion people to stop for a moment and reflect before they hit the retweet or the share button, especially when they have an emotional response to what they've just seen?"
As the midterm elections approach, many tech companies are trying to find ways to curb the spread of misinformation. Facebook now rates users’ trustworthiness when they report posts (since many people report posts they simply disagree with rather than posts that are actually false) and shapes what news users see by promoting more “high-quality sources.” The aim, it seems, is to allow users to make more informed decisions.
However, this attempt to stop the spread of false information is not without its detractors. For one, by taking a more hands-on approach to what people can and cannot share online, social platforms may actually strengthen their influence over our lives. Furthermore, some critics argue there is a risk of political bias, with both sides of the aisle claiming that companies like Facebook have enabled the other. Which news sources users consider high quality may itself be a matter of political affiliation.
Overall, the question of who should decide what we can and cannot see is a complicated one, and while social platforms strive to find the best solutions, the truth is they will never be able to satisfy everyone.
How much moderation do you think is okay? I’d love to hear your thoughts.