Dave Farber
2018-10-20 10:49:23 UTC
Date: October 20, 2018 at 7:32:57 PM GMT+9
Subject: [Dewayne-Net] The Poison on Facebook and Twitter Is Still Spreading
The Poison on Facebook and Twitter Is Still Spreading
Social platforms have a responsibility to address misinformation as a systemic problem, instead of reacting to case after case.
By NYT Editorial Board
Oct 19 2018
<https://www.nytimes.com/2018/10/19/opinion/facebook-twitter-journalism-misinformation.html>
A network of Facebook troll accounts operated by the Myanmar military parrots hateful rhetoric against Rohingya Muslims. Viral misinformation runs rampant on WhatsApp in Brazil, even as marketing firms there buy databases of phone numbers in order to spam voters with right-wing messaging. Homegrown campaigns spread partisan lies in the United States.
The public knows about each of these incitements because of reporting by news organizations. Social media misinformation is becoming a newsroom beat in and of itself, as journalists find themselves acting as unpaid content moderators for these platforms.
It's not just reporters, either. Academic researchers and self-taught vigilantes alike scour networks of misinformation on social media platforms, their findings prompting, or sometimes failing to prompt, the takedown of propaganda.
It's the latest iteration of a journalistic cottage industry that started out by simply comparing and contrasting questionable moderation decisions: the censorship of a legitimate news article, perhaps, or an example of terrorist propaganda left untouched. Over time, the stakes have steadily risen. Once upon a time, the big Facebook censorship controversy was the banning of female nipples in photos. That feels like an idyllic bygone era never to return.
The internet platforms will always make some mistakes, and it's not fair to expect otherwise. And the task before Facebook, YouTube, Twitter, Instagram and others is admittedly herculean. No one can screen everything in the fire hose of content produced by users. Even if a platform makes the right call on 99 percent of its content, the remaining 1 percent can still be millions upon millions of postings. The platforms are due some forgiveness in this respect.
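To make the scale of that 1 percent concrete, here is a minimal back-of-the-envelope sketch in Python. The daily post volume is a hypothetical assumption chosen for illustration, not a figure from the editorial:

    # Back-of-the-envelope scale of a 1% moderation error rate.
    # ASSUMPTION: the daily volume below is hypothetical, not a
    # figure reported by the platforms or the editorial.
    daily_posts = 1_000_000_000   # assume 1 billion posts per day
    accuracy = 0.99               # platform gets 99% of calls right

    missed_per_day = daily_posts * (1 - accuracy)
    missed_per_year = missed_per_day * 365

    print(f"Mishandled per day:  {missed_per_day:,.0f}")   # 10,000,000
    print(f"Mishandled per year: {missed_per_year:,.0f}")  # 3,650,000,000

Under these assumptions, even a 99-percent-accurate moderation pipeline mishandles roughly ten million postings every day.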
It's increasingly clear, however, that at this stage of the internet's evolution, content moderation can no longer be reduced to individual postings viewed in isolation and out of context. The problem is systemic, currently manifested in the form of coordinated campaigns both foreign and homegrown. While Facebook and Twitter have been making strides toward proactively staving off dubious influence campaigns, a tired old pattern is re-emerging: journalists and researchers find a problem, the platform reacts, and the whole cycle begins anew.
This week, a question from The New York Times prompted Facebook to take down a network of accounts linked to the Myanmar military. Although Facebook was already aware of the problem in general, the request for comment from The Times flagged specific instances of "seemingly independent entertainment, beauty and informational pages" that were tied to a military operation that sowed the internet with anti-Rohingya sentiment.
The week before, The Times found a number of suspicious pages spreading viral misinformation about Christine Blasey Ford, the woman who has accused Brett Kavanaugh of assault. After The Times showed Facebook some of those pages, the company said it had already been looking into the issue. Facebook took down the pages flagged by The Times, but similar pages that hadn't yet been shown to the company stayed up.
It's not just The Times, and it's not just Facebook. Again and again, the act of reporting out a story gets reduced to outsourced content moderation.
[snip]
Dewayne-Net RSS Feed: http://dewaynenet.wordpress.com/feed/
Twitter: https://twitter.com/wa8dzp