From a Buzzfeed article on the continued failure of platforms like Facebook, Twitter, and Google to stop the spread of fake news and conspiracy theories following tragic events:
By the time the Parkland school shooting occurred, the platforms had apologized for missteps during a national breaking news event three times in four months, in each instance promising to do better. But in their next opportunity to do better, again they failed. In the aftermath of the Parkland school shooting, journalists and researchers on Twitter were the first to spot dozens of hoaxes, trolls impersonating journalists, and viral Facebook posts and top "trending" YouTube posts smearing the victims and claiming they were crisis actors. In each instance, these individuals surfaced this content — most of which is a clear violation of the platforms' rules — well before YouTube, Facebook, and Twitter. The New York Times' Kevin Roose summed up the dynamic recently on Twitter noting, "Half the job of being a tech reporter in 2018 is doing pro bono content moderation for giant companies."
Among those who pay close attention to big technology platforms and misinformation, the frustration over the platforms’ repeated failures to do something that any remotely savvy news consumer can do with minimal effort is palpable: Despite countless articles, emails with links to violating content, and viral tweets, nothing changes. The tactics of YouTube shock jocks and Facebook conspiracy theorists hardly differ from those of their analog predecessors; crisis actor posts and videos have, for example, been a staple of peddled misinformation for years.
They could easily fix this problem by hiring and paying actual people who are halfway savvy about reading the news.
They don't want to do that, though. They want to keep trying to do it via algorithms, because that solution is scalable, whereas humans are not.
Additionally, the tech companies have marketed their algorithm-driven products as some sort of magic that will solve every problem. The media has swallowed this notion and propagated it, and a whole cottage industry of TED Talk snake-oil salespeople and "futurists" has cropped up around it. If the industry turns around now and admits that humans are much better at this kind of stuff, and that the much-ballyhooed algorithms are not much more than "People who liked this also liked that," the game is up.