So Facebook sold a bunch of political ads during the 2016 presidential campaign to a Russian propaganda outfit. Oh, and they won’t tell us which ads they sold. Also, it turns out they have been helping “jew-haters” find their market. And as the icing on the cake, they were helping that crowd connect with the ammosexuals.
I have historically been skeptical of the notion that platforms should be held responsible for the actions of their users. Should the cops be able to come after an ISP because one of its users has been using her/his internet connection to do unsavory stuff? I tend to think not.
Increasingly, however, companies like Facebook, Twitter, and Uber have been adopting the “But we’re a platform!” response to criticisms about their business conduct and their disregard for abuse happening within and via their products. I find these criticisms compelling (and the responses by Facebook et al. less than convincing), and it has gotten me thinking about why that might be.
What makes the Facebook situation different, at least to my way of thinking, is that these are not a small handful of fringe users doing nefarious stuff without Facebook’s knowledge. Rather, Facebook made money by selling ads to these creeps, and by trying to help them expand the market for their hate and violence. It is a core part of Facebook’s business to keep track of every single possible bit of data about its users and their behavior on the platform—that is how Facebook makes money. For the company to then turn around and declare itself shocked, SHOCKED I TELL YOU, to find gambling going on in this establishment is a bit rich.
Facebook and its ilk want to have their cake and eat it too. On the one hand, they build systems and networks that are designed and instrumented to track and analyze in excruciating detail the specific actions of their users in order to monetize that data. That is their business model.
On the other hand, they want to be able to claim that what individual users do on their platforms is none of their business. That claim would be laughable on its face, so they fall back on their “It’s the algorithms, not us!” defense.
Sorry guys—that doesn't cut it. You built the platform, and you wrote the algorithms. You own it. The same goes for the rest of the companies trying this defense. The fundamental feature of your “platforms” and business models is that you know exactly what your users are doing. If the algorithms are routinely catering to Nazis, then you’ve got a problem that you need to fix.