
In recent months, a whistleblower formerly employed by Facebook revealed that the company was often fully aware that harmful content, such as eating disorder posts and racist messages, was being posted on its site. 

This ignited a fierce debate. On one side are people who believe that such content should be acknowledged as harmful and taken off the site; eating disorder posts, for example, have been shown to encourage body image issues in children and teens. 

On the other side are people who believe that, however upsetting the content may be, censoring it would violate users’ right to free speech, and that Facebook should not feel obliged to police the free expression of its users. 

A wide range of content is in question here, but there are a few common markers of harmful content: anything that incites or implies violence, spreads misinformation, encourages or reinforces negative behavior, or is prejudicial or discriminatory. In short, if it is something we typically associate with morally or ethically bad conduct, it can be considered harmful content.  

After many months of PR damage control, a series of advertisements focused on showing the faces behind Facebook, and even a name change that has still not quite caught on (it may take a few years before we are calling the company and its social media platform Meta), Facebook/Meta has come forward with an announcement that more or less shows where it stands on the content regulation issue. 

Ultimately, Facebook has chosen to meet the demands of the side that believes that harmful content should be scrubbed from, rather than tolerated on, the social media platform. Its official announcement (which is discussed in this article) includes the following words: 

“Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it. But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content. …This new AI system uses a method called ‘few-shot learning,’ in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks. If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.”
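The announcement does not include code, but the general idea behind few-shot and zero-shot classification can be sketched with publicly available tools. The example below is only an illustration of the technique, not Facebook’s production system: it uses the Hugging Face transformers library and the public facebook/bart-large-mnli checkpoint (an NLI model commonly used for zero-shot classification) to score a post against plain-language policy labels without any task-specific labeled examples. The post text and labels are invented for the sake of the example.

```python
# Illustrative zero-shot sketch (not Facebook's actual FSL system):
# an NLI model scores a post against plain-language policy labels
# without any task-specific labeled training examples.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # public checkpoint, chosen for illustration
)

post = "You should really try this one-meal-a-day trick to finally look thin."
policy_labels = [
    "encourages an eating disorder",
    "incites violence",
    "contains misinformation",
    "benign",
]

# multi_label=True lets a post score independently against each policy label.
result = classifier(post, candidate_labels=policy_labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

The “fishing net” intuition from the quote corresponds to the fact that the labels here are just natural-language descriptions: the model’s general understanding of language, rather than thousands of labeled examples per category, is what does the work.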

Some people on either side of the debate may be upset that this method of machine learning is being used, or even that AI is being used at all. (Those who object to using AI to assist with content regulation ought to consider what a plausible alternative would look like: even a huge team of human moderators could hardly keep up with the massive number of posts made every second.) 

When it comes to the method itself, some may fear that few-shot learning could let many posts slip through the cracks, since the system has not been extensively trained to accurately detect the specific kinds of harmful content it should be looking for. Instead, it works from generalized rules, overseen by Facebook, about how to detect harmful content and what constitutes it. 

This could be problematic for the simple reason that the agent has not had “several months to collect and label thousands, if not millions, of examples necessary to train [Facebook’s] AI system to spot a new type of content,” which is a recipe for slip-ups that will upset both the pro- and anti-censorship parties. Surely there will be situations where an innocent post is reported and deleted and a guilty post is overlooked. 

However, there are a couple of reasons that this method could be justified. The first has already been stated: the urgency many feel about the issue could not be relieved if Facebook took the time to train an AI agent on the millions of necessary examples. For now, an agent that purports to do a lot with a little, albeit with a lower rate of accuracy, will have to do. 

The second is that the nature of harmful content is indeed constantly evolving, which the announcement also acknowledges. It is difficult to tell what issues and content will emerge in the coming years, and a current-day agent cannot predict which coded messages, topics, and keywords will come to denote harmful content. It needs to learn on the fly, and in this respect, it is good that the agent being employed has the capacity to generalize the task of content regulation. 
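To make the “learning on the fly” point concrete, here is a continuation of the earlier sketch (again purely illustrative, using the same public zero-shot pipeline rather than Facebook’s actual system): screening for a newly emerging, hypothetical category amounts to adding one more plain-language label, with no retraining step.

```python
from transformers import pipeline

# Same illustrative zero-shot setup as before (not Facebook's actual FSL system).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

labels = ["incites violence", "encourages an eating disorder", "benign"]
# A hypothetical policy category that did not exist when the model was trained;
# adding it requires no new labeled examples and no retraining.
labels.append("promotes a dangerous viral challenge")

post = "Everyone at school is trying the blackout challenge, you should film yourself doing it."
result = classifier(post, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))
```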

You can celebrate or bemoan the fact that the AI agent is already live on Facebook and has brought a marked decrease in hate speech on the site. Whether or not you think the few-shot learning method is preferable, its demonstrated effectiveness ought to bring some relief to those who support content regulation.