Facebook, d/b/a Meta, is front-and-center in the debate around content regulation. 

Recently, in case you missed it (if that is even possible for someone online), Facebook set social media and other media platforms ablaze after a former employee turned whistleblower revealed that the company failed to act on reports of hate speech and other content that could be deemed harmful, such as posts and accounts dedicated to encouraging eating disorders. 

The raging debate concerns issues such as what qualifies as hate speech, and what should and should not be protected under the First Amendment right to freedom of speech. 

Another side of the issue involves the question of whether users should have content regulated for them—in other words, whether they should be the ones protected from the speech of others, protected or not by the First Amendment. 

Schools and parents are certainly interested in regulating what children view online, and AI is right there to help them. 

Children, AI, and the Online Sphere

Here is one of those fun and troubling statistics that will keep you up at night: Some 95% of children 3 and up have internet access. 

And another: Over 70% of parents are, on a scale from “somewhat” to “very,” concerned that their children spend too much time looking at screens. Many have reached out to a doctor about this phenomenon. 

Much of this anxiety is warranted, since widespread use of the Internet is still a fairly new thing. A convincing argument could be made that the advent of social media triggered a boom in online users, creating a mass appeal for being online that reached not only children but also the typical adult whose online time, back in 2005, may have been limited to an hour or so a day, if that. 

Nowadays, the average person with Internet access spends roughly 23 hours a week online, practically sacrificing an entire day of the week. 

As mentioned, this boom in online time is a new phenomenon, and the population growing up in it, i.e. the children 3 and up, 95% of whom have access to the internet, is likely to be the most affected by this lifestyle in the long run. Meanwhile, the adults who are close to or, God help them, above that 23-hour-a-week average can get a good dose of anxiety by tallying up how much of their life, behind and ahead of them, will have been spent online. 

All this to say that the parents who wish to regulate their children’s screen time do not have unfounded concerns: the effects of so much screen time at such a young age, though studied in the short term, have yet to be fully discovered in the long term. It is also worth mentioning that the Internet is a very different beast from TV and movies. 

Perhaps the biggest difference is that the Internet often serves as a medium for direct communication between users, including, for example, between scam artists and children. 

How AI Helps

Parents and schools alike are taking advantage of AI software, such as that offered by Bark, a popular parental control tool provider, which offers 24/7 detection of content that could be harmful to children, such as bullying or worse. 

Such AI tools are trained to locate certain keywords, including internet lingo and abbreviations, and analyze the semantics of messages to discern whether such content poses a risk to the child user. This is done lightning-fast, protecting the children from the content as they encounter it.
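As a rough illustration of the keyword-detection side of this approach, here is a toy sketch in Python. This is not Bark’s actual algorithm; the term list, categories, and function name are invented for demonstration, and real systems add semantic models on top of simple matching like this.

```python
import re

# Hypothetical term list mapping internet lingo/abbreviations to risk
# categories. A real product would maintain a far larger, curated list
# and combine it with semantic analysis of the whole message.
RISKY_TERMS = {
    "kys": "bullying",            # harmful internet abbreviation
    "thinspo": "eating disorder", # pro-eating-disorder lingo
    "nobody likes you": "bullying",
}

def scan_message(text):
    """Return a list of (term, category) alerts found in a message."""
    alerts = []
    lowered = text.lower()
    for term, category in RISKY_TERMS.items():
        # Word boundaries avoid false positives inside longer words,
        # e.g. "kys" should not match inside "kyssing".
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            alerts.append((term, category))
    return alerts
```

A flagged message would then feed into the alerting step described below, e.g. `scan_message("nobody likes you")` returns a bullying alert while an innocuous message returns an empty list.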

An alert can be sent to the parent or school authority in charge of regulating this content. In addition, they can set screen time limits, keeping usage well below the average day-a-week of online time (which, it should be mentioned, doesn’t even include TV and movie time). 

Schools can use AI software to detect concerning content, such as suicidal ideation, within Google Docs or Microsoft Word documents that children use while in school. Other mental health risks are assessed via monitoring software to intervene with students when necessary. 

Summary

AI can be used to regulate content on the internet, but even when harmful content slips through, other AI services can be put in place for vulnerable populations, like children, to regulate what they see and to alert the parents or school authorities who look after them. Service providers like Bark offer content-monitoring software to the parents and schools who wish to intervene in children’s online time.