Should social media platforms be required to police the content that their users post?
In today’s world, social media has become an intrinsic and indispensable part of society. Platforms such as Twitter, Facebook and Instagram play a pivotal role in the way that society shares, receives and uses information. Many argue that the practical benefits of social media far outweigh its disadvantages. However, social media platforms also present dangers that have led many to believe that the content users post should be regulated. There is increasing concern about harmful, dangerous and false content online, including material that promotes cybercrime, terrorism, violence, self-harm and substance abuse. Furthermore, the amount of age-inappropriate, sexual and pornographic material being posted and repeatedly shared is shocking. Nonetheless, many counter that media platforms not only enable society to share information and news efficiently and rapidly, but also foster a sense of free speech, community and connectivity. In addition, social media platforms allow small businesses to promote themselves and grow. Moreover, would regulating and restricting the content users post breach human rights by curtailing the freedom of speech that social media affords society? This essay outlines some of the issues arising from the prolific use of social media, and examines why platforms should be required to regulate user content in order to protect society from the harm propagated through them.
To address this issue, it is important to highlight some definitions. Firstly, although there is no single acknowledged definition, the term ‘social media’ can be described as a website, or a forum, that enables and facilitates its users to create and share information and content. This would include platforms such as Facebook, Twitter, Instagram, TikTok, YouTube, Snapchat and WhatsApp. A ‘regulation’ can be defined as a rule, or a law, issued by an executive authority with the purpose of moderating and controlling actions. In the context of this essay, the question is whether social media platforms should be required to ‘regulate’ their users’ content.
Firstly, social media platforms should regulate the content their users post because the amount of content on these platforms used to promote, incite and encourage terrorism and violence is alarming. One of the greatest global terrorist threats to humanity came from the doctrines of Al-Qaeda and ISIS: these organisations attempted to undermine democracies by using social media to recruit and infiltrate communities. The deadly riot at the US Capitol is another example of violence heavily fomented on social media platforms. The riot was incited by factually incorrect ‘free-speech’ comments posted by Donald Trump on Facebook and Twitter, which were used to rouse an uprising against an institution of democracy. In Myanmar, social media has facilitated religious and ethnic genocide against the Rohingya; UN human rights investigators concluded that hate speech on Facebook played a fundamental role in inciting the violence. A report commissioned by Facebook admitted that the company failed to curb hate speech and false conspiracy theories shared on its platform, and in fact created an “enabling environment”[1] that made it complicit in the genocide in Myanmar. Recently, Facebook was again used to fuel grotesque violence when the Myanmar army posted inflammatory falsehoods, compelling Facebook to respond by indefinitely suspending the army’s accounts.
Furthermore, fuelled by the current generation’s heavy use of social media, the number of posts depicting drug and alcohol consumption, self-harm and suicide is rising dramatically. Studies have also documented content about alcohol use circulating on sites such as Facebook and Twitter.[2] These displays are likely to reinforce ‘normative’ impressions of substance use among teenagers. Alcohol marketing is increasingly visible on social media, giving brands the ability and opportunity to connect and interact with young people and “develop brand loyalty.”[3] Celebrities, too, frequently promote drug and alcohol use on social media; celebrity advertising makes smoking and drinking seem like exciting and invigorating activities, which can pressure teenagers into replicating that behaviour. In fact, research suggests that advertising may be responsible for up to 30% of adolescent tobacco and alcohol use.[4] The authors of one study examined the presence of marijuana messages on social media, reporting that most “marijuana-related tweets reflected a positive sentiment toward its use, with pro-marijuana tweets outnumbering anti-marijuana tweets by a factor greater than 15.”[5] They also found that an estimated 59% of those tweets were sent by youth under the age of 20.[6] Moreover, a study in England between 2014 and 2015, based on data from investigations carried out by a range of official bodies including Child Death Overview Panels, found that 23% of suicides among under-25s followed suicide-related Internet use; among the 21-year-olds in England, 22.5% reported suicide- or self-harm-related Internet use.[7] These horrifying statistics demonstrate the urgent need for social media platforms to regulate their user content in order to safeguard young people.
In addition, content should be regulated by social media platforms because of the increasing number of reports of child pornography, child sexual exploitation and child abuse being posted and spread. In a month-long period during lockdown, the Internet Watch Foundation blocked at least 8.8 million attempts by UK internet users to access videos and images of children suffering sexual abuse.[8] Reports of child abuse online rose from 110,000 globally in 2004 to 18.4 million in 2019.[9] A 2016 study of 11–16-year-olds, conducted jointly by the NSPCC, found that 13% of boys and girls had taken a topless picture of themselves and 3% had taken fully naked pictures; furthermore, 55% had shared these pictures with others, while 31% had shared the image with someone they did not know.[10] Social media platforms also provide new and easy opportunities for child groomers to target children and initiate their abuse. With so many children using social networks and gaming sites, they are increasingly exposed to the threat of grooming, abuse and exploitation. For instance, live streaming on platforms such as Instagram has given groomers numerous opportunities to manipulate and coerce children into extreme forms of abuse. These shocking facts and figures alone are grounds for obliging social media platforms to impose regulations on user content, so as to protect the vulnerable and innocent in society from becoming ‘objects’ of exploitation that will scar society for generations.
Conversely, there are those who argue that social media platforms should not regulate the content users post, because moderating user content breaches a basic human right by limiting freedom of speech. ‘Freedom of speech’ is a core democratic value: the right to express thoughts and opinions without restriction. In most Western democracies today, under constitutional provisions such as the First Amendment in the United States and Article 10 of the UK Human Rights Act, freedom of expression is recognised as a basic human right. A drastic example of the restriction of free speech is China’s totalitarian government, which has imposed sweeping censorship of social media. Consequently, politicians in some countries, including the United States, argue that social media companies have gone too far with moderation and regulation, at the expense of free speech. Many also maintain that social media plays a prominent role in the expression of free speech today; therefore, imposing regulations on user content would violate that freedom. However, it can be argued that the freedoms social media platforms allow may be abused. The UK Safer Internet Centre’s 2016 report, based on a survey of 1,500 13–18-year-olds, found that 82% had witnessed ‘online hate’, meaning they had “seen or heard offensive, mean or threatening behaviour targeted at or about someone based on their race, religion, disability, gender, sexual orientation or transgender identity”. Furthermore, almost a quarter of those surveyed said they had been the target of online hate in the last year because of one of those characteristics.[11] By imposing regulations on the content that users post, social media platforms would be able to prevent such harms to society.
Moreover, constitutional free-speech protections bind governments rather than private institutions, which gives companies like Twitter and Facebook the right to create and enforce their own rules restricting the speech and content of their users.
Additionally, others may put forward the view that unrestricted social media helps small businesses promote themselves freely; consequently, they argue that regulating social media content would make it much more challenging for small firms to grow. Some take this a step further, arguing that regulating content could hurt companies’ revenues, stifle innovation and create monopolies: the high cost of complying with regulations obstructs competition because start-ups are discouraged from entering markets. A commentator for the Observer Research Foundation notes that “for a country that desperately needs innovation, entrepreneurship, and investment, social media regulations will discourage new investments”.[12] On the other hand, this argument can easily be countered by the fact that social media platforms also pose many dangers to small businesses. For instance, hateful, biased reviews and negative publicity can severely damage the reputation of an emerging business and hinder its growth. Hence, if social media platforms regulate the content that users post, the potential harm posed to small businesses may be limited.
Weighing up the ramifications of the competing arguments, social media platforms must impose regulations on user content. Regulation of user content does pose some problems, such as platforms disagreeing over what constitutes harmful content, and different countries having different norms. Another difficulty that social media platforms may encounter is the potentially excessive cost of regulation. Furthermore, platforms are currently not incentivised to regulate, since implementing content restrictions may drive users away, with a knock-on effect on advertising revenues and hence platform profitability. Nevertheless, there is no doubt that social media platforms require some form of regulation. One solution is for platforms to self-regulate their content in a manner compatible with their country’s government guidelines.
In summary, social media platforms should proactively regulate user content, not only because of the shocking rise in violence and terrorism enabled by social media platforms, but also because of the disturbing and dangerous age-inappropriate content posted on them. Regulating content is therefore an immediate and absolute requirement. It is also the moral and rational duty of public platforms in democracies not to be complicit in fostering the propaganda of mass hatred and crime. Is it really freedom of speech to license the spread of falsehoods, as in the case of Donald Trump? Does freedom of speech encompass lies, hate, child pornography and incitement to violence? The moderation of “hateful speech” is not a violation of human rights, but a protection of people’s rights to freedom and safety. Accordingly, platforms should have to moderate the content that their users post as a way of preventing these iniquities from destroying the safe and free society that regulation aims to create. Consequently, social media platforms should work in collaboration with government frameworks designed to protect the fabric of society.
[1] https://www.bbc.co.uk/news/world-asia-46105934
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4432862/
[3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5658796/
[4] https://www.addictioncenter.com/community/social-media-teen-drug-use/
[5] https://pubmed.ncbi.nlm.nih.gov/25620299/
[6] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5658796/
[7] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6263311/
[8] https://researchbriefings.files.parliament.uk/documents/CBP-8743/CBP-8743.pdf
[9] https://www.theguardian.com/media/2019/apr/08/social-media-firms-to-be-penalised-for-not-taking-down-child-abuse
[10] https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/822/822.pdf
[11] https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/822/822.pdf
[12] https://www.orfonline.org/expert-speak/government-should-not-regulate-social-media-57786/