Currently, tech companies are classified as platforms rather than publishers and are not legally responsible for the content posted on their services. However, many observers believe they should be held accountable for the posts they permit, interpreting that permission as an endorsement of the ideas expressed. By that reasoning, these companies should answer for what they choose to allow on their feeds.

Section 230 of the Communications Decency Act allows tech companies to engage in "Good Samaritan" moderation to maintain a safe environment on their platforms without fear of being sued for restricting speech. Although the word "moderation" introduces subjectivity into the law, there is still no reason large tech companies should allow accounts with large followings to post racist, hateful, or antisemitic content.

The Supreme Court has recently decided to take up the cases Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh, which put the scope of Section 230 in question. The Gonzalez case alleges that Google "recommended ISIS videos to users" and "was critical to the growth and activity of ISIS" and is thus legally accountable. The Taamneh case seeks to hold Twitter, Facebook, and YouTube liable for a terrorist attack in Turkey.

Unfortunately, this subjective notion of moderation has created a grey area that allows some large tech companies to remove even moderate political posts that conflict with the views of their platforms. Technology companies are expected to balance freedom of speech against the filtering of hateful content that could emotionally damage viewers or incite violent acts.

Republican politicians across America believe that Section 230 has allowed companies to "muzzle" conservative voices, while Democrats argue that the law allows the spread of false information. Although this dichotomy is a consequence of large tech companies being able to "moderate" the information on their platforms, it does not excuse YouTube, Twitter, and Facebook for over-curating content while failing to strike the balance needed to keep their platforms safe.

With a record increase of young people on social media in 2023, it is vital that we create an environment where the rising generation can learn about current events without being exposed to or manipulated by hate speech. Hate groups have become more prominent and widespread throughout the world in part due to the fact that these extremist groups can impose their views on impressionable young people through the internet. 

A 2022 study from the Anti-Defamation League, an anti-hate organization founded in 1913, surveyed youths ages 13-17. The study found that 65% of respondents from marginalized groups experienced harassment, with LGBTQ+ respondents more likely to be harassed (66%) than non-LGBTQ+ respondents (38%). Harassment of Asian American respondents increased significantly, from 21% in 2021 to 39% in 2022. Women (14%) were harassed for their gender nearly three times as often as men (5%), and Jewish respondents attributed harassment to their religion (37%) far more often than non-Jewish respondents (14%). Harassment was most common on Facebook (68%), followed by Instagram (26%) and Twitter (23%). Sadly, 47% of the young people in this survey experienced some form of harassment on these social media platforms.

To hinder the growth of organizations such as the KKK (Ku Klux Klan), NSM (National Socialist Movement), and QAnon, these influential tech companies must be held responsible for the content on their platforms. Without clear, tangible guidelines in the law, large social-media corporations will easily find loopholes to maximize profits. Congress would be less likely to regulate these companies if the companies themselves defined clear consequences for violating hate-speech and harassment guidelines, regularly evaluated and publicly reported accurate statistics on hate speech on their platforms and quickly removed it, worked with communities targeted by harassment to modify their algorithms, and provided data to academic researchers for critical analysis, with the goal of better understanding online hate and thereby improving the odds of mitigating it.

Yes, freedom of speech is an integral part of the American identity. But these tech companies have the power to either allow or prevent the spread of extreme political views and terrorist ideology. With critical issues such as global warming and the war in Ukraine already plaguing our lives, politicians should do everything in their power to promote a safer, more moderate environment. With that in mind, politicians should ensure that large tech corporations are held accountable for the information they make available to the public.
