If you have been using the internet for longer than a couple of years, you might have noticed that it used to be much "freer." Freer, in this context, means that there was less censorship and there were less stringent rules regarding copyright violations on social media websites such as YouTube and Facebook (and consequently a wider array of content), search engines more often showed results from smaller websites, there were fewer "fact-checkers," and there were (for better or for worse) less stringent guidelines for acceptable conduct.
In the last ten years, the internet's structure and environment have undergone radical changes. These changes span many areas of the internet; however, this article will focus specifically on social media websites and search engines. It will argue that changes in European Union regulations regarding online platforms played an important role in shaping the internet into what it is today, and that further changes in EU policy, potentially even more detrimental to freedom on the internet, may be on the horizon.
Now that readers have an idea of what "change" refers to, we should explain in detail which EU regulations played a part in bringing it about. The first important piece of regulation is the Directive on Copyright in the Digital Single Market, adopted in 2019. Article 17 of this directive states that online content-sharing service providers are liable for copyrighted content posted on their websites if they do not have a license for that content. To be exempt from liability, these platforms must show that they made their best efforts to ensure that copyrighted content does not get posted on their sites, acted expeditiously to take such content down once notified, and took measures to ensure the content does not get uploaded again. If these websites were ever held liable for even a significant minority of the content uploaded to them, the financial ramifications would be immense. Due to this regulation, around the same period, YouTube and many other sites strengthened their policies on copyrighted content, and ever since then content creators have been complaining, sometimes rightly and sometimes wrongly, about their videos getting flagged for copyright violations.
Another EU regulation of note for our topic is the Digital Services Act, which was adopted in 2022 and began applying to the largest platforms in 2023. The Digital Services Act defines very large online platforms and very large online search engines as those with at least forty-five million average monthly active users in the EU, and it places specific obligations on these sites on top of the regulatory burden applicable to all online platforms. The act is too long to discuss in its entirety in this article; however, some of its most noteworthy points are as follows:
- The EU Commission (the executive body of the EU) works directly with very large online platforms to ensure that their terms of service are compatible with requirements regarding hate speech and disinformation as well as the additional requirements of the Digital Services Act. The Commission also has the power to directly influence the terms of conduct of these websites.
- Very large online platforms and search engines are obliged to remove, preemptively fight against, and alter their recommendation systems to demote many different types of content, ranging from hate speech and discrimination to anything that might be deemed misinformation or disinformation.
These points should concern anyone who uses the internet. The vagueness of terms such as "hate speech" and "disinformation" allows the EU to influence the recommendation algorithms and terms of service of these websites and to keep any content that goes against its "ideals" away from the spotlight or off these websites entirely. Even if the issues discussed here were entirely theoretical, it would still be prudent to worry about a centralized supragovernmental institution such as the EU holding this much power over the internet and the websites we use every day. However, as with the banning of Russia Today from YouTube, which was justified by allegations of disinformation and happened around the same time the EU placed sanctions on Russia Today, we can see that political considerations can and do lead to content being banned on these sites. We currently live in a world with an almost infinite amount of information; because of this, it would be impossible for any person or even any institution to sift through all the data surrounding an issue and arrive at a definitive "truth" on the subject, and that is assuming the person or institution is unbiased and approaching the issue in good faith, which is rarely the case. All of us have ways of viewing the world that filter our understanding of issues even when we have the best intentions. Moreover, supranational bodies such as the EU and the EU Commission have vested political incentives and are influenced by many lobbies, which may render their decisions about what is "truth" and what is "disinformation" faulty at best and deliberately harmful at worst. All of this is to say that, in general, none of us, not even the so-called experts, can claim to know enough about an issue to make a definitive statement as to what is true and what is disinformation, and this makes giving a centralized institution the power to decide what the truth is a very dangerous thing.
The proponents of these EU regulations argue that bad-faith actors may use disinformation to deceive the public. There is obviously some truth in this; however, one could also argue that many different actors creating and arguing their own narratives about what is happening in the world is preferable to a centralized institution controlling a unified narrative of what is to be considered the "truth." In the former scenario, even if some people are "fooled" (though to accurately call people fooled, we would have to claim to know the definitive truth about a multifaceted, complex issue that can be viewed from many angles), the public gets to hear many narratives about what happened and can make up its own mind. If this leads to people being fooled by bad-faith actors, it will never be the entirety of the population: some people will be "fooled" by narrative A, some by narrative B, some by narrative C, and so forth. In the current case, however, if the EU is or ever becomes a bad-faith actor that uses its power to champion its own narrative for political purposes, it has the power to control and influence what the entire public hears and believes about an issue, and that is a much more dangerous scenario than the one that would occur if we simply let these so-called information wars be waged. The concentration of power is something we should always be concerned about, especially power over information, since information shapes what people believe, and what people believe changes everything.
Another important thing to note is that the fact that it is the EU making these regulations does not mean their effects are limited to Europe; they affect everyone in the world. After all, whether someone posts a video on YouTube from the United States or from Turkey, it will still be subject to the same terms of service. Almost everyone in the world uses Google or Bing, and the EU has power over the recommendation algorithms of these search engines. This means the EU has power over what information most people see when they want to learn something from the internet. No centralized institution can be trusted with this much power.
One final issue of importance is that the EU is investing in new technologies, such as artificial intelligence programs, to "tackle disinformation" and check the veracity of content posted online. An important example is the InVID project, in its own words "a knowledge verification platform to detect emerging stories and assess the reliability of newsworthy video files and content spread via social media." If you are at all worried about the state of the internet as described in this article, know that developments like this may allow the EU to do everything described here in an even more "effective" manner in the future.