The European Union is finally making good on its promise to rein in social media giants. Unfortunately, the fire it is trying to put out has already spread.
It is time for the EU to stop playing defence against misinformation, deception and manipulation. While the Digital Services Act will be an important step in the right direction, it does not address the root causes of political polarisation in Europe. The EU must break down the walls that divide us and step into the digital public square itself.
In August 2020, a loose conglomerate of far-right groups, conspiracy theorists and esotericists attempted to storm Germany’s parliament. In hindsight, this doomed-to-fail coup pales in comparison to the far larger and ultimately more successful siege of the US Capitol on 6 January 2021. Both events unfolded without distinct leadership: the mobs appear to have self-organised through social media, resembling a hive mind fuelled by conspiracy theories.
A quick Google search for “dangerous conspiracy theories” will not lead the innocent internet user to any such theory. Instead, the first suggestion will be the European Commission’s extensive guide on identifying conspiracy theories, complete with infographics and alt-text, backlinks to social media, bulleted lists and more keywords than Google’s favourite mashed potato recipe. The Commission decided to play by the rules of the Google search algorithm.
The guide to identifying conspiracy theories resembles a recently released set of materials for teaching internet literacy to European secondary-school students. Unfortunately, without binding implementation in all member states, expectations remain low. That older internet users are more susceptible to fake news complicates internet literacy efforts further.
Public service announcements warning of misinformation miss the target entirely. People actively choose to believe misinformation because it gives them a sense of purpose and belonging that existing institutions have failed to provide. Dissatisfied with or mistrustful of their governments, these people fall prey to misinformation and manipulation online.
How is this possible?
Social media platforms are designed to keep their users engaged: the more time users spend on a platform, the more ad revenue it generates. Platforms also profile users based on data collected about their interests in order to serve them targeted content and ads.
Social media’s business model collides with a simple fact of human nature: enragement guarantees engagement, so attention-maximising algorithms inevitably boost the most outrageous content. A universe of echo chambers emerges, many of which reinforce extremist beliefs and dangerous misinformation.
A noteworthy amplifier of conspiracy theories between 2015 and 2018 was YouTube, which was eight times more likely to place a flat-earth video in a user’s autoplay queue than any other video. Deciding that freedom of speech does not equal freedom of reach, YouTube implemented complex changes to its algorithm in January 2019 and managed to reduce recommendations of conspiracy theories by 70% within twelve months.
Twitter decreased misinformation about the US elections by 70% practically overnight after banning the sitting president, Donald Trump, and roughly 70,000 QAnon accounts from its platform. The platform also routinely bans accounts linked to state-backed interference.
In February 2021, Facebook banned the Myanmar military from its platform in response to the military coup. Three years earlier, members of the same military had abused the platform to incite the Rohingya genocide.
Social media platforms have finally recognised that their reputations and economic interests lie in creating positive environments that do not burn out users with existential dread, anger and fear. The current discourse around censoring misinformation online ignores this and instead risks further alienating sceptics by creating the perception of government censorship.
Tech companies and governments do not establish truth. Neither do echo chambers. The Digital Services Act demands algorithmic transparency from platforms. The next step must be to build safer algorithms that limit the creation of echo chambers: platforms could boost content from trusted users and websites, allow community moderation, or highlight sensible debates instead of letting one-line insults go viral.
The time has come for the EU to quit speaking softly and enforce data regulation. It must finally put real constraints on user profiling. While tailoring content based on user interests has allowed countless niche creators and businesses to find an otherwise unreachable audience, profiling for and by political interests has given rise to predatory campaign targeting, foreign interference and targeted manipulation through misinformation.
Despite YouTube’s 2019 changes to its autoplay function, a quarter of its most-viewed videos about the 2020 pandemic contained misinformation. This time, traffic was directed from outside the platform, through smaller communities on Facebook, WhatsApp, Twitter and Telegram.
Misinformation no longer needs to prey on its victims by gaming social media algorithms. Instead, it has entered a stage of community transmission, as users proactively share misinformation with friends, colleagues and relatives.
Therefore, regulating online platforms cannot be the only tool in the fight against the spread of misinformation. The EU and its member states must coordinate efforts to increase digital literacy across all age groups and credibly involve citizens across the continent in policymaking. The pandemic-driven acceleration of digital forms of participation can support these efforts.
The Conference on the Future of Europe will be the next big push, but it should be one of many tangible projects to connect citizens and policymakers.