Taming the Beast

The disruptive power of the new media, perversely called “social,” has been building up for years. Only now are Western democracies starting to understand its capacity to distort reality. It is high time we did something about it.

In many ways, the 2016 US presidential election was unique. Not only did the elected president, Republican candidate Donald Trump, receive three million (!) fewer votes than his defeated rival, Democrat Hillary Clinton, but something even more unprecedented happened: probably for the first time in the history of democratic campaigning, lies and untruths dominated the public debate, depriving facts of meaning.

This is not an inflated rhetorical figure. In 2016 the lies really did trump the facts. They did it online: according to the (now famous) BuzzFeed investigation, the 20 top-performing fake stories from hoax sites and partisan blogs scored 8,711,000 shares, reactions, and comments on Facebook, the world’s biggest social platform. Over the same period, the 20 best-performing election stories from 19 major news websites generated a total of 7,367,000 shares, reactions, and comments on Facebook. Given that social media, together with other Internet platforms, are the most important source of information for many Americans (in fact, they are the number one medium for 18–29-year-olds), we can only guess how decisive the flood of falsehoods was for the actual result of the election.

Post-truth communication is not confined to the United States of America. Online social platforms (Facebook, Twitter, etc.) are gaining importance in Europe too. As a matter of fact, no other medium has recorded a similar increase in popularity in recent years. In November 2015, a majority of EU citizens considered the Internet to be the third most important source of information (just after TV and radio), with 59 percent of them using it every day. In autumn 2015, roughly half of Europeans used social media at least once a week, a whopping 15 percentage points more than in 2011.

Just as in the US, social media in Europe swarm with untruths and deceptions. During the pre-referendum campaign in Great Britain, online platforms spread false information about Britain’s presence in the EU (such as the false claim that UK membership costs British taxpayers 350 million pounds weekly). More recently, in the run-up to the Italian constitutional referendum, the online debate was poisoned by a flood of fake news pumped out by sites linked (or sympathetic) to the populist Five Star Movement. To a large extent, the same phenomenon could be observed during the 2015 electoral campaign in Poland. One can only wonder how distorting the fake content must have been, given that social platforms are held in high esteem in Poland: according to the latest available Eurobarometer study, as many as 53 percent of Poles believe they are reliable.

The problem of the post-truth communication environment has been gaining importance in recent months. Prominent political leaders, including US President Barack Obama and German Chancellor Angela Merkel, have raised questions about the possible negative impact of the uncontrolled flood of fake news.

The question is, what can be done about it?

Two solutions seem possible. One can be applied relatively quickly; the other would require more time but could yield more lasting results.

The first solution could be called a “communication counteroffensive.” Many institutions and actors suffer, directly or indirectly, the consequences of the post-truth communication environment. Take the European Union as an example. Radical populist forces in Europe differ from one another, but they have one thing in common: all of them aim at weakening, if not outright destroying, the EU (as in the case of the French Front National). The pre-referendum campaign in Britain is just one example of how efficiently social media can be used to spread anti-European propaganda based on lies. For most of the campaign, the “Leave” camp was able to set the momentum of the online debate with its emotional and entertaining messages (as colorful lies can be). Available research shows that on Twitter the “Leave” campaign beat “Remain” by a staggering margin: for every single pro-EU tweet, there were seven pro-Leave ones. On Instagram (one of the social platforms said to be civilized and hate-free), 35 percent of referendum-related posts were pro-Leave; only seven percent urged a “Remain” vote.

That is symbolic. It illustrates well that the “Remain” camp lost the communication battle. Lessons from this failure should be drawn by all political actors, institutions, non-governmental organizations, citizens, and everyone interested in defending the truth in public debate and, in parallel, in defending the political constructs of liberal democracy (the EU is just one example). Those constructs are under attack from populists, who should be confronted with the same level of engagement and with similar emotions. Pro-European campaigns must become more visible and better targeted at the same time. The EU’s presence in the new media should be multiplied, and the content of pro-European communication must be tailored to the expectations of the target audiences. All this requires more resources (both human and material), spent first on researching and then on reaching people through the well-known quartet of “owned,” “shared,” “earned,” and “paid” media. Above all, the new communication effort must focus on carefully chosen, resonant issues and must be sustained over a long period. It cannot be yet another flash in the pan.

The second solution would require much more time and most probably incomparably more political will, public discussion, and industry consultation. It would offer bigger rewards, though. The solution in question is… regulation.

When radio and, a few decades later, television entered the market, they provoked a deep revolution in mass communication. Unlike the “social media,” both were subjected to relevant laws right from the outset. The US Radio Act was passed as early as 1927, only five years after the wireless became popular in America (radio stations were, moreover, subject to licensing from the very beginning). When the first regular television broadcasts went on air in 1928, the legal framework had already been set and a federal regulator, the Federal Radio Commission (succeeded in 1934 by the Federal Communications Commission), existed. Licenses were required and content was subject to verification. When TV became popular, as it did in the United States in the 1950s, the “decency standards” were thoroughly enforced.

Although “social media” cannot be compared simplistically to the “old” media, they share some fundamental traits. From the business-model angle, the resemblances are striking: as Grzegorz Piechota has explained, they aggregate audiences and extract value by selling advertisers access to those audiences. Business-wise, Facebook is thus no different from other media companies. Content-wise, there are similarities too. Facebook does not generate content of its own, but its users profit from news feeds curated by the platform’s algorithms, just as old-fashioned readers profit from human-edited news and opinion.

There are several aspects of the social platforms’ functioning that need to be regulated, or at least made transparent.

First comes the urgent issue of verifying the truth of news content. No “old” medium (the press, TV, or radio) would dare to publish blatant lies constantly, for fear of possible legal and financial consequences (not to mention lost credibility). Should a TV station engage in hate speech or repeated defamation, for example, the media market regulator in most democratic countries would intervene. If steps are not taken on a voluntary basis, states are left with no choice: the social platforms should be obliged to inform their users that posted content is unverified (or false). In the most unambiguous situations, social platform administrators should edit or block defamatory and dangerous content.

Of course, one could say that the online platforms’ terms of use, accepted by default by every user, already allow administrators to block content deemed illicit. The point is that these rules are neither transparent nor reasonably applied. Regulating the platforms’ current discretion in blocking or removing content seems particularly important in the wake of the notorious case of the Norwegian daily Aftenposten, whose profile was censored by Facebook after it published the famous picture of the Vietnamese girl Phan Thi Kim Phúc, burned by napalm in an American attack in 1972.

The Aftenposten case is a symbol of the platforms’ irrational and excessive reaction to published content. Yet there are also numerous examples of insufficient reaction (or the total lack of it). The problem goes back a long way: as early as 2001, it took a French court order to make Yahoo (then a very popular web platform) block French users from viewing Nazi memorabilia put up for auction on Yahoo’s sites. To sum up: proper regulation is needed both to ensure that appropriate action is taken and to stop potential abuse of censoring powers. The platforms’ increased legal responsibility should also translate into easier and more transparent access for citizens (or any other potential plaintiff) to the relevant judicial mechanisms.

The other problem to be regulated is the question of anonymity. Some social platforms apply a “real name” policy; some do not. Again, the rules are often flouted. Unsolicited impersonations pose real problems, as do anonymous profiles spreading false and sometimes offensive content. It is quite paradoxical that in Europe today one cannot buy a prepaid SIM card anonymously, yet one can communicate anonymously on social platforms.

The problem of “silo creation” must be urgently addressed too. Today’s social platforms, especially Facebook (the biggest of them all), apply algorithms that decide what content is shown in the news feed of any given user. The exact construction and functioning of these algorithms is a well-guarded corporate secret. What we do know is that the algorithms respond to users’ previous content choices and end up aggregating like-minded groups of people, firmly locked in “bubbles” or “silos.” To simplify: if a given user chooses particular content (radical right opinions, for example), after a certain time his or her news feed will automatically show only that kind of content. To make things worse, at no point is the user asked to accept this automatically imposed lock-in. So, if such a person is unaware of the mechanism, he or she might be genuinely convinced that the social platform is dominated by people sharing his or her views, which is not necessarily true.
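The feedback loop behind such silos can be illustrated with a toy simulation. This is a hypothetical sketch, not Facebook’s actual (secret) algorithm: a ranker that boosts whatever category a user previously clicked quickly converges on showing almost nothing else.

```python
import random

random.seed(42)  # deterministic run for illustration

CATEGORIES = ["left", "right", "sports", "culture"]

def ranked_feed(weights, k=5):
    """Sample k feed items, favoring categories the user engaged with before."""
    cats = list(weights)
    return random.choices(cats, weights=[weights[c] for c in cats], k=k)

# Start neutral: every category is equally likely to appear.
weights = {c: 1.0 for c in CATEGORIES}

# Simulate a user who only ever clicks "right"-leaning items;
# each click boosts that category's weight in the next feed.
for day in range(30):
    for item in ranked_feed(weights):
        if item == "right":          # the user's sole interest
            weights[item] *= 1.2     # engagement feedback

share = weights["right"] / sum(weights.values())
print(f"Share of 'right' content after 30 days: {share:.0%}")
```

Because engagement multiplies the weight while the other categories stay flat, the “right” share approaches 100 percent within weeks, and at no point does the simulated user opt in to the narrowing.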

The list of contentious issues that might require some sort of regulation is much longer. It includes the social platforms’ relations with the “old” media and the use of personal data gathered while users roam the platforms. The usage and copyright protection of user-generated content (such as the photos and videos posted on social platforms) should also be addressed.

It is not going to be easy, but as time passes it becomes more and more necessary. As I said at the beginning, social media are becoming the dominant mass communication tool. Demographic trends suggest that they will overtake the “old” media in the coming years, and they will achieve this while operating in a legal void.

It is no accident that TV and radio were heavily regulated from the outset. Apparently, the founding fathers of modern Western democracies understood the crucial role of mass communication better than we do.

Konrad Niklewicz

Konrad Niklewicz is currently a Visiting Research Fellow at the Wilfried Martens Centre for European Studies, managing director of the Civic Institute, and a guest lecturer at the University of Warsaw. He previously served as Spokesperson for the Polish Presidency of the Council of the EU and as Undersecretary of State at the Ministry of Regional Development. Earlier he was a journalist and editor at Gazeta Wyborcza.
