Disinfo Competition – Taming the Trolls and Algorithms

Disinformation has always been here. The digital transformation has dramatically increased its impact and spread. The reach and influence of social and online media is now greater than that of any traditional media outlet or any government in the world. Algorithms govern today’s society and will do so even more in the future. Welcome to the reign of algorithms, and welcome to the disinformation competition.

On 24–25 September 2019, Aspen Institute CE brought more than 40 distinguished professionals, researchers, experts and practitioners to Prague, mainly from Central European countries and with the most diverse perspectives possible. Over one day, beginning with an evening public event, they discussed the current state of disinformation and its possible future development. Particular attention was paid to the digital information environment and to disinformation techniques, including the role of online and social media in people’s perception and acceptance of disinformation. The participants also explored tools and measures that can help foster the resilience of societies and individuals against manipulated information.

The presentations were followed by a discussion of today’s media and the overall disinformation landscape, in which the speakers and the audience made the following points. The reach and influence of social and online media is now greater than that of any traditional media outlet or any government in the world. Propaganda and disinformation mostly start on the web and are amplified on social media (mainly Facebook). Online and traditional media rarely reflect on, or adapt to, topic-based microtargeting and the atomization caused by social networks serving users only the content they want to see. The resulting negative feedback loop erodes trust in the media and in institutions in general. Social media algorithms are driven by AI and by the data the platforms collect. One view is that algorithms govern today’s society and will do so even more in the future; it is therefore important to be able to protect ourselves and our societies. The principles on which social networks operate may contradict the principles societies have developed over the last centuries.
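
To illustrate the atomization point above with a minimal, hypothetical sketch: a recommender that always serves the topic a user has engaged with most, and then counts that engagement again, locks the user into a single-topic bubble. The topics, numbers and ranking rule here are invented for illustration; real feed algorithms weigh far more signals.

```python
from collections import Counter

# One user's engagement history across topics (invented numbers).
history = Counter({"politics": 3, "sports": 2, "science": 1})

def recommend(history: Counter) -> str:
    """Serve only the user's most-engaged topic; never explore others."""
    top_topic, _ = history.most_common(1)[0]
    return top_topic

# Each recommendation generates more engagement with the same topic,
# so the feedback loop narrows the feed to a single-topic bubble.
for _ in range(10):
    history[recommend(history)] += 1

print(history)  # Counter({'politics': 13, 'sports': 2, 'science': 1})
```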


One important point that came up in the discussions of the complex problem of disinformation was that “it was critical to focus on the importance of protecting free expression and freedom of information in the digital space”. No regulation will ever serve the good of any society if it allows fundamental rights to be violated. Blocking or banning media outlets is not an answer to the phenomenon of disinformation and propaganda, as it can easily slide into censorship.

Furthermore, “it is an imperative for governments to bring the fight against disinformation to their national security strategies and actually bring the struggle against disinformation to the executive cabinet-level”. This plays a crucial role in securing democracy, and not only in fighting attacks but in preventing them before they happen. “Objective reporting, independent news and information to a broad group of citizens are ultimately the last line of defense when it comes to countering disinformation”, another speaker stated during the debate.

It is very important to think proactively about developing digital literacy and civic education programs that can help people be better prepared for the information they are going to encounter online.

The Reign of Algorithms

The algorithms of social networks are in charge of spreading the content we create. Every algorithm that tries to deliver the desired content, keep the user on the platform as long as possible and monetize the time spent there also has pernicious side effects. Twitter rewards whoever shouts loudest, which is easily manipulated by bots and disinformation spreaders (the automated amplification effect); YouTube’s autoplay function serves up ever more extreme content to keep you watching (the extremization effect); and Facebook encapsulates users in content bubbles, reinforcing their views with hyper-targeted content (the radicalization and atomization effects).
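
As a loose illustration of the automated amplification effect, consider the following hypothetical sketch: a feed that ranks posts purely by raw engagement counts can be outvoted by a small group of coordinated bots. The post names, counts and ranking rule are invented; real ranking systems are vastly more complex.

```python
import random

random.seed(42)  # reproducible toy example

POSTS = ["organic news story", "disinformation post"]

def rank_feed(n_humans: int = 1000, n_bots: int = 50) -> list[str]:
    """Rank posts by raw engagement; bots coordinate on one target post."""
    engagement = {post: 0 for post in POSTS}
    # Humans spread their engagement roughly evenly across the posts.
    for _ in range(n_humans):
        engagement[random.choice(POSTS)] += 1
    # Each bot fires many automated interactions at the same target post.
    for _ in range(n_bots):
        engagement["disinformation post"] += 20
    # An engagement-maximizing ranker surfaces the boosted post first.
    return sorted(POSTS, key=engagement.get, reverse=True)

print(rank_feed())  # ['disinformation post', 'organic news story']
```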

The algorithms of social networks currently drive the distribution of content, which is still created by humans. With the development of technology and AI, however, we are slowly approaching an era in which machine-generated content, whether text, video or images, will be hard to distinguish from human creation. This will change how content is created and distributed: the process will be fully automated and built on personalization and microtargeting for each user. Fake news then becomes a more fundamental threat, because it will be based on an algorithm that improves itself and produces output that looks much more like trustworthy, human-created content. The main question will be the role of humans in an automated, AI-driven society; our individual (human) integrity should be addressed as an issue.


Objectively, disinformation has always been here, and it can be identified by focusing on various telltale signs, such as dubious sources, no separation of opinion from information, a lack of facts, an absence of corrections, or amplification on social networks and other platforms. People tend to think that the news we disagree with is disinformation.

We all tend to think that it is others who are misled by fake news and believe in disinformation; in fact, all of us are vulnerable to it. The most important thing is to acknowledge that you can be manipulated as well.

We Really Can Combat Disinformation

There are several positive experiences supporting the claim that we really can combat disinformation, although it is not easy. Doing so requires a system, or a set of tools and approaches, addressed by media houses and journalists on the one hand, and by governments, the public sector and the private sector on the other. The following recommendations were made by five working groups with an inspiring mix of participant backgrounds:


The media should:

—  focus on quality journalism, cover challenging topics, show and highlight sources, raise the standards of journalism and increase trust in the media through sound journalistic processes,
—  highlight and multiply content across various platforms, and change the public’s perception of obtaining quality information,
—  find new ways and technologies for fact-checking, such as real-time fact-checking in TV shows and online (a minimal claim-matching sketch follows this list),
—  explain and point out facts using infographics, images and statistics,
—  tell people what fact-checking is,
—  respond faster and be proactive (in facing entities or governments that are highly adaptive in using digital technologies to spread disinformation),
—  bring more diversity to the media market and expand media services to areas without access to information.
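
On the fact-checking technologies point above, a minimal sketch of one possible building block, matching an incoming statement against a database of already verified claims with fuzzy string similarity, might look like the following. The claims, verdicts and threshold are invented; real systems rely on curated databases and semantic matching rather than character-level similarity.

```python
from difflib import SequenceMatcher

# Tiny invented database of already fact-checked claims and verdicts.
FACT_CHECKS = {
    "vaccines cause autism": "False: debunked by large-scale studies.",
    "the eu banned curved bananas": "False: a long-running myth.",
}

def match_claim(statement: str, threshold: float = 0.6):
    """Return (known claim, verdict, similarity) for a close match, else None."""
    statement = statement.lower()

    def similarity(claim: str) -> float:
        return SequenceMatcher(None, statement, claim).ratio()

    best = max(FACT_CHECKS, key=similarity)
    score = similarity(best)
    if score >= threshold:
        return best, FACT_CHECKS[best], round(score, 2)
    return None

print(match_claim("Vaccines are causing autism"))
```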

The governments and politicians, in cooperation with the public and private sectors, should:
—  not be too restrictive in regulating information,
—  primarily guard freedom of speech,
—  label legitimate media to distinguish them from disinformation spreaders,
—  work on increasing the media literacy of various target groups,
—  support the development of fact-checking technologies,
—  create a fact-checking working group that brings together various fact-checking organizations, traditional media, reporters and even government ministries to combine resources and expertise and to promote cooperation,
—  look for local representatives in villages and towns to monitor their communities for the impact of fake news,
—  when regulating, focus on the regulation of social media algorithms,
—  focus on how to demonetize disinformation sites to reduce their ability to spread content,
—  support tools and programs that build up trust in institutions and political parties (e.g. by supporting codes of conduct on not using disinformation or bots),
—  involve a governmental cybersecurity strategy team,
—  promote fact-checking and raise awareness around elections (using campaigns, fact-checking working groups, counter-speech groups, online campaigns targeting disinformation consumers, etc.),
—  involve tech companies and scientists in handling these issues.

Code of Conduct, Shining for Better Times

A separate discussion was dedicated to the possibility of developing and establishing a Code of Conduct to be agreed on by all political parties, promising not to use disinformation, bots, trolls, or amplification tools. The workshop participants agreed that even if such a Code of Conduct is not legally binding, and there is little hope that all parties will adhere to it, it would still be an important positive step, real progress, and an opportunity to raise awareness.

Several reasons were given as to why even anti-democratic parties would sign such a Code of Conduct (for example, the PR value of “protecting the country and citizens” against disinformation). Observance of the Code might initially be supervised by the public, experts, civil society or other political parties, without any legal force or penalties. It may develop over the years in small steps; in the future, for example, funds and airtime on TV during political campaigns could be tied to compliance with the Code of Conduct.

Disinformation has always been here; the digital transformation has dramatically increased its impact and spread. As disinformation in the digital world erodes the roots of democracy more than ever before, it has become all the more important to understand the role and every aspect of digital technology and AI, so that they can be used to face disinformation effectively and defend democratic principles.

Democratic states and societies have to increase their ability to protect themselves. In doing so, however, fundamental rights have to be guarded, and the positive impact of digital technologies must not be threatened: digital technologies empower freedoms such as free access to information, the public’s right to know, and the right of individuals to seek and receive information and ideas of all kinds regardless of borders. These must not be violated. States should promote a free, independent and diverse communication environment, including media diversity, which is a crucial tool for addressing disinformation and propaganda.

The impact of digital disinformation spreaders, be they unfriendly states or non-state actors, has to be minimized. Any restrictions and regulations imposed on the right to freedom of expression and freedom of the media must comply with international law. Countries and societies also have to look for innovative solutions that can react in time and appropriately to the malicious use of digital technology, be it disinformation that manipulates individuals and distorts public opinion, or any other cyber threat.

The workshop was organized by Aspen Institute CE as part of Tech and European Society, a series of conferences, seminars and workshops organized by Aspen Institute Germany, Aspen Institute Spain and Aspen Institute Central Europe that looks at the societal impacts of digital technologies and AI.

Read also the interview with one of the workshop speakers, Jamie Fly, the President and CEO of Radio Free Europe / Radio Liberty, or watch a video with the speakers’ interviews at aspn.me/WatchDisinfo.

Jenda Žáček

is a freelance brand strategist, consultant, lecturer, and communicator. He helps others with development and company strategy, campaigns, communications and PR, political marketing and NGOs. He joined the Aspen Institute in 2016 and was responsible for overall communications and the 2017 rebranding. He is now the Publishing Editor of the Aspen Review and led the magazine’s redesign.

In the past, Jenda served as the Spokesperson of Junák—Czech Scouting, the Head of the PR department and Spokesperson of the Czech Ministry of Agriculture, and the Head of Communication and Spokesperson of the Czech Green Party. Today he works freelance. Jenda is active in the fields of communication studies and media ownership. He graduated from the Faculty of Social Sciences at Charles University in Prague in marketing communication and public relations, and in media studies.
