Beyond Human. Trust in Machines and AI.

A public debate on the opportunities and challenges of AI was held on January 22, organized by the Aspen Institute Central Europe in cooperation with Opero and with the support of Microsoft and the Neuron Fund.

The evening commenced with welcoming words from both organizers, Jiří Schneider of the Aspen Institute CE and Pavel Přikryl of Opero. Opening remarks followed from Elliot Gerson, Executive Vice President of the Aspen Institute, who presented the mission, goals and founding principles of the Aspen Institute and of the worldwide Aspen network.

Daria Hvížďalová (JHV-Engineering / Neuron Fund), who chaired the evening, introduced the three speakers and stressed the importance of discussing a topic as complex as AI.

The opening presentation was given by Tomáš Mikolov, Research Scientist at Facebook, who began his talk by listing the most critical questions about AI: how to govern AI research to maximize its benefits, how to safeguard democratic values and democracy, and how to ensure that AI is used to drive social good rather than treating social good as an afterthought. According to Mr. Mikolov, by AI we usually mean machine learning, which is just a subset of AI. He emphasized the role of human biases: when the choice of training examples is biased, a system trained on them copies the biases of its human users, so machine learning often amplifies existing human biases. There is no ideology or racism in mathematical code, only in humans. As with any new technology, no one can guarantee that it won't be misused; understanding, however, is the basis for proper use.
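Mr. Mikolov's point about biased training data can be made concrete with a small, hypothetical Python sketch (not shown at the event): a toy decision rule fitted to skewed historical approvals simply reproduces that skew in its own predictions, even though the code itself contains no prejudice.

from collections import Counter

# Hypothetical historical decisions: (group, approved) pairs produced by
# biased human reviewers. Group "A" was approved far more often than "B".
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

# "Training": estimate the approval rate per group from the biased labels.
approved = Counter(group for group, ok in history if ok)
total = Counter(group for group, _ in history)
rates = {group: approved[group] / total[group] for group in total}

# "Prediction": approve whenever the learned rate exceeds 0.5.
def predict(group: str) -> bool:
    return rates[group] > 0.5

for group in ("A", "B"):
    print(f"group {group}: learned rate {rates[group]:.2f}, approved: {predict(group)}")

Running the sketch approves every applicant from group A (learned rate 0.80) and rejects every applicant from group B (learned rate 0.20): the model has learned the reviewers' bias, not any property of the applicants.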

Cornelia Kutterer, Senior Director EU Government Affairs at Microsoft, responsible for AI & Privacy and Digital Policies, represented the business point of view. One reason we need to understand computers and AI better is that they are beginning to perceive the world much as humans do and can handle tasks such as object and speech recognition, machine reading comprehension or machine translation at almost human level. Ethics is an integral part of the discussion: building an acceptable ethical and empathetic framework for AI will be important to avoid anxiety and fear of its inappropriate use. Microsoft has established the AETHER Committee to identify, study and recommend policies, procedures and best practices for the challenges and opportunities that AI will create. It identified three areas where AI needs special attention: privacy, bias and democratic freedom. There are already good examples of AI in practice; in India, a missing child was found thanks to facial recognition technology. At the regional level, the European Commission has also begun preparing guidance for trustworthy AI and regulation that will be finalized only under the next Commission.

Aleksandra Przegalińska, Assistant Professor at Kozminski University and Research Fellow at the MIT Center for Collective Intelligence, understands AI as an umbrella term that covers many sub-fields. Ms. Przegalińska presented some of her research on chatbots, such as Microsoft's Tay bot, which was taught to be racist by other Twitter users. Another example was a DeepMind system capable of processing different mental states, intentions and emotions: it will not itself be conscious, but it will know that people have consciousness, which embeds a philosophical concept into AI. Chatbots and other humanoids often hit the "uncanny valley", a degree of human likeness that people find odd and no longer want to interact with. Her research also compared a text-only chatbot with one that had an avatar and voice; the latter seemed rather odd to people because it appeared closer to a human being while they knew it was not one. Trust is a very important category when dealing with AI, and the research revealed several dimensions of it, such as expertise, privacy and anthropomorphism. Ms. Przegalińska mentioned algorithmic bias and explainability as the main current challenges.

In the following discussion the speakers tackled the influence of pop culture on the discourse around emerging technologies. It follows a specific narrative, always looking for stories, enemies and drama, which is not helpful for the further development of AI. The discussants agreed that it is better to approach AI with a positive attitude, while always asking about and staying curious about possible scenarios. When discussing people's attitudes, context is extremely important: disabled people could perceive AI largely as a positive, while many industries might see both positives (higher efficiency) and negatives (lower demand for workforce). Depending on the sector, a different level of explainability may also be needed to ensure safety. Even though AI seems rather mysterious, machine learning is very open and inclusive; universities, for example, publish their work as open source. Liability is another question that needs to be addressed, and it is fundamental above all for operating on the market. Some believe the best approach might be to establish a completely new set of laws for AI. Lastly, the important role that ethics should play in AI is often neglected.

The discussants did not, however, agree on how necessary and how strict the regulation of AI should be. The development and the deployment of AI should be differentiated in this respect, and over-regulation that hinders the development of AI could cause a lot of damage in the future.

If you are interested in this topic and wish to learn even more, you can watch the whole debate below.
