{"id":28731,"date":"2019-01-29T00:00:00","date_gmt":"2019-01-28T23:00:00","guid":{"rendered":"https:\/\/aspeninstitutece.softmedia.cz\/news-article\/beyond-human-trust-in-machines-and-ai\/"},"modified":"2019-01-29T00:00:00","modified_gmt":"2019-01-28T23:00:00","slug":"beyond-human-trust-in-machines-and-ai","status":"publish","type":"news-article","link":"https:\/\/www.aspeninstitutece.org\/cs\/news-article\/beyond-human-trust-in-machines-and-ai\/","title":{"rendered":"Beyond Human. Trust in Machines and AI."},"content":{"rendered":"<p>A public debate on the opportunities and challenges of AI\u00a0was held on January 22, organized by the\u00a0<strong>Aspen Institute Central Europe<\/strong>\u00a0in cooperation with\u00a0<strong><a href=\"https:\/\/opero.cz\/en\" target=\"_blank\" rel=\"noopener noreferrer\">Opero<\/a><\/strong>\u00a0and with the support of\u00a0<a href=\"http:\/\/www.microsoft.com\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>Microsoft<\/strong><\/a>\u00a0and\u00a0the <strong><a href=\"https:\/\/www.nfneuron.cz\/en\" target=\"_blank\" rel=\"noopener noreferrer\">Neuron Fund<\/a><\/strong>.<\/p>\n<p>The evening commenced with welcoming words by both organizers,\u00a0<strong>Ji\u0159\u00ed Schneider<\/strong>\u00a0from the Aspen Institute CE and<strong>\u00a0Pavel P\u0159ikryl<\/strong>\u00a0from Opero. 
Opening remarks followed from\u00a0<strong>Elliot Gerson<\/strong>, Executive Vice-President of the Aspen Institute, who presented\u00a0the mission, goals and founding principles of the Aspen Institute and the worldwide Aspen network.<\/p>\n<p><strong>Daria Hv\u00ed\u017e\u010falov\u00e1<\/strong>, JHV-Engineering \/ Neuron Fund, who chaired the evening, introduced the three speakers and spoke about the importance of discussing a topic as complex as AI.<\/p>\n<p>The opening presentation was given by\u00a0<strong>Tom\u00e1\u0161 Mikolov<\/strong>, Research Scientist at Facebook, who began his talk\u00a0by listing the most critical questions about AI: how to govern AI research so as to maximize its benefits, how to safeguard democratic values, and how to ensure that AI drives social good rather than treating it as an afterthought. According to Mr. Mikolov, by AI we usually mean\u00a0<em>machine learning<\/em>, which is just a subset of AI. He emphasized the\u00a0<em>role of human biases<\/em>: when the choice of training examples is biased,\u00a0<em>self-training systems can copy the biases of their human users<\/em>. Machine learning therefore often\u00a0<em>amplifies existing human biases<\/em>. There is no ideology or racism in mathematical code, only in humans. As with any new technology, no one can guarantee that it won\u2019t be misused. However,\u00a0<em>understanding is the basis<\/em>\u00a0for proper use.<\/p>\n<p><strong>Cornelia Kutterer<\/strong>, Senior Director EU Government Affairs at Microsoft, responsible for AI &amp; Privacy and Digital Policies, represented the business point of view. 
One of the reasons why we need to understand computers and AI better is that they are\u00a0<em>beginning to understand the world as humans do<\/em>\u00a0and can handle, for example,\u00a0<em>object and speech recognition<\/em>,\u00a0<em>machine reading comprehension<\/em>\u00a0or\u00a0<em>machine translation<\/em>\u00a0at almost the same level as humans.\u00a0<em>Ethics<\/em>\u00a0is also an integral part of the discussion, in the sense of building an acceptable ethical and empathetic framework for AI. This will be important to\u00a0<em>avoid anxiety and fear<\/em>\u00a0of inappropriate uses of AI. Microsoft has established the\u00a0<em>AETHER Committee<\/em>\u00a0to identify, study and recommend policies, procedures and best practices for the challenges and opportunities that AI will create. The committee identified three areas where AI needs special attention:\u00a0<em>privacy, bias and democratic freedom<\/em>. There are already good examples of AI\u00a0in practice: in India, a missing child was found thanks to facial recognition technology. On the regional level, the European Commission has also started to prepare guidance for trustworthy AI, with regulation that will be finalized only by the next Commission.<\/p>\n<p><strong>Aleksandra Przegali\u0144ska<\/strong>, Assistant Professor at Kozminski University and Research Fellow at the MIT Center for Collective Intelligence, understands\u00a0AI as just an\u00a0<em>umbrella term<\/em>\u00a0that covers many sub-fields. Ms. Przegali\u0144ska presented some of her research on\u00a0<em>chat-bots<\/em>, such as Microsoft\u2019s Tay bot, which was taught to be racist by other Twitter users. Another example was work at DeepMind on systems capable of modelling different mental states, intentions and emotions. Such a system will not itself be conscious, but it will know that people have consciousness, which embeds a philosophical concept into AI. 
Chat-bots and other humanoids often fall into the <em>\u201cuncanny valley\u201d<\/em>: a degree of human likeness that people find unsettling and no longer want to interact with. The research also compared a text-only chat-bot with one that had an avatar and a voice. The latter seemed rather odd to people, since it appeared closer to a human being while they knew it was not one.\u00a0<em>Trust<\/em>\u00a0is also a very important category when dealing with AI. The research revealed several dimensions of trust, such as\u00a0<em>expertise, privacy or anthropomorphism<\/em>. Ms. Przegali\u0144ska mentioned\u00a0<em>algorithmic bias<\/em>\u00a0and\u00a0<em>explainability<\/em> as the main current challenges.<\/p>\n<p>In the\u00a0<strong>following discussion<\/strong>\u00a0the speakers tackled the\u00a0<em>pop culture<\/em>\u00a0that is influencing the discourse around emerging technologies. Pop culture follows a specific narrative, always looking for stories, enemies and drama, which is not helpful for the further development of AI. The discussants agreed that it is better to approach AI with a\u00a0<em>positive attitude<\/em>, while always asking questions and staying curious about possible scenarios. When discussing people\u2019s attitudes,\u00a0<em>context<\/em>\u00a0is extremely important. People with disabilities could perceive AI largely as a positive, while many industries might see both positives (higher efficiency) and negatives (lower demand for labor). Depending on the sector, a different level of explainability might also be needed to assure safety. Even though AI seems rather mysterious, machine learning is\u00a0<em>very inclusive<\/em>: universities, for example, publish their work as open source.\u00a0<em>Liability<\/em>\u00a0is another question that needs to be addressed; it is fundamental above all for operating in the market. Some believe that the best approach might be to establish a completely new set of laws for AI. 
Lastly, the important role that\u00a0<em>ethics<\/em>\u00a0should play in AI is often neglected.<\/p>\n<p>The discussants did not, however, agree on the\u00a0<em>necessity and strictness of AI regulation<\/em>. The development and the deployment of AI should be treated separately in this respect, and over-regulation that hinders the development of AI could cause a great deal of damage in the future.<\/p>\n<p>If you are interested in this topic and wish to learn even more, you can\u00a0<strong>watch the whole debate below.<\/strong><\/p>\n<p><iframe loading=\"lazy\" title=\"Beyond Human. Trust in Machines and AI. (whole debate)\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/VYfcX7ji2pc?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A public debate on the opportunities and challenges of AI was held on January 22 at 
Opero.<\/p>\n","protected":false},"featured_media":18319,"template":"","meta":{"_acf_changed":false},"news-tag":[588],"class_list":["post-28731","news-article","type-news-article","status-publish","has-post-thumbnail","hentry","news-tag-sit-nextgen"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.aspeninstitutece.org\/cs\/wp-json\/wp\/v2\/news-article\/28731","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.aspeninstitutece.org\/cs\/wp-json\/wp\/v2\/news-article"}],"about":[{"href":"https:\/\/www.aspeninstitutece.org\/cs\/wp-json\/wp\/v2\/types\/news-article"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aspeninstitutece.org\/cs\/wp-json\/wp\/v2\/media\/18319"}],"wp:attachment":[{"href":"https:\/\/www.aspeninstitutece.org\/cs\/wp-json\/wp\/v2\/media?parent=28731"}],"wp:term":[{"taxonomy":"news-tag","embeddable":true,"href":"https:\/\/www.aspeninstitutece.org\/cs\/wp-json\/wp\/v2\/news-tag?post=28731"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}