Stop blaming AI. Instead, embrace it

Artificial intelligence may be a rising threat to information integrity in news, but that threat began long before generative AI at the hands of lazy reporting and sloppy journalism, experts said.

The Aspen Institute Central Europe and Aspen Digital collaborated on an event on September 26, 2023 to discuss AI and its impact on news and media. I went into the event, as any eager young reporter would, expecting to hear the worst: that AI is dangerous, that robots will one day take our jobs, and that the journalism field is doomed. Instead, I left with the opposite impression.

“Whatever role you have as a journalist, AI is going to be in it, so you need to know about it to do your job well,” Charlie Beckett, specialist in AI and journalism at The London School of Economics and Political Science, advised.

When the generative AI site ChatGPT launched in November 2022, it opened the world up to hands-on AI use. Because of the buzz around the site, generative AI is commonly thought of as the only form of AI, when really "AI" is an umbrella term. Various technologies and algorithmic functions, none of which are new to the tech world, exist under this umbrella; generative AI is just one of the newest. The main threat of generative AI is its inaccuracy. AI has creativity, but this "creativity" often leads to mis- and disinformation. Simply put, it likes to make things up. It is incapable of distinguishing fact from fiction when instructed to generate original content. So while it is scary to see this technology generate disinformation with ease, it is important to acknowledge the plethora of disinformation that already exists in the mediascape without AI's help.

“I was quite surprised with how much disinformation was out there, involuntarily spread and produced by humans, by journalists who would get something wrong, feeding into their own confirmation bias,” Tanit Koch, a columnist for The New European, revealed.

In this context, the panelists agreed that while generative AI does threaten information integrity to a degree, journalists' own fallibility posed a threat to trustworthy reporting long before it. AI was created to do what humans do, which naturally includes the ability to make mistakes.

“Yes it’s biased, yes it makes mistakes, but journalists do that a lot as well,” Beckett admitted.

In terms of mis- and disinformation, AI cannot be held solely responsible. As long as democratic systems exist, disinformation cannot be eradicated, because it is protected by free speech.

“If you want pluralism, if you want a variety of opinions in society, you cannot use regulation so stringently that the problem of disinformation will go away,” Johann Laux, tech expert at the University of Oxford, explained.

After we have accepted that humans make mistakes, AI makes mistakes, and disinformation is here to stay, we can shift the focus to a solution-based discussion.

Historically, journalism has always needed to adapt to advancing technology to maintain relevance.

The goal in journalism is to work quickly and diligently to tell a story. Truthful reporting, fact checking, and source protection will always be essential criteria in journalism.

The panelists agreed that AI is just the newest innovation to be embraced and utilized for these purposes, rather than feared and vilified.

“As a person dealing with the media, you have an incredible power in checking what the media is doing. So actually, this makes the industry faster, more savvy about what we do, and more careful and precise in our work,” Koch stated.

The panelists highlighted AI’s efficiency and cost-effectiveness in optimizing resources. They mentioned several specific functions that aid and expedite research, such as archival search, which uses keyword instructions to identify and pull records that would otherwise take journalists months to uncover.

The generative function doubles as a useful rough-draft editor capable of fixing grammatical errors and typos without needing a human editor.

Language models are another recent development; they have allowed the media to translate news and audio into numerous languages, reaching a wider audience.

“Suddenly, the world is your marketplace. And suddenly, you don’t have to be able to speak languages like English or German to be able to understand English or German journalism,” Beckett said.

AI gives journalists and media great power, which means great responsibility must follow.

Regulations ensure this power is used responsibly, so that all users can enjoy the technology’s benefits while being protected from its dangers, the panelists agreed. The regulations are based on ethical AI usage rather than content production. When AI is used responsibly and transparently, as a tool rather than a source, the content produced should be trustworthy.

“Users are less interested in whether the text was AI generated or not. They are more concerned with trust, whether they can trust the information,” revealed Michal Pěchouček, technology executive at Gen Digital and professor of AI at the Czech Technical University (CVUT).

Though AI’s efficiency and generative abilities may be threatening, journalists can rest assured that the risks now are the same risks as before, just in a new form. And while AI can be weaponized and misused, it can also be utilized by journalists to combat those threats. Journalists have a responsibility to experiment with various AI functions and familiarize themselves with the technology to effectively combat actors abusing it.

The purpose of AI has always been to efficiently mimic certain human capabilities such as understanding, sensing, reasoning, data processing and decision making. The fear is that AI will potentially outsmart humans in these areas.

“AI is overcoming human communication capabilities…because AI progressed to language and generative AI. Until recently, AI was analytical…they now analyze and generate,” Pěchouček said.

The future of AI remains unknown, and this unpredictability is where the pending threat lies. It presents opportunities for both innovation and abuse.

“The greatest AI danger is the same place as the greatest AI opportunity,” Pěchouček concluded.

The fight is not against emerging AI tech; the fight lies in who is best utilizing it. Journalists should continue being held accountable for their own reporting without blaming their mistakes entirely on generative AI. The technology can benefit journalism if the step is taken to understand and embrace it.

How does that age-old saying go?

Keep your friends close and your enemies closer.

Alexa Wandersee is an American journalist based in Prague, Czech Republic, currently studying for a B.A. in Journalism and Media Studies at Anglo-American University.
