
Ukrainians eager to adopt new technology to verify information
Within just a few days of its launch, more than 400 people had applied for a course on how to use artificial intelligence to verify information and detect manipulation, twice the programme's capacity. The demand for AI as a convenient factchecking tool likely reflects a broader rise in media literacy and critical thinking in Ukraine. Evaluating facts has become a daily routine for many since the start of Russia's full-scale invasion.
Since 2022, the information war has been an integral part of the full-scale conflict against Ukraine, featuring fake news, disinformation, manipulative content, and narratives aimed at polarising society. Through online outlets and social media, Ukrainian audiences have been flooded with toxic content ranging from falsified frontline reports to narratives questioning the stability of partnerships with European allies and attempts to lure people into totalitarian sects based in Russia and China.
In response, Ukrainians have had to strengthen their ability to decipher media content. International support programmes, including projects implemented by IMS, play a key role in enabling Ukrainians to improve their information hygiene and maintain access to quality journalism.
As Roman Kifliuk, IMS's national advisor for Ukraine, notes, citing national research by the NGO Detector Media, the overall media literacy index rose significantly after the start of the full-scale war: from 55% in 2021 to 81% in 2022, an increase of 26 percentage points. Since then, the index has remained high and has not returned to 2021 levels. The lowest media literacy scores are seen among those aged 56–65, people with lower levels of education, and those with minimal financial means, groups often targeted by information attacks.
“A significant portion of today's content, especially toxic material, is created using artificial intelligence technologies. Audiences now face greater difficulty distinguishing facts from AI-generated imitations. AI can fabricate realistic images, videos, and voice recordings, which poses a challenge. At the same time, the only way to ensure access to reliable information is to master these technologies and cultivate the habit of factchecking. AI can address a range of tasks and serve the interests of audiences,” commented Roman Kifliuk. According to 2025 research, more than 72% of respondents do not use AI and are therefore unaware of the opportunities the technology offers.
AI Academy students develop a new culture of technology use
In the spring of 2025, when teams from the Centre for Democracy and Rule of Law (CEDEM), the national media literacy project “Filter” and IMS announced the first course at the AI Academy ‘PROMPTO,’ the organisers received over 400 applications within three days.
“Before the programme began, we anticipated strong interest and a clear need, but we did not expect to receive so many applications,” said Anna Nesterenko, coordinator of the national media literacy project “Filter.” “At one stage, we had to pause accepting participants, expand the student group to 200, and invite everyone else to join subsequent course intakes.”
According to Anna Nesterenko, “Filter”—a project led by Ukraine’s Ministry of Culture and Strategic Communications—coordinates key initiatives and brings together media literacy stakeholders to raise awareness among all citizen groups on mindful content consumption. It develops practical educational materials that help Ukrainians distinguish quality journalism and factual reporting from fake and manipulative content. One of Filter’s priorities is promoting responsible AI usage among Ukrainians.
At the AI Academy ‘PROMPTO,’ participants are not only taught but also actively involved in creating factchecking content and publishing media and AI literacy materials, which helps spread knowledge of responsible AI use to a wider audience. Approximately a quarter of participants work in education, and another quarter in the public sector. Their ages range from 18 to 62.
The Academy’s programme covers a wide range of topics:
- How disinformation spreads in the digital age: the roles of social media, AI, algorithms and bots.
- AI tools to counter disinformation and verify facts.
- Prompting basics for AI.
- Creating visual content with AI.
- AI regulation worldwide: overview of international standards and Ukrainian legislation.
- Legal aspects: accountability for AI-created content.
- Ethical considerations for developing AI-based educational content.
“For us, it was important to make this training very practical, so the programme includes not only theory but also numerous practical assignments, real case analysis, and discussions,” explained Oleksiy Tretyakov Grodzevych, project manager at CEDEM. “We also included essential modules on AI regulation in the EU, as Ukraine moves toward regulatory alignment with the EU. Another focus is using AI-generated content in compliance with copyright principles.”
Education is part of Ukraine’s AI strategy
As noted by Olesia Kholopik, CEDEM director, the educational initiative and the launch of the AI Academy are one element of CEDEM's broader work on aligning AI regulation in Ukraine with EU standards.
“We are very grateful to IMS, our partners and donors for enabling us to continue our activities despite challenges, supporting democratic processes in the country, and developing a shared experience with European partners in technology use,” emphasised Olesia Kholopik.
Since 2023, CEDEM has led efforts to shape Ukrainian AI legislation. In 2023 and 2024, they participated in plenary sessions of the Council of Europe’s Artificial Intelligence Committee, submitted amendments to the Framework Convention on AI (including public debate provisions), and supported civil society positions on AI definitions. At the initiative of Ukraine’s Ministry of Digital Transformation, CEDEM and the Expert Committee on AI introduced a Voluntary Code of Responsible AI Use, outlining key principles for the sector. CEDEM currently serves as the secretariat for Ukraine’s AI self-regulation process and is finalising a memorandum on AI self-regulation in Ukraine.