
On May 27, we held a unique workshop on detecting deepfakes at the Czech Technical University in Prague on Karlovo náměstí. In three thematic blocks, experts presented the latest findings on new AI tools for detecting fake content, outlined the current challenges facing fact-checkers, and showed how Czechs and Slovaks fare when confronted with such content.
The field of information manipulation is a constant game of cat and mouse. Creators of manipulated content use AI to make visual fakery ever more convincing, while detection experts and fact-checkers strive to identify it. In the first part of the workshop, Jan Čech from the visual recognition group at FEE CTU demonstrated how quickly fake videos are improving in quality. Obvious visual artifacts are no longer always present, so neither ordinary users nor experts relying on the naked eye can reliably distinguish authentic footage from a deepfake. Experts are therefore racing to develop AI-based tools to detect it.
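The specific detectors shown at the workshop were not released as code, but the general approach can be sketched: a standard image classifier is fine-tuned to separate real from synthetic face crops. Below is a minimal, illustrative sketch in PyTorch; the random placeholder tensors stand in for a labeled dataset, and none of this represents the actual tools developed at FEE CTU.

```python
# Minimal sketch of an AI-based deepfake image detector: fine-tune a
# pretrained CNN as a binary real-vs-fake classifier. Illustrative only;
# random tensors stand in for labeled face crops.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a new 2-class head (0 = authentic, 1 = deepfake).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Placeholder batch: 8 RGB "face crops" (224x224) with random labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

# One training step on the placeholder batch.
model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# At inference time, a softmax over the logits gives a "fake" probability.
model.eval()
with torch.no_grad():
    p_fake = torch.softmax(model(images), dim=1)[:, 1]
```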
Jan Čech first traced the evolution of generative models, from the open-source Stable Diffusion to Google's latest audio-enabled video generator, Veo 3. Although these models are not primarily intended for creating misleading content, such use cannot be effectively prevented.
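How low the barrier to entry has become can be illustrated with the open-source Stable Diffusion model, which the Hugging Face diffusers library exposes in a few lines. This is just a sketch; the checkpoint name and prompt are arbitrary examples:

```python
# Generating a photorealistic image with an open-source diffusion model.
# Sketch using Hugging Face diffusers; checkpoint and prompt are examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a single consumer GPU suffices

image = pipe("press photo of a politician at a podium").images[0]
image.save("generated.png")
```

Nothing in such a pipeline distinguishes a legitimate prompt from a misleading one, which is precisely why misuse is so hard to prevent.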
Jan Čech sees great danger in digitally altered faces of well-known politicians such as Volodymyr Zelensky, Donald Trump, and Andrej Babiš. Technologies such as face reenactment, face swapping, and lip post-syncing are used for this purpose.
Visitors were also interested in the ethical issues around releasing such tools as open source, where anyone can download, modify, and redistribute them without restriction. While this approach accelerates technological development, it also opens the door to misuse for creating false or misleading content.
In the second part of the program, CEDMO data analyst Lukáš Kutil from Charles University presented the results and methodology of the CEDMO Trends longitudinal survey. The research has been monitoring how the Czech and Slovak populations consume various types of media content, with a focus on information disorders such as disinformation and misinformation. So what do people notice most when judging whether a video is genuine?
Of the 16 technical and content parameters tested in the Perceived Deepfake Trustworthiness Questionnaire, the cues Czech respondents rely on most are inconsistency between sound and mouth movements, an unnatural voice, and lighting anomalies such as flickering. These are the indicators that allow people to identify deepfake videos most reliably.
Among the biggest risk factors on the audience side are limited experience with using the internet, but also young age (under 24). People with either of these characteristics are more likely than others to fail to recognize a deepfake video.
The third part of the program was led by Jan Fridrichovský, editor-in-chief of Demagog.cz, who introduced visitors to the challenges fact-checkers currently face and what the work entails today.
Right from the start, Fridrichovský pointed out that even though AI-generated content is now commonplace on the internet, most manipulated content is still created by people themselves, for example in Photoshop. Many images that fact-checkers rate as false are not necessarily disinformation: often their authors are simply being satirical rather than deliberately trying to manipulate the public.
Jan Fridrichovský also presented the collaboration between Demagog.cz and Meta, the operator of Facebook. Debate was sparked by the fact that Meta does not usually delete posts that fact-checkers mark as "false" on Facebook, but only limits their distribution, along with that of identical copies. As Fridrichovský admits, however, the entire process is automated and not 100% consistent: some posts avoid being flagged even though they are identical to a post that fact-checkers have already reviewed.
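Meta's matching system is proprietary, but the general technique behind finding "identical" copies, and why it misses some, can be sketched with perceptual hashing. The example below uses the Python imagehash library; the threshold and file names are illustrative assumptions:

```python
# Why near-duplicate matching is not 100% consistent: perceptual hashes
# of an image and a slightly edited copy differ by a few bits, so any
# fixed similarity threshold will miss some edited reposts.
# Illustrative sketch; Meta's actual system is proprietary.
from PIL import Image
import imagehash

THRESHOLD = 8  # example cutoff: max Hamming distance to count as "identical"

original = imagehash.phash(Image.open("debunked_post.png"))  # hypothetical file
repost = imagehash.phash(Image.open("cropped_repost.png"))   # hypothetical file

distance = original - repost  # Hamming distance between the 64-bit hashes
if distance <= THRESHOLD:
    print("match: the repost inherits the fact-check label")
else:
    print("no match: the repost escapes the flag")
```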
Visitors also learned about the challenge of verifying audio content, where fact-checkers cannot rely on visual artifacts such as blurred details or unrealistic anatomy. Jan Fridrichovský pointed out that audio is harder to verify than visual content because, unlike many video deepfakes, an AI-manipulated recording often cannot be proven fake by comparison with an original authentic recording, since no such original may exist.