
Like a game of cat and mouse – We organized a workshop at CTU on the latest developments in deepfake detection


On May 27, we held a unique workshop on detecting deepfakes at the Czech Technical University in Prague on Karlovo náměstí. In three thematic blocks, experts presented the latest findings on AI tools for detecting fake content, outlined the current challenges for fact-checkers, and showed how Czechs and Slovaks fare when confronted with such content.


The field of information manipulation is a constant game of cat and mouse. Creators of manipulated content use AI to make visual manipulation ever more convincing, while detection experts and fact-checkers work to identify it. In the first part of the workshop, Jan Čech from the visual recognition group at FEE CTU demonstrated how quickly fake videos are improving in quality. Obvious visual artifacts are no longer always present, so ordinary users, and even experts relying on the naked eye, often cannot tell authentic footage from a deepfake. Experts are therefore rapidly developing AI-based tools to detect such content.

Jan Čech first presented the gradual development of generative models, from the open-source Stable Diffusion model to Google's latest audio-enabled video generator, Veo 3. Although these models are not primarily intended for creating misleading content, such use cannot be effectively prevented.
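How low the barrier to entry has become is easy to illustrate: once the model weights are downloaded, a few lines of Python are enough to generate images with an open-source model on an ordinary GPU. The sketch below uses the Hugging Face diffusers library with one publicly available Stable Diffusion checkpoint; it is a generic example for illustration, not a tool demonstrated at the workshop.

```python
# A minimal sketch of running an open-source diffusion model locally.
# The checkpoint name is one public example; any compatible model works.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # assumes an NVIDIA GPU is available

# Generate and save a single image from a text prompt.
image = pipe("a photo of a mountain village at sunset").images[0]
image.save("generated.png")
```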


Jan Čech sees a great danger in digitally altered faces of well-known politicians such as Volodymyr Zelensky, Donald Trump, and Andrej Babiš. Technologies such as Face Reenactment, Face Swap, and Lips Post-syncing are used for this purpose.

Visitors were also interested in ethical issues related to the development of similar tools in open-source mode, where anyone can download the tool without restriction for further editing and distribution. While this approach facilitates technological development, it also opens up opportunities for misuse to create false or misleading content.

So how can you spot visual deepfakes, according to Jan Čech?

  • Look for distracting elements: What does the face look like? What do the fingers look like? Don’t forget to focus on the teeth and lip movements.
    • These so-called artifacts appear most often in lower-quality AI content, which still makes up the majority of deepfakes.
  • Pay attention to the harmony of movement and sound: If a character is speaking in the video, check that their lips and facial muscles are moving in sync with the sound.
  • Use automatic tools: Such as GPTZero, Copyleaks, InVID WeVerify, or other academic detectors based on CLIP (Contrastive Language-Image Pretraining) technology. However, keep in mind that their reliability is limited; a rough sketch of how such a detector works follows this list.
  • Check the context: If Andrej Babiš is offering you lucrative investments in unknown companies, it is most likely not a real video.
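For readers curious what the CLIP-based detectors mentioned above do under the hood, the usual recipe is to compute a CLIP image embedding and score it with a small classifier trained on labeled real and AI-generated images. In the sketch below, the checkpoint name is one public example and the linear classifier is an untrained placeholder, so its scores are meaningless; it only illustrates the structure of such detectors.

```python
# Sketch of a CLIP-based detector: embed the image, then score the embedding.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_embedding(path: str) -> torch.Tensor:
    """Return the L2-normalized CLIP image embedding for one image file."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return features / features.norm(dim=-1, keepdim=True)

# Placeholder linear probe; a real detector would load weights trained
# on labeled real vs. AI-generated images.
probe = torch.nn.Linear(model.config.projection_dim, 1)

def fake_probability(path: str) -> float:
    """Score in [0, 1]; higher means the image looks more like AI output."""
    with torch.no_grad():
        return torch.sigmoid(probe(clip_embedding(path))).item()
```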

In the second part of the program, participants learned about the results and methodology of the CEDMO Trends longitudinal research, presented by CEDMO data analyst Lukáš Kutil of Charles University. The research monitors how the Czech and Slovak populations consume various types of media content, with a focus on information disorders such as disinformation and misinformation. So what do people notice most when judging whether a video is genuine?

Among the 16 technical and content parameters tested in the Perceived Deepfake Trustworthiness Questionnaire, the cues Czechs rely on most are inconsistency between sound and mouth movements, an unnatural voice, and lighting anomalies (such as flickering). These are the indicators that most reliably help people identify deepfake videos.


The biggest risk factors on the audience side are limited experience with the internet and young age (under 24). People with either of these characteristics are more likely than others to fail to recognize a deepfake video.

The third part of the program was led by Jan Fridrichovský, editor-in-chief of Demagog.cz, who introduced visitors to the current challenges for fact-checkers and what this work entails in today’s world.

Right from the start, Fridrichovský pointed out that even though AI-generated content is now commonplace on the internet, most manipulated content is still created by people themselves, for example in Photoshop. Many images that fact-checkers evaluate as false are not disinformation in the strict sense: often the authors are simply being satirical rather than deliberately trying to manipulate the public.

Jan Fridrichovský also presented the collaboration between Demagog.cz and Meta, the operator of Facebook. A lively debate was sparked by the fact that Meta does not usually delete posts marked as “false” by fact-checkers on Facebook; it only limits their distribution, and also limits identical copies of the same content. However, as Fridrichovský admits, this propagation is automated and not 100% consistent: some posts avoid being flagged even though they are identical to a post that fact-checkers have already reviewed.
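One plausible reason why identical-looking reposts can escape the automated propagation of fact-check labels is how “identical” is measured. A widespread approach is perceptual hashing, sketched below purely for illustration (Meta's actual matching pipeline is not public and may work differently): visually similar images get similar hashes, but small crops, recompression, or added captions can push the distance past the matching threshold.

```python
# Sketch of near-duplicate image matching with perceptual hashes.
# Generic illustration only; not a description of Meta's pipeline.
from PIL import Image
import imagehash  # pip install ImageHash

MAX_DISTANCE = 5  # Hamming distance below which two images count as the same

def matches_flagged_image(flagged_path: str, new_post_path: str) -> bool:
    """Compare a newly posted image against one already rated as false."""
    flagged = imagehash.phash(Image.open(flagged_path))
    candidate = imagehash.phash(Image.open(new_post_path))
    return (flagged - candidate) <= MAX_DISTANCE
```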


Visitors also learned about the challenges of verifying audio content, where fact-checkers cannot rely on visual artifacts such as blurred details or unrealistic anatomy. Jan Fridrichovský pointed out that audio is harder to verify than visual content: unlike with video deepfakes, AI manipulation in audio usually cannot be proven by comparing the recording with an original authentic one.

What are the current challenges for fact-checkers according to Jan Fridrichovský?

  • Specific monitoring of AI content is often impossible on social media.
  • Available detection tools do not label content as authentic or AI-generated, but only provide a probability. This is often insufficient for verifiers.
  • Context and clues are difficult to identify: For example, is a photo of a burning house really the aftermath of a terrorist attack? Fact-checkers must uncover the entire context.
  • Verification is time-consuming.
  • Retrospective search for origin: Who created the content? Was it spread by a single source or a coordinated group of accounts? Verifiers often search for the original accounts that spread the messages.
  • Politicians and public figures may label even genuine recordings as deepfakes, for example if they reveal facts they do not want to make public.