Growing threat: Deepfake fraud poses challenge for information security

Image source: © Pexels

20 May 2024 15:14

Experts from NASK are sounding the alarm over the increasing number of frauds that use deepfake technology. Videos that falsify celebrities' likenesses can be the starting point of serious scams.

Specialists from the Research and Academic Computer Network (NASK) warn about videos created with deepfake technology. This method of manipulation uses artificial intelligence to alter audiovisual material, including the voices and faces of well-known figures. It is used for various purposes, from entertaining and creative ones (e.g., in movies and art) to more dangerous ones, such as creating fake news, blackmail, or manipulating public opinion.

According to NASK experts, frauds based on videos exploiting public figures' images are appearing online in growing numbers. These could be, for example, videos featuring footballers or a government minister. Through the "lip sync" technique, an artificially generated voice is matched to the mouth movements and gestures of the figure on screen. As a result, the person appears to be saying words they never actually said. Worse still, when the technique is applied skillfully, it creates the illusion of natural speech.

How do deepfakes work?

The head of NASK's deepfake analysis team explains that today's technologies allow criminals to manipulate audiovisual material easily. The "text-to-speech" feature enables criminals to create a new audio track from just a few seconds of recorded voice. They can then synchronize this track with any video, such as a speech or political rally. "Speech-to-speech" technology, by contrast, reproduces intonation and emotion, which is more complex, so a longer fragment - at least a minute of the original material - is needed.

It is worth adding that "text-to-speech" (TTS) technology analyzes voice samples, learning their characteristics and intonation, and then generates speech from the entered text. According to studies conducted by Google, the TTS model named WaveNet can generate highly natural-sounding speech that is difficult to distinguish from a real voice.

Meanwhile, the advanced algorithms used in speech-to-speech (STS) conversion can capture subtle nuances of the voice, such as modulation, tempo, and emotion, which makes it difficult for traditional forgery-detection methods to distinguish generated speech from authentic speech. Even modern biometric systems may struggle to identify such forgeries, posing a severe challenge for security experts and the defence against misinformation.
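Why voice biometrics struggle here can be sketched in a few lines. Speaker-verification systems typically compare embedding vectors of voices and accept a match above some similarity threshold; a high-quality clone produces an embedding very close to the genuine one. The vectors, noise level, and threshold below are purely illustrative synthetic stand-ins, not any real biometric system's values:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two speaker-embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                      # embedding of the genuine voice
clone = enrolled + rng.normal(scale=0.1, size=256)   # STS clone: nearly identical
impostor = rng.normal(size=256)                      # unrelated voice

THRESHOLD = 0.7  # illustrative accept threshold
print(cosine_similarity(enrolled, clone) > THRESHOLD)     # clone passes verification
print(cosine_similarity(enrolled, impostor) > THRESHOLD)  # ordinary impostor is rejected
```

The point of the sketch: a naive impostor's embedding is far from the enrolled one, but a clone that faithfully reproduces modulation and tempo lands inside the acceptance region, so the system cannot tell it apart from the real speaker.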

A NASK expert appealed to social media users to exercise caution with unverified or suspicious video content, especially material that could influence the public perception of significant figures and institutions.

The use of deepfakes in political campaigns, for example, to discredit candidates, can even affect election results and thus destabilize democratic processes. Moreover, spreading false information through deepfakes can undermine the credibility of authentic content, leading to increased skepticism among audiences.

How can one verify whether material is a deepfake?

NASK emphasizes that such frauds are becoming increasingly difficult to detect. Artificial intelligence continues to develop, allowing for a more precise generation of fake voices. Nevertheless, experts note that detecting such materials is still possible, provided a thorough technical analysis of the video and its content is conducted.

Specialists point to several possible signs of fraud, including distortions around the mouth, problems reproducing teeth, unnatural head movements and facial expressions, errors in word inflection, and unusual intonation. They add that criminals increasingly add noise and spots to degrade the image's clarity, conceal artifacts generated by artificial intelligence, and confuse deepfake-detection algorithms.
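The evasion trick described above can be illustrated with a toy detector. Generated faces often come out overly smooth, so a naive check might flag frames with suspiciously little high-frequency detail; sprinkling noise back in restores that detail and defeats the check. Everything here is synthetic and the threshold is an assumption for the sketch, not a real detection algorithm:

```python
import numpy as np

def high_freq_energy(signal):
    # Mean squared difference between neighbouring samples:
    # a crude proxy for high-frequency detail in an image row.
    return float(np.mean(np.diff(signal) ** 2))

rng = np.random.default_rng(1)
natural = rng.normal(size=1000)                                # stand-in for a camera image row
generated = np.convolve(natural, np.ones(5) / 5, mode="same")  # overly smooth AI output

SMOOTHNESS_FLAG = 0.5  # illustrative threshold: too little detail -> suspicious
print(high_freq_energy(generated) < SMOOTHNESS_FLAG)   # naive detector flags the fake

# Criminals add noise to restore high-frequency energy and evade the check.
disguised = generated + rng.normal(scale=1.0, size=1000)
print(high_freq_energy(disguised) < SMOOTHNESS_FLAG)   # disguised fake slips through
```

This is why, as NASK notes, robust detection needs a thorough technical analysis rather than a single statistical cue that attackers can deliberately restore.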

NASK notes that such videos employ various social engineering techniques to manipulate the viewer, such as promises of quick profit, time-limited exclusive offers, pressure to act immediately, and appeals to the viewer's emotions.

How much has the number of deepfakes increased in recent years?

In recent years, the number of deepfakes has increased significantly, as confirmed by various sources. The Sumsub report from 2023 indicates a tenfold (10x) increase in the number of deepfakes worldwide between 2022 and 2023, with notable regional increases: 1740% in North America, 1530% in Asia and the Pacific, 780% in Europe, 450% in the Middle East and Africa, and 410% in Latin America.

Onfido, in turn, reports that in 2023 fraud attempts using deepfakes rose 31-fold, a 3000% increase over the previous year. Meanwhile, Sentinel reported that between 2019 and 2020, the number of deepfakes online rose from 14,678 to 145,227, an increase of about 900%.
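The percentage figures quoted from these reports can be cross-checked with simple arithmetic on the numbers given:

```python
# Sentinel: deepfakes online rose from 14,678 (2019) to 145,227 (2020).
sentinel_2019, sentinel_2020 = 14_678, 145_227
increase_pct = (sentinel_2020 - sentinel_2019) / sentinel_2019 * 100
print(round(increase_pct))  # 889, i.e. "about 900%"

# Onfido: a 31-fold rise means the increase over the baseline is (31 - 1) * 100%.
onfido_factor = 31
print((onfido_factor - 1) * 100)  # 3000
```

Both quoted percentages are consistent with the underlying counts and multipliers the reports give.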

These figures point to a rapid acceleration in the development and use of deepfake technology, which poses a growing challenge to information security and privacy. Many experts therefore emphasize the need to deploy more advanced deepfake-detection methods and to strengthen legal regulations to counteract their negative impact.
