New program tackles deepfake verification with zero-knowledge proof
Scientists from the IDEAS NCBR research and development centre have created a prototype program that detects deepfakes, enabling verification of whether the person visible online is real.
16 July 2024 19:36
Deepfake technology makes it possible to create realistic multimedia, such as video and audio recordings. It is used in entertainment and education, but scammers exploit it just as effectively, which poses a serious challenge and obliges internet users to stay vigilant. There are guidelines for spotting deepfakes, but the next step in combating this type of fraud is dedicated software.
Deepfakes are a challenge
"Deepfakes are becoming more advanced, and detecting them is increasingly challenging, which presents us with new challenges in the field of digital security," said researchers from IDEAS NCBR, a research and development centre in artificial intelligence.
Two scientists at the institution, Shahriar Ebrahimi (a postdoctoral researcher) and Parisa Hassanizadeh (a PhD candidate), both originally from Iran, have created an experimental project called ProvenView. They work in the "System Security and Data Privacy" research group led by Prof. Stefan Dziembowski. The project uses a cryptographic technique called Zero-Knowledge Proof (ZKP), which allows the authenticity of information to be confirmed without revealing its details.
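To give a sense of the underlying idea, the sketch below is a minimal, classic Schnorr-style zero-knowledge proof of knowledge written in Python. It is only an illustration of the general technique, not ProvenView's actual construction, and the tiny parameters are chosen for readability rather than security: the prover demonstrates knowledge of a secret number x without ever revealing it.

```python
import hashlib
import secrets

# Toy parameters (NOT secure -- real systems use groups of ~256-bit order).
# p is a safe prime; g generates the subgroup of prime order q = (p - 1) // 2.
p = 23
q = 11
g = 2

def keygen():
    """Prover's secret x and public value y = g^x mod p."""
    x = secrets.randbelow(q)
    return x, pow(g, x, p)

def challenge(y, t):
    """Fiat-Shamir transform: derive the challenge by hashing the public transcript."""
    data = f"{g}|{p}|{y}|{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, y):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    r = secrets.randbelow(q)          # one-time random nonce
    t = pow(g, r, p)                  # commitment
    c = challenge(y, t)               # non-interactive challenge
    s = (r + c * x) % q               # response
    return t, s

def verify(y, proof):
    """Check the proof using only public values -- x never appears here."""
    t, s = proof
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
proof = prove(x, y)
print(verify(y, proof))   # True: the verifier is convinced yet learns nothing about x
```

ProvenView applies the same prove-then-verify pattern, scaled up to far more complex statements about video recordings.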
The creators of ProvenView emphasise that the tool can be used both in online interactions and for content protection. For example, after an online meeting, a user will be able to verify whether the video of their interlocutor really came from the interlocutor's webcam.
"This solution can be useful both for individual users, who could use it to authenticate online meetings, and content creators, who could strengthen the protection of their image published in a YouTube video from theft and misuse, e.g., in pornography," explained Shahriar Ebrahimi, quoted in the document.
How ProvenView works: Digital proof of authenticity
The scientists illustrated the operation of ProvenView with a simple example. At the beginning of the process, the original video author creates a digital proof of authenticity, confirming that the given multimedia material comes from them. The software that generates such a proof can be integrated directly with the video recording application or run as a separate program. Then, anyone who wants to verify the video's authenticity can check this digital proof.
"And this is where Zero-Knowledge Proofs help us: thanks to advanced mathematics, this proof can be verified without needing to view the video itself. Anyone can generate such proofs of authenticity on their own device," Shahriar Ebrahimi explained. He added that in the future, this solution could become an extension to a web browser or a video editing program.
ProvenView is an experimental program and is not yet ready for large-scale deployment.
Ebrahimi emphasised that, for now, proofs of authenticity can only be generated once the recording is finished, and a proof confirming the origin of the material becomes available roughly an hour after the conversation. The scientists from IDEAS NCBR hope that in the future it will be possible to prove the authenticity of videos in real time.