South Korean Police Develop Deepfake Detection Tool


South Korea’s national police are developing a new deepfake detection tool for use in criminal investigations.

The Korean National Police Agency (KNPA) told South Korean news agency Yonhap on March 5, 2024, that its National Office of Investigation (NOI) will deploy new software designed to detect whether video clips or image files have been manipulated using deepfake techniques.

Unlike most existing AI detection tools, which are typically trained on Western datasets, the model behind this new software was trained on 5.2 million pieces of data from 5,400 Koreans and related figures. It adopts “the newest AI model to respond to new types of hoax videos that were not pretrained,” the KNPA said.

The tool will be able to determine in about five to 10 minutes whether video content has been generated or manipulated using AI.

Police said the software can determine whether a video is authentic with around 80% accuracy.
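The article does not describe how the KNPA tool works internally, but deepfake detectors of this kind commonly score individual frames and aggregate those scores into a video-level verdict. The sketch below illustrates that general pattern only; the function names, scores, and the 0.8 threshold are hypothetical and are not drawn from the KNPA announcement.

```python
# Illustrative sketch of aggregating per-frame deepfake scores into a
# video-level verdict. Purely hypothetical; not the KNPA tool's method.

from statistics import mean
from typing import List


def frame_scores(video_path: str) -> List[float]:
    """Placeholder for a frame-level deepfake classifier.

    A real tool would decode the video, sample frames, and run a trained
    model on each one, returning a manipulation probability per frame.
    Canned values are returned here so the example runs on its own.
    """
    return [0.92, 0.88, 0.95, 0.81, 0.90]


def video_verdict(video_path: str, threshold: float = 0.8) -> str:
    """Aggregate per-frame probabilities into a single video-level call.

    The 0.8 threshold is an arbitrary illustration; it is not the same
    quantity as the ~80% accuracy figure cited by police, which would be
    measured separately on labeled test data.
    """
    scores = frame_scores(video_path)
    avg = mean(scores)
    if avg >= threshold:
        return f"likely manipulated (mean score {avg:.2f})"
    if avg <= 1 - threshold:
        return f"likely authentic (mean score {avg:.2f})"
    return f"inconclusive (mean score {avg:.2f}) - refer for expert review"


if __name__ == "__main__":
    print(video_verdict("suspect_clip.mp4"))
```

The "inconclusive" branch reflects the point made below: such outputs would guide an investigation and be cross-checked with experts rather than serve as direct evidence.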

The tool will be integrated immediately into the NOI’s investigative processes. However, the police said its findings would be used to guide investigations rather than serve as direct evidence.

They also plan to minimize the risk of erroneous detections by cross-checking results with AI experts in academia and industry, particularly for electoral disinformation offenses.

South Korea: A Deepfake Uptick in the Run-Up to the Elections

This announcement comes one month before South Korea’s legislative elections, scheduled for April 10.

The campaign in the run-up to this election has already been marred by a surge in AI-powered misinformation and disinformation.

In February, South Korea’s National Election Commission (NEC) reported that it had detected 129 instances of AI-generated media content (fake video and audio) violating a newly revised election law between January 29 and February 18.

The 70 NEC staff members who conducted the investigation found that most of the deepfakes were spread on social media platforms and manipulated videos of opposing candidates, distorting parts of their speeches or altering them entirely.


