#Infosec24: Deepfake Expert Warns of “AI Tax Havens”
Global AI and deepfake regulations could be seriously undermined if countries intentionally decide to allow irresponsible products to be built within their jurisdictions, a leading AI expert has warned.
Speaking at the opening keynote of Infosecurity Europe 2024 this morning, Henry Ajder argued that although regulation is “fundamentally changing the landscape” of AI development, there are potentially major hurdles ahead.
“There will be different landscapes,” he said. “Different countries will have different attitudes and my concern is we might see the equivalent of AI tax havens – countries that intentionally do not put in place legislation, [in order] to attract investment … but it leads to irresponsible products being built which go on to have a global impact.”
This could have major implications for democracy. Ajder, who described himself as a deepfake/generative AI “cartographer,” claimed that the world is facing a “perfect fake storm” – that is, fake audio of public figures “leaked” to journalists as if it were a legitimate hidden recording.
Read more on deepfakes: Martin Lewis Shocked at Deepfake Investment Scam Ad
Such a recording may already have swung the Slovakian election last year in favor of a populist challenger to the pro-EU Progressive Slovakia Party.
The tactic was on show again when faked audio of Keir Starmer purported to reveal the Labour Party leader hurling a foul-mouthed tirade at an aide.
Such fakes will put increasing pressure on journalists, who must decide what is in the public interest to publish and what may merely be mischief-making, or worse, Ajder argued.
The challenge is that as deepfakes become more commonplace, and harder to spot, they also give bad actors plausible deniability for real things they’ve done. This “liar’s dividend” will lead to a “poisoning of the well, a corrupting of the information ecosystem,” Ajder argued.
Unfortunately, “the FBI doesn’t really know what to do about this” and “detection tools are often not particularly robust, particularly around critical context,” he added.
False positives and negatives are still a problem, leading Ajder to argue that – while they have a role – detection tools are not the panacea many assume them to be.
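To make that tradeoff concrete, here is a minimal sketch in Python with invented detector scores and labels: sweeping the decision threshold shows how flagging more media as fake wrongly accuses genuine recordings, while flagging less lets fakes slip through.

```python
# Hypothetical illustration: a deepfake detector outputs a "fake" confidence
# score in [0, 1]. Sweeping the decision threshold trades false positives
# (real media flagged as fake) against false negatives (fakes that pass).
scores = [0.10, 0.35, 0.40, 0.62, 0.55, 0.71, 0.80, 0.92]  # invented scores
labels = [0,    0,    0,    0,    1,    1,    1,    1]      # 1 = actually fake

for threshold in (0.3, 0.5, 0.7):
    flagged = [s >= threshold for s in scores]
    false_pos = sum(f and l == 0 for f, l in zip(flagged, labels))
    false_neg = sum(not f and l == 1 for f, l in zip(flagged, labels))
    print(f"threshold={threshold}: {false_pos} false positives, "
          f"{false_neg} false negatives")
```

No single threshold drives both error counts to zero here, which is the practical sense in which detection alone cannot settle whether a recording is genuine.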
Watermarking technology is more robust but could still be undermined by “a high degree of compression on a piece of media,” Ajder said, adding that tools for removing and corrupting watermarks will inevitably appear.
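Ajder’s point about compression is easy to demonstrate. The toy sketch below (a naive least-significant-bit watermark, not any production scheme) embeds a bit pattern in an image, round-trips it through JPEG compression with Pillow, and counts how many watermark bits survive; recovery typically drops to around chance.

```python
# Toy demonstration (not a production watermark): embed a bit pattern in the
# least significant bit of each pixel, then see how lossy JPEG compression
# destroys it. Requires numpy and Pillow.
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

# Embed: overwrite each pixel's least significant bit with a watermark bit.
marked = (pixels & 0xFE) | watermark

# Round-trip through lossy JPEG compression.
buffer = io.BytesIO()
Image.fromarray(marked, mode="L").save(buffer, format="JPEG", quality=75)
compressed = np.asarray(Image.open(buffer))

# Extract and compare: after compression, recovery is near chance (~50%).
recovered = compressed & 1
survival = (recovered == watermark).mean()
print(f"watermark bits surviving JPEG: {survival:.0%}")
```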
A more sophisticated solution to the challenge of deepfakes is “content provenance” – cryptographically secured metadata which is attached to media the moment it’s captured on a device or generated using an algorithm.
An initiative worthy of note is the Adobe-led C2PA standard, which provides a “nutrition label” to enhance transparency.
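As a rough illustration of the provenance principle (not the actual C2PA manifest format, which is considerably richer), the sketch below signs capture metadata bound to a media hash with an Ed25519 key via Python’s cryptography library; any later change to the media or its metadata breaks verification.

```python
# Minimal sketch of the provenance idea: sign (media hash + capture metadata)
# at capture time, verify later. Not the C2PA format, just the principle.
# Requires the "cryptography" package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # in practice, held in secure hardware

media_bytes = b"...raw image data..."  # stand-in for captured media
manifest = json.dumps({
    "sha256": hashlib.sha256(media_bytes).hexdigest(),
    "device": "example-camera",          # hypothetical metadata fields
    "captured_at": "2024-06-04T09:00:00Z",
}, sort_keys=True).encode()

signature = device_key.sign(manifest)
public_key = device_key.public_key()

# Verification succeeds for the untouched manifest...
public_key.verify(signature, manifest)  # raises InvalidSignature on tampering
print("untouched manifest: verified")

# ...and fails if the manifest (or the media hash inside it) is altered.
tampered = manifest.replace(b"example-camera", b"another-camera")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("tampered manifest: rejected")
```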
“This is the dynamic we need to be looking for moving forward. A world where we look for these secure standards. It’s going to take time and scaling, but this is the way,” Ajder concluded. “But there is no silver bullet when it comes to these challenges.”