Deepfakes: Microsoft and others in Big Tech are working to bring authenticity to videos, photos


If you want people to trust the photos and videos your business puts out, it might be time to start learning how to prove they haven’t been tampered with.


Great (or terrifying) moments in deepfake history: The argument about whether a video of President Joe Biden talking to reporters on the South Lawn of the White House was real (it was). The Dutch, British and Latvian MPs convinced their Zoom conference with the chief of staff of the Russian opposition leader Alexei Navalny was a deepfake. A special effects expert who made their friend look exactly like Tom Cruise for a TikTok video ironically designed to alert people to the dangers of fake footage. Product placement being digitally added to old videos and movies, and Anthony Bourdain’s recreated voice speaking in a documentary. A mother creating fake videos of the other members of her daughter’s cheerleading squad behaving badly in an attempt to get them kicked off the team. How do you know you can trust what you’re looking at anymore?

SEE: The CIO’s guide to quantum computing (free PDF) (TechRepublic)

Businesses are worried about the damage deepfakes—images, video or audio altered by artificial intelligence generative adversarial networks to look so convincingly like someone else that it’s hard to tell they’re not real—could do to their reputation, as well as how they could be used for fraud, hacking and phishing. But in a recent survey run by Attestiv, while 80% of respondents said they were worried about the risk, fewer than 30% had taken any steps to address it and another 46% had no plan at all. Those who do have a plan hope to rely on training employees to spot deepfakes (likely to be even harder than using training to address phishing) or on automated detection and filtering.

Microsoft has a quiz you can take to see if you can spot deepfakes yourself; that’s less a training tool and more an attempt to increase awareness and media literacy.

Tools like the Microsoft Video Authenticator look for artefacts left behind where an image has been altered, giveaways that you might not be able to see yourself, but they won’t spot everything. At the moment, the Video Authenticator is only available to news outlets and political campaigns through the AI Foundation’s Reality Defender 2020 initiative, likely because making it broadly available would let the creators of deepfake tools tune them to avoid detection.


This is how the Microsoft Video Authenticator shows users that a video is not authentic.

Image: Microsoft

“Can we build a detector that can distinguish real reality from this virtual synthesized reality? Deepfakes are imperfect now; you can find all sorts of artefacts,” Microsoft distinguished engineer Paul England said. But because deepfakes are created by multiple tools, the artefacts are different for each deepfake creation technique, and they change as the tools evolve. There’s a window of one or two years where deepfake checking tools will be helpful, but the window will close fast—and tools for detecting deepfakes could actually speed that up.

“You have an AI system that’s creating deepfakes, and you have an AI system that is detecting deepfakes. So, if you build the world’s best detector and you put it in this feedback loop, all you will have achieved is helping your deepfake creator generate better fakes.”

But rather than relying on humans or computers spotting fakes, Microsoft is involved in several initiatives to let creators prove their content hasn’t been manipulated, by vouching for where it comes from and being transparent about what’s been done to it. “We’re swamped in information, and some decreasing fraction of it is actually from where it says it is, and is of what it says it is,” he said. “We need to do something to shore up the more authoritative sources of information.”


Misinformation isn’t new, but it’s getting much easier to make. What used to need a Hollywood special effects studio and a huge budget can be done in Photoshop or TikTok.

“We’ve had a problem with images for decades now. It’s gotten to the point where it’s accessible to the average user, and the scalability of an attack is so much larger now with the social networks. The impact from these things is much greater, and people’s ability to determine what’s real and what’s fake is eroding rapidly,” warned Azure media security lead Andrew Jenks.

While showing the provenance of content won’t solve the problem of misinformation on the web, he hopes it can be “a small building block to help rebuild trust and credibility.”

Proving truth instead of detecting fakes

Microsoft, Adobe and a range of news organizations are collaborating on several related initiatives that aim to normalise checking where the images and video we see come from and whether they’ve been tampered with.

Project Origin is an alliance between Microsoft, the BBC, CBC/Radio-Canada and the New York Times that uses a Microsoft technology called Authentication of Media via Provenance (AMP) to publish tamper-evident metadata—the GPS location where a photo was taken or the original name of a video clip, say—wrapped in a digitally signed manifest that can be embedded in the file or registered in a Confidential Consortium Framework ledger. The media itself is authenticated by a cryptographic hash: a digital fingerprint that is different for each file, so editing the file changes the fingerprint and reveals that it’s been tampered with. That fingerprint is stored with the image, video or audio file (the scheme might cover mixed reality content in future).
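The core mechanics are simple enough to sketch: hash the file, wrap the claims in a manifest, sign the manifest. Below is a minimal illustration in Python using the cryptography package; the field names and key handling are placeholder assumptions, not AMP’s or C2PA’s actual formats.

```python
# Sketch: build and sign a provenance manifest for a media file.
# Manifest fields and signing scheme are illustrative only.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_of(path: str) -> str:
    """The file's digital fingerprint: changes if even one byte is edited."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def make_signed_manifest(path: str, creator: str, location: str):
    manifest = {
        "asset_hash": sha256_of(path),  # ties the manifest to this exact file
        "creator": creator,             # who is vouching for the content
        "location": location,           # e.g. GPS position recorded at capture
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    key = Ed25519PrivateKey.generate()  # in practice, a key certified to the publisher
    return payload, key.sign(payload), key.public_key()
```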

Microsoft is using this to put a digital watermark in audio created by an Azure Custom Neural Voice, so it can’t be passed off as something said by the human who made the recordings the neural voice is based on. It’s also creating an Azure service that content creators like the BBC can use to add hashes and certificates to files as metadata, plus a reader (which could be a browser extension or embedded in an app) that checks those certificates and hashes to confirm who the content is from and that it hasn’t been changed since it was created.
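A reader would run the checks in the other direction: verify the signature against the publisher’s public key, then confirm the file still matches the fingerprint in the manifest. Continuing the sketch above (certificate validation and key distribution are omitted):

```python
# Sketch of the reader side: recompute the file's hash and check the
# publisher's signature over the manifest.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_media(path: str, payload: bytes, signature: bytes,
                 publisher_key: Ed25519PublicKey) -> bool:
    # 1. Is the manifest really from the claimed publisher, unmodified?
    try:
        publisher_key.verify(signature, payload)
    except InvalidSignature:
        return False
    # 2. Does the media file still match the fingerprint in the manifest?
    manifest = json.loads(payload)
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == manifest["asset_hash"]
```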

SEE: Deepfake reality check: AI avatars set to transform business and education outreach (TechRepublic) 

The Content Authenticity Initiative is a broad group of organizations (including Microsoft) interested in content authenticity, led by Adobe. Adobe is creating a tool that lets Photoshop and Behance users save location data, details of the creator and even the history of every edit made to an image inside that image’s metadata, so people looking at the image later can see how it was edited.
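Conceptually, that edit history is a list of structured records carried in the image’s metadata and covered by the manifest’s signature. A rough sketch of what recording edits might look like; the action names and fields are hypothetical, not Adobe’s actual schema:

```python
# Sketch: an edit history as a list of assertions embedded in metadata.
import json
import time

def record_edit(history: list, action: str, params: dict) -> None:
    """Append one edit assertion; the signed manifest would cover the whole list."""
    history.append({
        "action": action,
        "params": params,
        "when": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    })

history = []
record_edit(history, "crop", {"box": [0, 120, 4000, 2800]})
record_edit(history, "adjust_contrast", {"amount": 12})
print(json.dumps(history, indent=2))  # what a viewer such as Verify could display
```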

The two projects cover slightly different parts of the problem. “Adobe has a focus on the creative and editorial process, the workflow through an editorial department or the creative workflow in an art department,” England said. “The Microsoft focus is on the broadcast distribution once you have a finished product and you put it out on the web. You could imagine pushing it all the way back to the source of the media—the cellphone or high-quality camera that’s used to capture it.”

To bring all these pieces together, the Coalition for Content Provenance and Authenticity (C2PA) is combining them into a standard for proving where content comes from, authored by Adobe, Arm, the BBC, Intel, Microsoft, Truepic and now Twitter.

C2PA will allow creators and publishers to vouch for their content and be transparent about any editing that’s been done to it, said Adobe’s Andy Parsons, director of the Content Authenticity Initiative.

“If the BBC wants to vouch for its content and you trust the worldview of the BBC and the BBC’s content sourcing methods and fact checking, we’re proving cryptographically that something that purports to be from the BBC—regardless of where you see it, whether it’s on your desktop [browser] or Twitter or Facebook—is actually from the BBC. C2PA makes that provable and tamper evident: if something has been messed around with in transit, that’s detectable,” Parsons said.

Parsons compared C2PA to HTTPS certificates, an industry standard that consumers now expect to see on reputable websites and that provides a level of transparency about who you’re talking to without guaranteeing how they will behave, and to information rights protection on business documents. Rights protection doesn’t stop you taking a photo of the document on screen to share with someone who isn’t authorised to see it, but it does stop you pretending you didn’t know you were circumventing the document controls when you did.

“It says nothing about the veracity of what is depicted in an image,” he explained. “It talks about when something was made and who made it: when it was produced, how it was edited, and ultimately how it was published and arrived to the consumer.”

Just like clicking on the lock icon in your browser, with C2PA you’ll see something like a green question mark on Twitter that you can click to see the details. You can see the kind of information that will be available by uploading an image to the Verify tool on the Content Authenticity Initiative site. If you have an image with C2PA metadata, Parsons said, “you’ll be able to see the before and after thumbnails, and you can do a side-by-side comparison to see what changed.”

That might even include the original image if the camera supported C2PA. Qualcomm and Truepic have been trying out an early draft of the standard in prototype devices that added the content authenticity data as the frames were captured. “We were able to make edits in Photoshop and then bring those images onto the Verify site, see what came off the camera, see what the edits looked like and who made the edits.”

SEE: Phishing, deepfakes, and ransomware: How coronavirus-related cyberthreats will persist in 2021 (TechRepublic) 

Not every edit made to an image is deceptive: think of fixing the contrast or cropping out a streetlight. “There are lots of legitimate transformations that are applied to media as it flows through a distribution channel for very good reasons,” England pointed out. “We’re trying to come up with wording that allows people to do the stuff they genuinely need to do to give you a good media viewing experience without leaving loopholes for letting people do stuff that would be OK according to the rules but end up as being misleading.”

Creators can also choose not to document all the changes made to an image or video if they have what they consider good reasons for redacting some information, like blurring the faces of people in the crowd behind someone being arrested, Parsons said. “We take some footage in the moment to prove a point, to tell a story, to reveal an atrocity, and if you have identifiable people in that photo it’s not editorialising to blur faces. We have to be sure that there’s no way through any provenance technology to track back to the thumbnail that has the unblurred faces, because that would unintentionally put people at risk in a way that the standard should not allow.”
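One way to square redaction with verifiability is to commit to each assertion by its hash, so a single assertion can be removed and declared redacted while everything else stays checkable. This is a simplified sketch of that idea, not the C2PA redaction format; in practice the trimmed manifest would be re-signed by the redacting party:

```python
# Sketch: per-assertion hashing allows redacting one assertion
# (e.g. an unblurred original frame) while the rest stays verifiable.
import hashlib
import json

def digest(assertion: dict) -> str:
    return hashlib.sha256(json.dumps(assertion, sort_keys=True).encode()).hexdigest()

assertions = {
    "creator": {"name": "Example News"},
    "original_frame": {"data": "<bytes of the unblurred frame>"},
}
# The claim commits to each assertion by hash, not by value.
claim = {label: digest(a) for label, a in assertions.items()}

# Redaction: remove both the sensitive assertion and its hash, and record
# that a redaction happened. Dropping the hash matters too: otherwise
# someone could test guesses about the removed content against it.
del assertions["original_frame"]
del claim["original_frame"]
redacted_manifest = {
    "claim": claim,  # must be re-signed by the redacting party
    "redacted": ["original_frame"],
    "assertions": assertions,
}
```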

C2PA lets creators include their own assertions about content, such as that they’re the copyright holder. Again, that’s about trusting the creator rather than turning Adobe into the guarantor of copyright, but having a cryptographically verifiable way to attach such a claim to an image will be an improvement on what Parsons called “the wild west of current metadata, which can be easily co-opted, removed, stripped, changed [or] edited.”

“We don’t want to be the arbiter of trust or the arbiter of trustworthiness; we don’t want to be the ones who dole out certificates for those who meet certain criteria,” he noted (although coalition partners may have to take that role initially to jump-start the system). “Journalists don’t have to sign up for the Adobe trustlet or a Microsoft equivalent or a centralised authority of trust.

“Provenance allows you to decide which entities you choose to trust, and basically to hold them to account.”

There’s no single ecosystem or political stance here, England emphasized. “We believe there’s as much right for the Daily Mail or OAN to have provenance information embedded so that their users that trust them can make sure that they are receiving unchanged media as any other site. We want this to be broadly available to everyone depending upon whether you’re BBC, or depending upon whether you’re Joe in Azerbaijan, who is a citizen journalist.”

Trust and verify

Over the next two to five years, the coalition hopes that photographers, videographers, photo editors, journalists, newsrooms, CMS builders, social media platforms, smartphone and camera manufacturers and software vendors will adopt C2PA as an opt-in way of including the provenance of pictures and videos. “It’s about empowering folks who want to offer transparency and the end game is that consumers will come to expect this to accompany all their content in certain scenarios,” Parsons said.

Whether it’s buying art or reading the news, he hopes we will come to expect to see provenance, and to make judgements about what we’re seeing based on how it’s been edited. “When it’s not there you’d look with some scepticism at the media that doesn’t carry it; and if it is there, and it indicates that AI tools were used and you happen to be in news consumption mode on a social media platform, you can also look at that content with increased scepticism about transparency.”

“We think of the watermarks as breadcrumbs that would allow the recipient of a modified video to go back and get a good idea of what the original video was,” England added. “You can imagine comparison services that would say this appears to be the same video. You’d be able to look at not just the information of the current story but the history of the story as it flowed through syndication. You can imagine a scenario where Al Jazeera syndicated a story to the BBC, and you can know that the BBC picked it up but also that they edited it from the original.”
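That breadcrumb trail can be modelled as manifests that each point back to the hash of their predecessor, letting a viewer walk the chain from the edited version back to the original. A structural sketch with placeholder hash values:

```python
# Sketch: chained manifests let a viewer trace a story through syndication.
# The structure is illustrative, not a published wire format.
import hashlib
import json

def manifest_hash(manifest: dict) -> str:
    return hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()

origin = {
    "publisher": "Al Jazeera",
    "asset_hash": "<hash of the original video>",  # placeholder value
    "parent": None,
}
syndicated = {
    "publisher": "BBC",
    "asset_hash": "<hash of the edited video>",  # editing changed the file, so the hash differs
    "parent": manifest_hash(origin),             # cryptographic pointer to the original manifest
    "edits": ["trim", "add captions"],
}
```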

SEE: AI-enabled future crimes ranked: Deepfakes, spearphishing, and more (TechRepublic)

Further down the line, if you’re recording a video to make an insurance claim or uploading photos of your house to rent it out on Airbnb, those services might ask you to turn on C2PA to prove you’re using your own photo or video and it hasn’t been edited.

Enterprises might want to mark their digital assets with C2PA so that any manipulation designed to damage their reputation can be detected. C2PA might also be useful in machine learning, for proving that training data or a trained model hasn’t been tampered with.

Adobe has shown a prototype, in a private beta of Photoshop, that lets designers include attribution; the feature is likely to ship by the end of this year. “This will enable folks to turn on the content authenticity feature in Photoshop to capture information, at their discretion, to opt in to export images after editing in Photoshop that will carry along other C2PA claims and assertions, to bundle them cryptographically, to sign them with an Adobe certificate and then prepare them for downstream consumption.”

But once the C2PA draft standard is published later this year under an open licensing model, browsers, social media platforms and software creators will start building it into products, and they won’t necessarily use Adobe or Microsoft services to do so.

Creating the hashes, adding the digital signatures and distributing them with content isn’t a technically hard problem, England said.

“How hard it is to create manifests and distribute those manifests as the media flows through the web depends on the participation of a fairly complicated ecosystem. The big players here are of course the social media platforms, so we were looking for ways that we could still make progress, even if we did not have the participation of these intermediaries between where the media is authored—say, published on the BBC site—and where it is consumed in people’s browsers and apps.”

Twitter joining C2PA is a good sign for adoption, and there has been a lot of interest in the standard. So, while AMP won’t do away with misinformation, and the ecosystem will take a while to grow, England notes that “one of the things I’ve learned from doing computer security in nearly 30 years is that the best is the enemy of the good.

“If we could get the large social media platforms and distribution networks on board with this technology, I think we will have made a huge change. There will still be people that abuse the system to mislead, but if it’s not the big players, maybe we’ll achieve some good for the world in combating misinformation.”
