Google Expands Content Watermarking Tool to AI-Generated Text
Google has unveiled a new method to label text as AI-generated without altering it.
This new feature, announced on May 14, has been integrated into Google DeepMind’s SynthID tool, which was already capable of identifying AI-generated images and audio clips.
The method embeds additional information into the text as the large language model (LLM) behind the tool generates it, in a way that is invisible to the user.
Traditionally, an LLM generates text by predicting the most probable next word, one at a time. Characters, words and groups of words are broken down into single units called ‘tokens.’ Each candidate token is assigned a probability score, and the token with the highest score is selected.
Using this additional information, Google’s SynthID for text adjusts each token’s probability score. The resulting pattern of scores, combining the LLM’s original scores with SynthID’s adjustments, constitutes the watermark.
“This pattern of scores is compared with the expected pattern of scores for watermarked and unwatermarked text, helping SynthID detect if an AI tool generated the text or if it might come from other sources,” Google explained in a blog post.
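The mechanism Google describes resembles “green list” watermarking schemes studied in the academic literature, where a pseudorandom subset of the vocabulary is favored at each generation step and detection checks how often generated tokens fall in that subset. The Python sketch below is illustrative only: it assumes a toy vocabulary, greedy decoding, and a hypothetical green-list scheme, and is not Google’s actual SynthID algorithm.

```python
import hashlib
import random


def greenlist(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a PRNG with the previous token so the "green" subset is
    # reproducible at detection time without storing any state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def watermarked_choice(prev_token: str, scores: dict[str, float],
                       vocab: list[str], bias: float = 2.0) -> str:
    # Nudge the scores of greenlisted tokens upward, then pick the
    # highest-scoring token (greedy decoding, for simplicity).
    green = greenlist(prev_token, vocab)
    adjusted = {t: s + (bias if t in green else 0.0) for t, s in scores.items()}
    return max(adjusted, key=adjusted.get)


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detection: watermarked text shows an unusually high fraction of
    # tokens drawn from each step's green list, while unwatermarked
    # text should hover near the green-list fraction (0.5 here).
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in greenlist(prev, vocab))
    return hits / max(len(tokens) - 1, 1)
```

Because the green list is derived only from the preceding token, a detector can recompute it for any text without access to the model, which mirrors the idea of comparing an observed score pattern against the expected pattern for watermarked text.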
The tech giant explained that although this technique isn’t designed to stop motivated adversaries like cyber attackers or hackers from causing harm, “it can make it harder to use AI-generated content for malicious purposes.”
SynthID for AI-generated text has been deployed on Google’s AI chatbot, Gemini.
Limitations of SynthID for Text
Google said this AI watermarking method is more flexible than classifier-based ones, which “often only perform well on particular tasks.”
However, the SynthID text watermarking feature also has its limitations.
For instance, it works better for longer generated texts – “like when [an LLM is] prompted to generate an essay, a theater script or variations on an email” – than for prompts that call for short factual responses, which leave less room for variation.
Additionally, the method performs well even when the text has been mildly transformed (e.g. cropped or partly modified), but less so when it has been significantly rewritten or translated into another language.
Finally, the tech giant recommends combining this method with other AI-generated text watermarking methods.
SynthID Expands to Watermark AI-Generated Videos
In the same blog post, Google explained that SynthID can now also watermark AI-generated videos, a feature announced at Google I/O on May 14.
Building on SynthID for AI-generated images, the technique embeds a watermark directly into the pixels of every video frame, making it imperceptible to the human eye, but detectable for identification.
Google has started using SynthID to label every video generated by its AI video generation tool, Veo, that is published on its AI video platform, VideoFX.
An LLM-Based Scam Call Alert
Google also announced during its I/O conference that it was testing a real-time scam alert tool for users placing a call.
Based on Gemini Nano, Google’s lightweight LLM for on-device tasks, this new feature provides a real-time alert during a call if it detects conversation patterns commonly associated with scams.
Google added that it will release more information about this new feature later this year.