AI And Ad Fraud: Growing Risks for Marketers Using Google’s AI-Based Advertising Campaigns
Recent revelations surrounding Google’s Performance Max (PMax) AI have ignited a flurry of concerns relating to data protection and security, and organisations must act now to prevent further damaging losses.
By Mathew Ratty, CEO, TrafficGuard
Artificial intelligence (AI) is transforming the marketing industry. While there are many benefits to using AI in digital campaigns, recent revelations surrounding Google’s Performance Max (PMax) AI have ignited a flurry of concerns relating to data protection and security.
When Google announced PMax, it appeared to be every marketer’s dream – driving marketing efficiency, performance, and better ROI across all of Google’s channels, including YouTube, search, shopping, and discovery. Since its launch, however, questions have been raised regarding its ability to adhere to stringent data privacy laws.
In a recent data privacy incident, YouTube may have inadvertently shown adverts to children. Not only does this raise concerns about violations of the Children’s Online Privacy Protection Act (COPPA), it also creates a ripple effect for advertisers seeking to optimise their returns.
There is still much to understand and learn about AI, and it is crucial that organisations are aware of the risks. With full transparency into the algorithms and tools to combat potential fraudsters, organisations can effectively protect themselves and avoid breaching data privacy.
Threat Actors Taking Advantage of AI Vulnerabilities
AI systems can be incredibly efficient at managing large amounts of information on behalf of data analysts. The problem is that AI systems like PMax have difficulty differentiating between positive user engagement and more malicious actions taken by fraudsters.
The challenge with PMax is that all user engagement is treated as positive or legitimate, and threat actors are exploiting this. Fraudsters can create fake intent signals, which trick the system into believing the signal comes from a user with a genuine interest in engaging with the site. To accomplish this, fraudsters deploy numerous bots to flood systems with fake engagement. The AI algorithm then optimises toward the source of the invalid traffic, producing wrongly optimised campaigns that divert and deplete advertising budgets by driving yet more fake engagement.
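The feedback loop described above can be sketched in a few lines. The snippet below is an illustrative simulation, not PMax’s actual logic: all function names, field names, and the per-user threshold are hypothetical. It shows how a naive optimiser that allocates budget by raw engagement volume gets skewed by a single bot, and how a crude invalid-traffic filter restores the picture.

```python
from collections import Counter

def allocate_budget(events, total_budget):
    """Naive optimiser: splits budget in proportion to raw engagement
    counts per traffic source -- the behaviour that bot floods exploit."""
    counts = Counter(e["source"] for e in events)
    total = sum(counts.values())
    return {src: total_budget * n / total for src, n in counts.items()}

def filter_invalid(events, max_per_user=20):
    """Crude invalid-traffic filter (hypothetical threshold): flag any
    source where a single 'user' generates an implausible number of
    engagements -- a common signature of bot-driven fake intent."""
    per_user = Counter((e["source"], e["user"]) for e in events)
    flagged = {src for (src, _), n in per_user.items() if n > max_per_user}
    return [e for e in events if e["source"] not in flagged], flagged

# 50 genuine engagements from distinct users vs. one bot firing 500
events = [{"source": "search", "user": f"u{i}"} for i in range(50)]
events += [{"source": "display", "user": "bot-1"}] * 500

naive = allocate_budget(events, 1000)      # budget skews toward the bot
clean, flagged = filter_invalid(events)
corrected = allocate_budget(clean, 1000)   # budget follows real users
```

Real invalid-traffic detection looks at far richer signals (timing patterns, device fingerprints, conversion behaviour), but the principle is the same: validate engagement before the optimiser learns from it.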
Fraudsters are also targeting potential weaknesses in the data privacy of AI platforms like PMax. Google has implemented multiple features to address data privacy concerns within PMax, such as anonymisation of user data, user controls/preferences to control their data, and ad preferences. PMax aims to uphold strong data privacy measures, but vulnerabilities in the system are still possible, as seen in the recent showing of ads to minors on YouTube.
The vulnerabilities in the system demonstrate the ever-evolving nature of data privacy and the challenge of keeping complex systems secure. Constant vigilance and adaptation are crucial to address potential gaps or flaws. Organisations can greatly benefit from using AI within marketing campaigns, but it’s important to balance its usage with appropriate risk mitigation. Advertisers should not only utilise AI, but also put countermeasures in place to protect their campaigns against evolving fraud tactics.
Preventing Fraudulent Activity
With the big budgets involved in marketing campaigns, fraudsters are always on the lookout for a slice of the profit. Organisations must protect themselves from bad actors getting in the way of achieving campaign success by ensuring they are optimising toward legitimate sources.
By implementing solutions to identify fraudulent bots alongside data collection filters, organisations can effectively prevent fraud while meeting data privacy laws and maintaining campaign control.
Organisations can take the following steps to prevent fraud across marketing campaigns:
- Analyse and Optimise Traffic: AI can be leveraged to combat fraudulent traffic. Through effective analytics and reporting tools, patterns, anomalies or irregularities in traffic can be identified to enable organisations to make better-informed decisions to optimise their traffic. As fraud tactics constantly evolve, AI solutions can be aligned to remain one step ahead. Its predictive abilities enable marketers to proactively identify and prevent fraud before it harms campaigns.
- Data Filtering: It is crucial that organisations stay within data privacy guidelines. Implementing a solution to filter data enables organisations to tailor their data collection strategy. It is possible to limit or stop data collection altogether post-click, which ensures the data aligns with protection regulations, especially for engagement from minors. Solutions can also minimise collected data so that only the essentials for fraud identification and campaign optimisation are gathered, reducing the risk of overstepping data privacy laws.
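The data-filtering step above can be illustrated with a minimal post-click minimisation sketch. The field names here are assumptions for illustration, not an actual PMax or ad-platform schema: the idea is simply an allow-list of fields needed for fraud detection and optimisation, with everything else (including anything that could identify a minor) discarded before storage.

```python
# Allow-list of fields retained post-click (illustrative, not a real schema)
ESSENTIAL_FIELDS = {"timestamp", "campaign_id", "source", "click_id"}

def minimise(event: dict) -> dict:
    """Strip a raw click event down to the essential fields so that
    personal data (age, precise location, device IDs, ...) is never
    retained beyond the point of collection."""
    return {k: v for k, v in event.items() if k in ESSENTIAL_FIELDS}

raw = {
    "timestamp": "2024-01-15T10:03:00Z",
    "campaign_id": "cmp-42",
    "source": "youtube",
    "click_id": "abc123",
    "user_age": 12,          # sensitive: must not be stored
    "gps_location": "…",     # sensitive: must not be stored
}
stored = minimise(raw)
```

An allow-list is the safer design choice here: a deny-list silently passes through any new sensitive field the upstream system starts sending, while an allow-list fails closed.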
The threats posed by fraudsters can be prevented, allowing organisations to make the most of AI systems like PMax. The right security tools give organisations the ability to scan their data in real time and identify malicious engagement from threat actors, which can then be countered – protecting budgets and data alike.
Preserving Campaign Integrity
AI programs are becoming more and more prevalent, and fraudsters are continuously looking for ways to build on their tactics and take advantage. Organisations have the opportunity to take a proactive stance against fraud, and pre-emptively tackle threat actors to preserve the integrity of their campaigns and comply with regulations.
A proactive approach involves leveraging AI’s predictive abilities to identify and prevent fraud before it can harm campaign budgets. By adopting this approach, organisations can fully appreciate the benefits of AI while mitigating the changing threat landscape.
About the Author
Mathew Ratty, a seasoned professional with 7 years in digital ad tech, currently leads as CEO of Adveritas. Formerly part of a mobile ad network, he’s also an avid tech investor with a decade of diverse investments. Under his leadership, Adveritas launched its flagship product, TrafficGuard, using innovative strategies and assembling a top-tier C-level team. Holding a First-Class Honours Finance degree from Curtin University, Australia, Ratty steers TrafficGuard’s mission. This pioneering ad fraud prevention solution employs AI and advanced machine learning, revolutionizing business operations. Trusted by major brands like Disney, Tab Corp, and HelloFresh, TrafficGuard, accessible on Google Cloud Marketplace, upholds transparency and security in digital advertising, setting industry benchmarks. Mat can be reached online at @Mathew Ratty on LinkedIn and at our company website https://www.trafficguard.ai/