Meta Pauses GenAI Training in Europe Over Privacy Concerns
Meta has delayed plans to train its large language models (LLMs) using public content shared on Facebook and Instagram following a request by the Irish Data Protection Commission (DPC).
The DPC's request stems from data privacy concerns over the use of information shared on these platforms, such as public posts and comments.
Meta expressed its disappointment at the request, describing it as “a step backwards” for innovation and competition in AI development in Europe.
The social media giant said it had incorporated regulatory feedback and kept European Data Protection Authorities (DPAs) informed of its plans since March 2024.
“We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts,” the firm commented.
Meta’s AI assistant, Meta AI, cannot currently be launched in Europe as a result of the pause.
The tech company will use the delay to work with the DPC to address the privacy concerns it has outlined. It will also look to address specific requests from the UK’s Information Commissioner’s Office (ICO) before starting the training.
In an earlier blog post on June 10, 2024, Meta defended its plans to train its LLMs on public content shared by adults on Facebook and Instagram, arguing that this information is needed for the models, and the AI features they power, to accurately reflect regional languages, cultures and trending topics on social media.
Decision to Delay Welcomed by Regulators
The Irish DPC said in a short statement that it welcomed Meta’s decision to pause its plans to train its LLMs with public content shared on its social media platforms.
The regulator added that the decision followed intensive engagement between the DPC and Meta, and this engagement will continue in co-operation with other EU data protection authorities.
The ICO said it was pleased that Meta had reflected the concerns it raised on behalf of users of Meta’s services in the UK.
Stephen Almond, executive director, regulatory risk at the ICO, commented: “In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.
“We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”