- How to detect this infamous NSO spyware on your phone for just $1
- I let my 8-year-old test this Android phone for kids. Here's what you should know before buying
- 3 lucrative side hustles you can start right now with OpenAI's Sora video generator
- How to use Microsoft's Copilot AI on Linux
- Protect 3 Devices With This Maximum Security Software
Artificial intelligence, real anxiety: Why we can't stop worrying and love AI
Did an AI write this piece?
Questions like this were a friendly quip when generative artificial intelligence (gen AI) began its foray into mainstream discourse. Two years later, while people around the globe use AI for all kinds of activities, others are raising important questions about the emerging technology’s long-term impact.
Last month, fans of the popular South Korean band Seventeen took issue with a BBC article that wrongly implied the group had used AI in its songwriting. Woozi, a band member and the main creative brain behind most of the band’s music, told reporters he had experimented with AI to understand the development of the technology and identify its pros and cons.
Also: Lost in translation: AI chatbots still too English-language centric, Stanford study finds
BBC misconstrued the experimentation to suggest Seventeen had used AI in its latest album release. Unsurprisingly, the mistake caused a furor, with fans taking particular offense because Seventeen has been championed as a “self-producing” band since its musical debut. Its 13 members are involved in the group’s songwriting, music production, and dance choreography.
Their fans saw the AI tag as discrediting the group’s creative minds. “[Seventeen] write, produce, choreograph! They are talented… and definitely are not in need of AI or anything else,” one fan said on X, while another described the AI label as an insult to the group’s efforts and success.
The episode prompted Woozi to post on his Instagram Stories: “All of Seventeen’s music is written and composed by human creators.”
Women, peace, and security
Of course, AI as a perceived affront to human creativity isn’t the only concern about this technology’s ever-accelerating impact on our world — and arguably far from the biggest concern. Systemic issues surrounding AI could — potentially — threaten the safety and well-being of huge swaths of the world’s population.
Specifically, as the technology is adopted, AI can put women’s safety at risk, according to recent research from UN Women and the UN University Institute Macau (UNU Macau). The study noted that gender biases across popular AI systems pose significant obstacles to the positive use of AI to support peace and security in regions such as Southeast Asia.
The May 2024 study analyzed links between AI; digital security; and women, peace, and security issues across Southeast Asia. AI is expected to boost the region’s gross domestic product by $1 trillion in 2030.
Also: AI risks are everywhere – and now MIT is adding them all to one database
“While using AI for peace purposes can have multiple benefits, such as improving inclusivity and the effectiveness of conflict prevention and tracking evidence of human rights breaches, it is used unequally between genders, and pervasive gender biases render women less likely to benefit from the application of these technologies,” the report said.
Efforts should be made to mitigate the risks of using AI systems, particularly on social media, and in tools such as chatbots and mobile applications, according to the report. Efforts also should be made to drive the development of AI tools to support “gender-responsive peace.”
The research noted that tools enabling the public to create text, images, and videos have been made widely available without consideration of their implications for gender or national or international security.
Also: If these chatbots could talk: The most popular ways people are using AI tools
“Gen AI has benefited from the publishing of large language models such as ChatGPT, which allow users to request text that can be calibrated for tone, values, and format,” it said. “Gen AI poses the risk of accelerating disinformation by facilitating the rapid creation of authentic-seeming content at scale. It also makes it very easy to create convincing social media bots that intentionally share polarizing, hateful, and misogynistic content.”
The research cited a 2023 study in which researchers from the Association for Computational Linguistics found that when ChatGPT was provided with 100 false narratives, it made false claims 80% of the time.
The UN report highlighted how researchers worldwide have cautioned about the risks of deepfake pornography and extremist content for several years. However, recent developments in AI have escalated the severity of the problem.
“Image-generating AI systems have been shown to easily produce misogynistic content, including creating sexualized bodies for women based on profile pictures or images of people performing certain activities based on sexist and racist stereotypes,” the UN Women report noted.
“These technologies have enabled the easy and convincing creation of deepfake videos, where false videos can be created of anyone based only on photo references. This has caused significant concerns for women, who might be shown, for example, in fake sexualized videos against their consent, incurring lifelong reputational and safety-related repercussions.”
When real-world fears move online
A January 2024 study from information security specialist CyberArk also suggested concerns about the integrity of digital identities are on the rise. The survey of 2,000 workers in the UK revealed that 81% of employees are worried about their visual likeness being stolen or used to conduct cyberattacks, while 46% are concerned about their likeness being used in deepfakes.
Specifically, 81% of women are concerned about cybercriminals using AI to steal confidential data via digital scams, higher than 74% of men who share similar concerns. More women (46%) also worry about AI being used to create deepfakes, compared to 38% of men who feel this way.
CyberArk’s survey found that 50% of women are anxious about AI being used to impersonate them, higher than 40% of men who have similar concerns. What’s more, 59% of women are worried about AI being used to steal their personal information, compared to 50% of men who feel likewise.
Also: Millennial men are most likely to enroll in gen AI upskilling courses, report shows
I met with CyberArk COO Eduarda Camacho, and our discussion touched upon why women harbored more anxiety about AI. Shouldn’t women feel safer on digital platforms because they don’t have to expose their characteristics, such as gender?
Camacho suggested that women may be more aware of the risks online and these concerns could be a spillover from the vulnerabilities some women feel offline. She said women are typically more targeted and exposed to online abuse and misinformation on social media platforms.
The anxiety isn’t unfounded, either. Camacho said AI can significantly impact online identities. CyberArk specializes in identity management and is particularly concerned about this issue.
Specifically, deepfakes can be difficult to detect as technology advances. While 70% of organizations are confident their employees can identify deepfakes of their leadership team, Camacho said this figure is likely an overestimation, referring to evidence from CyberArk’s 2024 Threat Landscape Report.
Also: These experts believe AI can help us win the cybersecurity battle
A separate July 2024 study from digital identity management vendor Jumio found 46% of respondents believed they could identify a deepfake of a politician. Singaporeans are the most certain, at 60%, followed by people from Mexico at 51%, the US at 37%, and the UK at 33%.
Allowed to run rampant and unhinged on social media platforms, AI-generated fraudulent content can lead to social unrest and detrimentally impact societies, including vulnerable groups. This content can spread quickly when shared by personalities with a significant online presence.
Research last week revealed that Elon Musk’s claims about the US elections — claims that were flagged as false or misleading — had been viewed almost 1.2 billion times on his social media platform X, according to research from the Center for Countering Digital Hate (CCDH). From January 1 to July 31, CCDH analyzed Musk’s posts about the elections and identified 50 posts that fact-checkers had debunked.
Musks’s post on an AI-generated audio clip featuring US presidential nominee Kamala Harris clocked at least 133 million views. The post wasn’t tagged with a warning label, breaching the platform’s policy that says users should “not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm,” CCDH said.
“The lack of Community Notes on these posts shows [Musk’s] business is failing woefully to contain the kind of algorithmically-boosted incitement that we all know can lead to real-world violence, as we experienced on January 6, 2021,” said CCDH CEO Imran Ahmed. “It is time Section 230 of the [US] Communications Decency Act 1986 was amended to allow social media companies to be held liable in the same way as any newspaper, broadcaster or business across America.”
Also disconcerting is how the tech giants are jockeying for even greater power and influence.
“Watching what’s happening in Silicon Valley is insane,” American businessman and investor Mark Cuban said in an interview on The Daily Show. “[They’re] trying to put themselves in a position to have as much control as possible. It’s not a good thing.”
“They’ve lost the connection with the real world,” Cuban said.
Also: Elon Musk’s X now trains Grok on your data by default – here’s how to opt out
He also said the online reach of X gives Musk the ability to connect to political leaders globally, along with an algorithm that depends on what Musk likes.
When asked where he thought AI is heading, Cuban pointed to the technology’s rapid evolution and said it remains unclear how large language models will drive future developments. While he believes the impact will be generally positive, he said there are a lot of uncertainties.
Act before AI’s grip tightens beyond control
So, how should we proceed? First, we should move past the misconception that AI is the solution to life’s challenges. Businesses are just starting to move beyond that hyperbole and are working to determine the real value of AI.
Also, we should appreciate that, amid the desire for AI-powered hires and productivity gains, some level of human creativity is still valued above AI — as Seventeen and the band’s fans have made abundantly clear.
For some, however, AI is embraced as a way to cross language barriers. Irish boy band Westlife, for instance, released their first Mandarin title, which was performed by their AI-generated vocal representatives and dubbed AI Westlife. The song was created in partnership with Tencent Music Entertainment Group.
Also: Nvidia will train 100,000 California residents on AI in a first-of-its-kind partnership
Most importantly, as the UN report urges, systemic issues with AI must be addressed — and these concerns aren’t new. Organizations and individuals alike have repeatedly highlighted these challenges, along with multiple calls for the necessary guardrails to be put in place. Governments will need the proper regulations and enforcements to rein in the delinquents.
And they must do so quickly before AI’s grip tightens beyond control and all of society, not just women, are faced with lifelong safety repercussions.