Singapore releases guidelines for securing AI systems and prohibiting deepfakes in elections


Singapore made a slew of cybersecurity announcements this week, including guidelines on securing artificial intelligence (AI) systems, a safety label for medical devices, and new legislation that prohibits deepfakes in election advertising content.

Its new Guidelines and Companion Guide on Securing AI Systems aim to promote a secure-by-design approach, helping organizations mitigate potential risks in the development and deployment of AI systems.

“AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system,” said Singapore’s Cyber Security Agency (CSA). “The adoption of AI can also exacerbate existing cybersecurity risks to enterprise systems, [which] can lead to risks such as data breaches or result in harmful, or otherwise undesired model outcomes.”

“As such, AI should be secure by design and secure by default, as with all software systems,” the government agency said. 

It noted that the guidelines identify potential threats, such as supply chain attacks, and risks such as adversarial machine learning. Developed with reference to established international standards, they include principles to help practitioners implement security controls and best practices to protect AI systems.

The guidelines cover five stages of the AI lifecycle, including development, operations and maintenance, and end-of-life, the latter of which highlights how data and AI model artifacts should be disposed of. 

To develop the companion guide, CSA said it worked with AI and cybersecurity professionals to provide a “community-driven resource” that offers “practical” measures and controls. The guide will also be updated to keep pace with developments in the AI security market.

It comprises case studies, including patch attacks on image recognition surveillance systems. 

However, because the controls mainly address cybersecurity risks to AI systems, the guide does not address AI safety or other related components, such as transparency and fairness, though some recommended measures may overlap, CSA said. It added that the guide does not cover the misuse of AI in cyberattacks, such as AI-powered malware or deepfake scams.

Singapore, however, has passed new legislation outlawing the use of deepfakes and other digitally generated or manipulated online election advertising content. 

Such content depicts candidates saying or doing something they did not say or do but is “realistic enough” for members of the public to “reasonably believe” the manipulated content to be real. 

Deepfakes banned from election campaigns

The Elections (Integrity of Online Advertising) (Amendment) Bill was passed after a second reading in parliament. It addresses content generated using AI, including generative AI (Gen AI), as well as non-AI tools, such as splicing, said Minister for Digital Development and Information Josephine Teo.

“The Bill is scoped to address the most harmful types of content in the context of elections, which is content that misleads or deceives the public about a candidate, through a false representation of his speech or actions, that is realistic enough to be reasonably believed by some members of the public,” Teo said. “The condition of being realistic will be objectively assessed. There is no one-size-fits-all set of criteria, but some general points can be made.”

These encompass content that “closely match[es]” the candidates’ known features, expressions, and mannerisms, she explained. The content also may use actual persons, events, and places, so it appears more believable, she added. 

Most members of the public may find content showing the Prime Minister giving investment advice on social media inconceivable, but some may still fall prey to such AI-enabled scams, she noted. “In this regard, the law will apply so long as there are some members of the public who would reasonably believe the candidate did say or do what was depicted,” she said.

Four conditions must be met for content to be prohibited under the new legislation: it is an online election advertisement; it has been digitally generated or manipulated; it depicts a candidate saying or doing something they did not; and it is realistic enough for some members of the public to reasonably believe it is real.

The bill does not outlaw the “reasonable” use of AI or other technology in electoral campaigns, Teo said, such as memes, AI-generated or animated characters, and cartoons. It also will not apply to “benign cosmetic alterations,” such as the use of beauty filters or the adjustment of lighting in videos.

The minister also noted that the Bill will not cover private or domestic communications or content shared between individuals or within closed group chats.

“That said, we know that false content can circulate rapidly on open WhatsApp or Telegram channels,” she said. “If it is reported that prohibited content is being communicated in big group chats that involve many users who are strangers to one another, and are freely accessible by the public, such communications will be caught under the Bill and we will assess if action should be taken.”

The law also does not apply to news published by authorized news agencies, she added, or to the layperson who “carelessly” reshares messages and links not realizing the content has been manipulated. 

The Singapore government plans to use various detection tools to assess whether content has been generated or manipulated using digital means, Teo explained. These include commercial tools, in-house tools, and tools developed with researchers, such as the Centre for Advanced Technologies in Online Safety, she said.

In Singapore, corrective directions will be issued to relevant persons, including social media services, to remove or disable access to prohibited online election advertising content. 

Fines of up to SG$1 million may be imposed on a provider of a social media service that fails to comply with a corrective direction. Fines of up to SG$1,000 or imprisonment of up to a year, or both, may be meted out to all other parties, including individuals, who fail to comply with corrective directions.

“There has been a noticeable increase in deepfake incidents in countries where elections have taken place or are planned,” Teo said, citing research from Sumsub that estimated a three-fold increase in deepfake incidents in India and a more than 16-fold increase in South Korea, compared to a year ago.

“AI-generated misinformation can seriously threaten our democratic foundations and demands an equally serious response,” she said. The new Bill will ensure the “truthfulness of candidate representation” and integrity of Singapore’s elections can be upheld, she added.

Is this medical device adequately secured? 

Singapore is also looking to help users procure medical devices that are adequately secured. On Wednesday, CSA launched a cybersecurity labeling scheme for such devices, expanding a program that covers consumer Internet of Things (IoT) products. 

The new initiative was jointly developed with the Ministry of Health, the Health Sciences Authority, and national health-tech agency Synapxe.

The label is designed to indicate the level of security in medical devices and enable healthcare users to make informed buying decisions, CSA said. The program applies to devices that handle personally identifiable information and clinical data, with the ability to collect, store, process, and transmit the data. It also applies to medical equipment that connects to other systems and services and can communicate via wired or wireless communication protocols. 

Products will be assessed against four rating levels. Level 1 medical devices must meet baseline cybersecurity requirements, while Level 4 devices must meet enhanced cybersecurity requirements and also pass independent third-party software binary analysis and security evaluation.

The launch comes after a nine-month sandbox phase that ended in July 2024, during which 19 participating medical device manufacturers submitted 47 applications to put their products, including in vitro diagnostic analyzers, through a variety of tests, such as software binary analysis, penetration testing, and security evaluation.

Feedback gathered from the sandbox phase was used to fine-tune the scheme’s operational processes and requirements, including providing more clarity on the application processes and assessment methodology.

The labeling program is voluntary, but CSA has stressed the need for “proactive measures” to safeguard against growing cyber risks, especially as medical devices increasingly connect to hospital and home networks.

Medical devices in Singapore currently must be registered with HSA and are subject to regulatory requirements, including cybersecurity, before they can be imported and made available in the country. 

In a separate announcement, CSA said its cybersecurity labeling scheme for consumer devices is now recognized in South Korea.

The bilateral agreements were inked on the sidelines of this week’s Singapore International Cyber Week 2024 conference, with the Korea Internet & Security Agency (KISA) and the German Federal Office for Information Security (BSI). 

Scheduled to take effect from January 1 next year, the South Korean agreement will see KISA’s Certification of IoT Cybersecurity and Singapore’s Cybersecurity Label mutually recognized in either country. It marks the first time an Asia-Pacific market is part of such an agreement, which Singapore also has inked with Finland and Germany.

South Korea’s certification scheme encompasses three levels — Lite, Basic, and Standard — with third-party lab tests required across all. Devices issued with Basic Level will be deemed to have acquired Level 3 requirements of Singapore’s labeling scheme, which has four rating levels. KISA, too, will recognize Singapore’s Level 3 products as having fulfilled its Basic level certification. 

The labels will apply to consumer smart devices, including home automation, alarm systems, and IoT gateways.




