Ethical AI – How is AI Redefining the Insurance Industry?
By Antoine de Langlois, Responsible AI Data Scientist at Zelros
When Cybersecurity meets Artificial Intelligence, companies and organizations have many new challenges and potential threats to consider. Here are some examples of these threats and of how companies can address them.
Adversarial attack
Take a Vision ML model tasked to detect a panda: an attacker can modify the input image to fool the algorithm into predicting a gibbon, even though the image remains clearly identifiable as a panda to the human eye. The same technique could fool an autonomous car into mis-identifying a stop sign as a speed-limit sign, with critical consequences. Adversarial attacks can also be developed against speech recognition: here again the change in the sound cannot be detected by human ears but will fool the speech recognition device.
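To make this concrete, here is a minimal sketch of one well-known way such a perturbation can be generated, the Fast Gradient Sign Method (FGSM). The model, image, and label variables are illustrative placeholders, not any specific system mentioned in this article.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method: craft a small perturbation that pushes
    the model's prediction away from the true label while remaining
    visually imperceptible to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, scaled by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```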
To counter adversarial attacks, companies need to proactively detect these breaches and retrain the algorithm so that it detects and flags anomalies; this combination helps minimize such attacks. To build a more robust model, companies can also “sample with noise” during training, which helps prevent future adversarial attacks.
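As a rough illustration of this kind of retraining, the sketch below mixes clean, noise-augmented, and adversarial samples into each training batch, reusing the hypothetical fgsm_attack helper above. Real retraining pipelines will differ; this only shows the general idea.

```python
def robust_training_step(model, optimizer, images, labels,
                         epsilon=0.01, noise_std=0.05):
    """One training step on clean, noisy, and adversarial versions of the
    batch, so the model learns to resist small perturbations."""
    adv_images = fgsm_attack(model, images, labels, epsilon)
    noisy_images = (images + noise_std * torch.randn_like(images)).clamp(0, 1)
    batch = torch.cat([images, noisy_images, adv_images])
    batch_labels = torch.cat([labels, labels, labels])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```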
Data Poisoning
Data poisoning happens when some of the samples used to train the algorithm are manipulated so that the model produces a malicious prediction when triggered by specific inputs, while remaining accurate for all other inputs.
This Data Poisoning manipulation is done before the model training step. Zelros’ Ethical Report standard collects a dataset signature at each successive step of modelization, precisely so it can later be checked and proven that the data has not been tampered with. Other companies can adopt this standard as a best practice when using AI responsibly.
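The article does not describe the internals of the Ethical Report standard, but the general idea of a dataset signature can be sketched as follows; the hashing scheme and names below are simple assumptions for illustration, not Zelros’ actual implementation.

```python
import hashlib
import pandas as pd

def dataset_signature(df: pd.DataFrame) -> str:
    """Fingerprint a training dataset so it can later be proven that the
    data was not modified between modelization steps."""
    # Canonical byte representation: sort columns, then serialize to CSV.
    canonical = df.sort_index(axis=1).to_csv(index=False).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Toy example: record the signature when the data is frozen...
training_data = pd.DataFrame({"age": [34, 51], "premium": [420.0, 610.0]})
signature_at_collection = dataset_signature(training_data)
# ...and verify it again just before training.
assert dataset_signature(training_data) == signature_at_collection, \
    "training data changed since collection – possible poisoning"
```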
Privacy
When an individual or a group has very specific features within the dataset used to train an algorithm, their identity may be compromised. To avoid revealing an individual’s identity as part of the training data, and thus putting their privacy at risk, organizations can use specific techniques such as federated learning: individual models are trained locally and then federated at a global level, so the personal data stays local. As general advice, detecting outlier samples and excluding them from training is also good practice.
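A minimal sketch of the federation step, assuming the common federated averaging approach over locally trained model weights, is shown below; the local training loops and any additional privacy safeguards are omitted, and the function name is illustrative.

```python
import copy
import torch

def federated_average(local_models):
    """Combine locally trained models into one global model by averaging
    their parameters, so raw personal data never leaves the local nodes."""
    global_model = copy.deepcopy(local_models[0])
    averaged_state = global_model.state_dict()
    for key in averaged_state:
        # Average each parameter tensor across the local models.
        averaged_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in local_models]
        ).mean(dim=0)
    global_model.load_state_dict(averaged_state)
    return global_model
```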
Bias Bounties
As with classical software, sharing details of an AI algorithm can become a liability if it is exploited with malicious intent, since it provides insight into the model’s structure. One countermeasure, cited by Forrester as a trend for 2022, is bias bounties, which help AI software companies strengthen the robustness of their algorithms.
“At least 5 large companies will introduce bias bounties in 2022.” – Forrester, North American Predictions 2022 Guide
Bias bounties are becoming a prime tool for ethical and responsible AI because they help ensure your algorithm is as unbiased and as accurate as possible, thanks to having more people review it.
Human Behavior
Before considering malicious attempts to access our data or manipulate the AI tools we use, companies ought to pause and consider the Personal Data we as people willingly (even if not knowingly) share. The main weakness in our cybersecurity is our proclivity to disseminate knowledge of our identity and activity. Artificial Intelligence, and even basic data-gathering tools, have given this behavior consequences that may prove critical.
Let’s take an old example for reference, with geolocation data openly shared on a social network:
https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html
Although it dates from 2018, it shows how individual scraps of data can be gathered to provide powerful insights into a person’s identity and behavior.
These insights can then be leveraged by AI tools to categorize ‘potential customer targets’ and act on that intel. A more recent reference is The Social Dilemma, a documentary about the “attention economy” built on this gathering of Personal Data. To limit the impact of our human behavior, nothing beats culture and scientific awareness: Data Science acculturation is key not only to better security for our private data, but also to more fairness in AI models, as discussed in the first topic of this article.
“AI tools may be too powerful for our own good”: when provided with data on customers, a Machine Learning model may learn much more than we would like it to. For example, even if gender is not explicit in the customer data, the algorithm can infer it through proxy features, where a human could not (at least not with that amount of data, in such a limited time).
For that aspect, analyzing and monitoring the ML model is crucial.
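One simple monitoring check, sketched below under the assumption of tabular customer data, is to train a probe classifier to predict the protected attribute (for example gender) from the features the model actually uses; a score well above chance signals proxy features that deserve review. The helper name and scoring choice are illustrative, not a prescribed method.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def proxy_leakage_score(features, protected_attribute):
    """Estimate how well the model's input features can reconstruct a
    protected attribute that was deliberately excluded from training.
    An ROC AUC near 0.5 means little leakage; near 1.0 means strong
    proxy features are present."""
    probe = GradientBoostingClassifier(random_state=0)
    scores = cross_val_score(probe, features, protected_attribute,
                             cv=5, scoring="roc_auc")
    return scores.mean()
```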
To better anticipate the model’s behavior and prevent discrimination through proxies, a key element is diversity: having multiple reviewers who contribute complementary input through their individual cultural and ethical backgrounds. Organizations can also request algorithmic audits by third parties, to take advantage of their expertise and workforce diversity if the internal team itself lacks diversity.
About the Author
Antoine de Langlois is Zelros’ data science leader for Responsible AI. Antoine has built a career in IT governance, data and security and now ethical AI. Prior to Zelros he held multiple technology roles at Total Energies and Canon Communications. Today he is a member of Impact AI and HUB France AI. Antoine graduated from CentraleSupelec University, France.