Assessing the business risk of AI bias
AI is only as good as the data it's trained on. Biased data selection and human preferences can propagate into a model and skew the results it produces.
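To make that mechanism concrete, here is a minimal sketch in Python. All data and variable names are invented for illustration: a logistic regression is trained on hypothetical loan decisions in which one group was historically approved less often despite identical incomes, and the fitted model reproduces that prejudice in its coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical loan data: 'group' is a protected attribute (0 or 1),
# and income is distributed identically in both groups.
group = rng.integers(0, 2, n)
income = rng.normal(50, 10, n)

# Historically biased decisions: group-1 applicants were approved less
# often than group-0 applicants with the same income.
approve_prob = 1 / (1 + np.exp(-(income - 50) / 5)) - 0.2 * group
labels = (rng.random(n) < np.clip(approve_prob, 0, 1)).astype(int)

# A model trained on these decisions learns the prejudice: the
# coefficient on 'group' comes out clearly negative, even though
# income is the only legitimate signal.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, labels)
print(dict(zip(["income", "group"], model.coef_[0].round(3))))
```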
In the US, authorities are now enforcing the law against discrimination caused by prejudicial AI, and the Consumer Financial Protection Bureau is currently investigating housing discrimination stemming from biased lending and home-valuation algorithms.
“There is no exception in our nation’s civil rights laws for new technologies and artificial intelligence that engage in unlawful discrimination,” the bureau’s director, Rohit Chopra, said recently on CNBC.
Many CIOs and other senior managers are aware of the problem, according to an international survey commissioned by software supplier Progress. In the survey, 56% of Swedish managers said they believe there is definitely or probably discriminatory data in their operations today, and 62% believe it is likely that such data will become a bigger problem for their business as AI and ML are used more widely.
Elisabeth Stjernstoft, CIO at Swedish energy giant Ellevio, agrees that there’s a risk in using biased data that isn’t representative of the customer group or population being analyzed.
“It can, of course, affect AI’s ability to make accurate predictions,” she says. “We have to look at the data the model is trained on, but also at how the algorithms are designed and which features are selected. The bottom line is that the risk is there, so we need to monitor the models and correct them if necessary.”
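Monitoring of that kind can start simply. The sketch below is hypothetical, not Ellevio's actual tooling: it computes a demographic parity gap, the difference in positive-prediction rates across groups, over a batch of logged predictions and flags the model when the gap crosses a chosen threshold.

```python
import numpy as np

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical batch of logged model predictions with group membership.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, groups)
if gap > 0.10:  # the threshold is a policy choice, not a universal constant
    print(f"Warning: positive-rate gap of {gap:.0%} between groups")
```

Run periodically against production logs, a check like this gives an early signal that a model has drifted toward the kind of skew Stjernstoft describes, so it can be corrected before it causes harm.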