1 in 5 top companies mention generative AI in their financial reports, but not in a good way



References to generative AI are popping up in more corporate financial statements, but not necessarily in a positive light. Among companies discussing the implications of generative AI, seven in ten cite potential risks, including threats to their competitive position and security, and the spread of misinformation.

That’s the conclusion of an analysis of the annual financial reports (10-Ks) of US-based Fortune 500 companies, based on data as of May 1, 2024. The research compared the content of the companies’ reports against their 2022 filings, searching for terms such as artificial intelligence (AI), machine learning, large language models, and generative AI.


More than one in five companies (22%) mentioned generative AI or large language models in their financial reports, the analysis by technology specialist Arize found. That represents a 250% increase in such mentions since 2022.

Public companies are required to discuss known or potential risks in their financial disclosures, which partly explains the high proportion of not-so-positive mentions of generative AI. Still, the growth also illustrates the concerns this emerging technology is raising.


Close to seven in 10 financial statements mentioned generative AI in the context of risk disclosures, whether that risk stems from the company's own use of the emerging technology or from an external competitive or security threat to the business. At least 281 companies (56%) cited AI as a potential risk factor, up 474% from 2022.

Only 31% of companies that mentioned generative AI in their reports cited its benefits. Many organizations are potentially missing an opportunity to pitch AI adoption to investors. “While many enterprises likely err on the side of disclosing even remote AI risks for regulatory reasons, in isolation such statements may not accurately reflect an enterprise’s overall vision,” the Arize authors pointed out.  

The risks cited weren’t limited to bias, security, or other familiar AI problems. Failure to keep pace with AI developments was also cited as a risk factor, as in S&P Global’s 10-K filing for the year ended December 31, 2023. “Generative artificial intelligence may be used in a way that significantly increases access to publicly available free or relatively inexpensive information,” the statement read. “Public sources of free or relatively inexpensive information can reduce demand for our products and services.”


Another risk, reputational damage, was cited in Motorola’s 10-K filing. “As we increasingly build AI, including generative AI, into our offerings, we may enable or offer solutions that draw controversy due to their actual or perceived impact on social and ethical issues resulting from the use of new and evolving AI in such offerings,” the company’s financial statement said.

“AI may not always operate as intended and datasets may be insufficient or contain illegal, biased, harmful or offensive information, which could negatively impact our results of operations, business reputation or customers’ acceptance of our AI offerings.”

Motorola indicated that it maintains AI governance programs and internal technology oversight committees, but “we may still suffer reputational or competitive damage as a result of any inconsistencies in the application of the technology or ethical concerns, both of which may generate negative publicity.”


However, there is some positive news from the research. Roughly one-third of organizations cast generative AI in a more positive light, as Quest Diagnostics did in its financial filing: “In 2023, we created an initiative to deploy generative AI to improve several areas of our business, including software engineering, customer service, claims analysis, scheduling optimization, specimen processing and marketing. We expect to further develop these projects in 2024.”

Quest also noted it seeks to align its AI practices with the NIST AI Risk Management Framework and to “strategically partner with external AI experts as needed to ensure we remain informed about the latest technological advancements in the industry.”

On an even more positive and forward-looking note, Quest stated that “we believe generative AI will help us innovate and grow in a responsible manner while also enhancing customer and employee experiences and bring cost efficiencies. We intend to continue to be at the forefront of the innovative, responsible and secure use of AI, including generative AI, in diagnostic information solutions.”




