When implementing AI, first train your managers
Dropping artificial intelligence into an organization requires more than a working knowledge of AI; that is only the first step. A recent survey shows most organizations and their IT departments, especially the managers and executives who control the resources to move things forward, simply aren't ready to handle AI. The skills, tools, and solutions they need aren't yet in place.
Even IT department leaders don't yet fully grasp the implications of AI, according to a survey of 1,600 IT decision-makers released by SAS. More than nine in 10 senior tech decision-makers (93%) admit they do not fully understand generative AI (GenAI) or its potential impact on business processes.
Also: What is a Chief AI Officer, and how do you become one?
Executives desperately need to be brought up to speed. Fewer than half (45%) of CIOs in the survey and just over a third (36%) of CTOs consider themselves "extremely familiar" with GenAI adoption in their organizations. Worse yet, only 13% of chief digital officers say they are intimately familiar with AI.
Further down the org chart the numbers fall again: only 4% of heads of IT or information systems claim extreme familiarity with AI, along with just 2% of IT managers or directors.
Training is also scarce: only 7% of organizations provide a high level of training on overall AI governance and monitoring, and another 15% provide such training for generative AI specifically. This is critical, as 75% of respondents are concerned about data privacy and security when GenAI is used in their organization.
This means it may take time, along with a lot of education and analysis, to overcome the issues that could derail AI implementations. For example, only 5% have a reliable system in place to measure bias and privacy risk in large language models. Another 42% are considering developing in-house capabilities for privacy risk detection, and 32% are considering doing the same for bias detection.
Only 29% have continuous automated monitoring of their generative AI implementations. Only 25% conduct regular manual audits of their AI output.
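The report does not prescribe tooling for this, but to make "continuous automated monitoring" and "regular manual audits" concrete, here is a minimal Python sketch of one possible approach (an illustration, not SAS's method): each generated response is screened for obvious personal data and an audit record is appended to a log file for later human review. The patterns, field names, and file path are illustrative assumptions.

```python
import json
import re
import time

# Illustrative patterns only; real monitoring would use vetted PII detection,
# bias metrics, and policy checks rather than a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
}

def screen_response(prompt: str, response: str) -> dict:
    """Build an audit record flagging obvious personal data in a model response."""
    findings = {name: len(p.findall(response)) for name, p in PII_PATTERNS.items()}
    return {
        "timestamp": time.time(),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "pii_findings": findings,
        "flagged": any(findings.values()),
    }

def log_for_audit(record: dict, path: str = "genai_audit.jsonl") -> None:
    """Append the record as one JSON line so auditors can review it later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage: screen a hard-coded response and log it for audit.
record = screen_response(
    "Summarize this support ticket.",
    "Contact jane.doe@example.com or 555-123-4567 about the outage.",
)
log_for_audit(record)
if record["flagged"]:
    print("Hold for review:", record["pii_findings"])
```

A real deployment would wrap every model call with a check like this, alongside bias measurements and periodic human sampling of the log.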
Also: The best free AI courses in 2024 (and whether AI certificates are worth it)
“The ideal GenAI investment offers clear opportunities for efficiency and a better customer experience, but many organizations report gaps in strategic thinking that are affecting successful rollout,” the report’s co-authors state. “Our research shows that businesses are rushing into GenAI before establishing adequate systems of governance, which could result in serious issues with quality and compliance later.”
Integration of AI into existing processes and systems is also a source of problems. “Many companies struggle to integrate the technology with their existing tasks and tools,” the survey’s authors state. Plus, almost half (47%) of decision-makers report that they do not have appropriate tools to implement GenAI.
Here are the leading issues being experienced among organizations using AI:
- 48% report issues using both public and proprietary datasets effectively.
- 45% report an absence of appropriate tools.
- 42% report challenges in moving GenAI from a conceptual phase to practical use.
- 39% say they are having compatibility issues with current systems.
In-house AI expertise is also in critical demand, the survey shows. Just over half of organizations (51%) are concerned that they do not have the in-house skills to use the technology effectively. Around four in 10 respondents (39%) say insufficient internal expertise has been an obstacle to implementing GenAI.
Also: Generative AI adoption will slow because of this one reason
The survey’s authors point out the following mandates associated with successful AI projects:
- AI integration: The need to “seamlessly integrate GenAI models into decisioning workflows, AI and machine learning applications, and existing business processes by using decisioning flow tools such as intelligent decisioning.”
- Data protection: "Ensure user privacy and security with robust data quality measures — including synthetic data generation, data minimization, anonymization, and encryption — that provide sensitive information safeguards." (A brief code sketch of this idea follows the list.)
- Trustworthy and explainable results: “Data experts can apply natural language processing techniques to preprocess data, explain the generated output in easily understandable terms, minimize hallucinations, and reduce token costs.”
- Enhanced governance: “Use built-in workflows that validate the entire life cycle of LLMs, from regulatory compliance to model risk management.”
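On the data protection mandate, the named techniques are straightforward to prototype even though production systems need far more rigor. As a rough sketch only, assuming a hypothetical pipeline where free text is sanitized before it ever reaches a GenAI service, the Python below truncates input to what the task needs (minimization) and masks common identifiers (anonymization); the helper name and masking rules are illustrative, not an SAS recommendation.

```python
import re

# Illustrative masking rules; a production system would rely on vetted
# anonymization tooling and a reviewed data-handling policy.
MASKS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){12}\d{3,4}\b"), "[CARD]"),
]

def minimize_and_anonymize(text: str, max_chars: int = 2000) -> str:
    """Send the model only what the task needs, with obvious identifiers masked."""
    text = text[:max_chars]                      # data minimization: truncate
    for pattern, placeholder in MASKS:
        text = pattern.sub(placeholder, text)    # anonymization: mask identifiers
    return text

# Hypothetical usage: sanitize the prompt before any call to a GenAI service.
raw = "Card 4111 1111 1111 1111, contact jane.doe@example.com about the refund."
print(minimize_and_anonymize(raw))
# Card [CARD], contact [EMAIL] about the refund.
```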
Predicting or calculating return on investment is another mandate that needs to be met. More than a third (36%) of IT decision-makers either foresee difficulty proving that GenAI offers a strong ROI or have already found it hard to prove, the survey shows. Almost half (47%) are encountering challenges in moving GenAI from concept to practical use, and roughly four in 10 organizations (39%) do not have a GenAI usage policy in place.