Using genAI in IT operations boosts productivity, but security concerns linger
Respondents told EMA that their IT organizations are applying genAI across a range of everyday tasks:
- Create documentation/procedures/knowledge base: 74%
- Operational data analysis: 69%
- Step-by-step guidance for tasks: 66%
- Research/learning: 64%
- Programming/scripting: 61%
- Configuration generation: 47%
Nearly three-quarters (73%) of respondents also said their organizations use generative AI supplied by their IT vendors to:
- Query IT systems: 67%
- Recommend actions: 66%
- Query product documentation/validated designs: 60%
- Automate actions: 55%
Vendor-provided genAI also raises concerns. When it doesn't perform as expected, IT organizations don't get the full value of the technology, and the price tag becomes harder to justify. Nearly one-fifth (18%) of those surveyed cited cost as a challenge associated with genAI and IT management.
“Cost is a concern for those who feel their IT vendors’ AI tools are not delivering as promised. This suggests that if IT organizations invest in vendor-provided generative AI capabilities but find them to be of poor quality or not meeting their expectations, then the cost of those tools becomes a bigger issue and concern for them,” McGillicuddy said.
Despite the reported benefits, genAI still concerns IT organizations to some degree. Another area in which IT organizations find genAI challenging is evaluating the quality of AI outputs: 63% of respondents said they are concerned about data quality and their ability to properly evaluate the accuracy and reliability of AI-generated content and insights. Integrating AI tools with existing processes is also a challenge for 30% of respondents.
Nearly 20% pointed to user acceptance as a challenge in implementing genAI. Specifically, getting IT personnel accustomed to relying on and trusting AI-generated content and recommendations can be difficult, according to EMA. When it comes to general-purpose tools, 63% of respondents are "at least somewhat concerned," and 54% said they feel the same about using IT vendor tools.

IT professionals also have a number of security and compliance concerns, and not without reason: AI tools could be exploited to extract sensitive data, so IT teams must remain vigilant despite genAI's reported benefits. According to EMA, the top security and compliance concerns are:
- Data leakage via prompts: 52%
- Malicious code/filters AI generates: 44%
- Overconfidence in AI-driven security: 43%
- Compliance violations: 40%
- Bad changes that create vulnerabilities: 32%
“Users are struggling with evaluating AI solutions, especially the quality of content that comes out of them. And they’re worried about security and compliance risk; a majority of respondents see some risk there,” McGillicuddy said.