Overcoming data compliance and security challenges in the age of AI

We are in the era of artificial intelligence (AI), and businesses are unlocking unprecedented opportunities for growth and efficiency. In IT service and operations (ServiceOps), AI agents are providing assistance for in-context insights, incident response, change risk prediction, and vulnerability management. AI technologies, like large language models (LLMs), require large and diverse datasets to train models, make predictions, and derive insights. However, the diversity and velocity of data utilized by AI pose significant challenges for data security and compliance.
Many AI models operate as “black boxes,” making it difficult for users to understand how their data is processed, stored, and kept compliant with policies. AI systems often combine multiple components and data sources, which also raises questions about data residency. Without proper data governance, transparency, and security, customer data, intellectual property, or other sensitive corporate information can be fed into LLMs, risking unintended data leakage.
Questions about AI models that CIOs and CISOs should be asking
CIOs and CISOs play pivotal roles in maximizing the benefits of generative AI and agentic AI while keeping applications, usage, and data secure. Staying abreast of the latest developments and approaches to data security and compliance is crucial for harnessing the benefits of AI and limiting risk. Selecting the right AI platform that includes AI agents requires thinking through various factors and the specific needs of your organization. The questions below cover seven of the most important aspects of this decision.
- How are access controls implemented? Look for solutions that honor role-based access controls and ensure sensitive information is only accessible to authorized users. Controls should include varying levels of permissions, strict adherence to least-privilege policies, and extensive safeguards against unauthorized access and data breaches.
- How is data encrypted? Look for solutions that encrypt data transmitted over the internet and use allowlists to restrict any unauthorized IP addresses or IP address ranges from accessing your AI applications.
- What are the data residency considerations? Ensure data remains stored within contracted regions in accordance with existing agreements and applicable commercial or federal regulations. This alignment with regional and sector-specific compliance requirements simplifies regulatory adherence for customers.
- What type of data is used to train AI models? Know what type of data is used to train AI models for specific use cases and ensure strict adherence to data privacy and compliance regulations.
- Do I retain ownership of my data? Confirm that you retain full ownership of your data, and know the LLM provider’s data logging, retention policies, and configuration options.
- Do the AI models expose my data to third-party AI vendors? Ensure that your chosen LLM provider meets your organization’s data compliance requirements.
- How are AI models audited? Contact your chosen LLM or AI infrastructure provider for a data compliance assessment.
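The first two questions above, access controls and network restrictions, can be made concrete with a short sketch. The role names, permission strings, and CIDR ranges below are illustrative assumptions, not part of any BMC Helix API; the point is simply that a least-privilege check should require both a trusted network and an explicitly granted permission:

```python
import ipaddress

# Hypothetical role-to-permission map; role and permission names are illustrative.
ROLE_PERMISSIONS = {
    "it_support": {"read:incidents", "read:knowledge"},
    "admin": {"read:incidents", "read:knowledge", "write:config"},
}

# Hypothetical allowlist of trusted networks (CIDR ranges are examples only).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.1.0/24"),
]

def is_allowed_ip(client_ip: str) -> bool:
    """Reject requests originating outside the allowlisted IP ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

def authorize(role: str, permission: str, client_ip: str) -> bool:
    """Least-privilege check: both the network and the role must permit the action."""
    if not is_allowed_ip(client_ip):
        return False
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Note the default-deny behavior: an unknown role gets an empty permission set, and an IP outside every allowlisted range is rejected before the role is even consulted.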
How BMC Helix satisfies top security concerns
BMC Helix customers retain full ownership of their data, ensuring that all incident tickets, knowledge articles, and files remain within their BMC Helix or third-party applications. This open-first approach enables organizations to use security and compliance mechanisms already in place, eliminating concerns about data copying, retention, or misuse by the LLM, which fosters trust and clarity in AI operations.
Data sources include tickets, incidents, observability data, knowledge articles, and configuration data across BMC Helix applications, with roles and permissions governing GenAI responses. For example, an IT support agent cannot access HR support tickets, and a support agent and an administrator receive different answers to the same question based on their access credentials.
Additionally, BMC Helix customers have the option to configure whether internal knowledge articles can be used for their GenAI responses. The content in the customer’s third-party applications is indexed using an admin profile, which is available to end-users interacting with HelixGPT, BMC’s proprietary GPT model.
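The permission-scoped behavior described above, where two users get different answers to the same question, amounts to filtering the retrieval corpus by the caller's entitlements before any content reaches the model. This is a minimal sketch of that pattern; the ticket fields, role names, and domain labels are hypothetical, not the BMC Helix schema:

```python
# Hypothetical ticket store; field names are illustrative only.
TICKETS = [
    {"id": 1, "domain": "IT", "text": "VPN outage affecting EU region"},
    {"id": 2, "domain": "HR", "text": "Payroll discrepancy for employee 4412"},
]

# Hypothetical mapping of roles to the data domains they may see.
USER_DOMAINS = {"it_agent": {"IT"}, "hr_agent": {"HR"}, "admin": {"IT", "HR"}}

def visible_tickets(user_role: str) -> list:
    """Restrict the retrieval corpus to the caller's domains, so the model
    can never ground an answer in data the user is not entitled to read."""
    allowed = USER_DOMAINS.get(user_role, set())
    return [t for t in TICKETS if t["domain"] in allowed]

def build_context(user_role: str, question: str) -> str:
    """Assemble the prompt context from permission-filtered documents only."""
    docs = visible_tickets(user_role)
    return question + "\n" + "\n".join(t["text"] for t in docs)
```

The key design choice is filtering before context assembly rather than after generation: content the user cannot see is never in the prompt, so it cannot leak into the answer.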
Other benefits and factors include:
- BMC Helix uses strong encryption for data in transit over the internet and for data at rest. Data in BMC Helix AI applications remain within the customer’s contracted regions. Organizations need to directly contact their chosen LLM provider for their data residency policy outside of BMC.
- BMC HelixGPT does not copy or store customer data in its AI models. Data used for training purposes adheres to strict data privacy and compliance regulations under BMC’s policies. Furthermore, the data is isolated and logically segregated from other customers’ access or use.
- For service management use cases, BMC HelixGPT uses a stateless AI model to process each ITSM, employee navigation, service collaboration, or other request independently. For IT operations management with AIOps use cases, BMC HelixGPT is trained on the customer’s incident data, resolution worklogs, and more to help the AI categorize incidents, identify root causes, summarize impacts, and assess risks intelligently.
- BMC HelixGPT exposes customer data to third-party AI vendors (the underlying LLM providers). Therefore, IT organizations are responsible for ensuring their chosen LLM or AI infrastructure providers meet their data processing and retention requirements, as well as the commercial and federal compliance requirements specific to their BMC HelixGPT use cases.
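The stateless processing mentioned in the list above can be illustrated with a tiny sketch. This is a generic pattern, not BMC HelixGPT's implementation: everything the handler needs arrives with the request, nothing is written to shared state, and the handler name and response format are invented for illustration:

```python
def handle_request(prompt: str, user_context: dict) -> str:
    """Stateless handler: the full context for answering is passed in with
    each call, and nothing is persisted between calls. In practice this
    would invoke the configured LLM; here it just formats a placeholder."""
    return f"[answer for {user_context['user_id']}] {prompt[:40]}"

# Because no state accrues, repeating a request with identical inputs
# yields an identical result, and requests can be processed in any order.
first = handle_request("Reset my VPN token", {"user_id": "u1"})
second = handle_request("Reset my VPN token", {"user_id": "u1"})
```

Statelessness is what makes each request independently auditable: no earlier conversation can silently influence a later answer.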
The bottom line
As AI continues to transform IT work, building trust and ensuring compliance is crucial. By responsibly managing data and prioritizing transparency and security, organizations can maximize the benefits of AI while reining in risk. By addressing the security and compliance challenges outlined above, organizations can create a future where AI enhances work and multiplies human productivity.
Contact BMC if you would like to discuss this further.