ChatGPT disruption: AI’s evolving vision renews need for trusted, governed data
Access to artificial intelligence (AI), and the drive among organizations to adopt it, has never been greater, yet many companies are struggling to manage the data and the overall process involved. As companies open this “Pandora’s box” of new capabilities, they must be prepared to manage data inputs and outputs securely or risk allowing their private data to be consumed by public AI models.
Through this evolution, it is critical that companies recognize that ChatGPT is a public model built to grow and improve through continued use via advanced learning models. Private instances, in which the model answers prompted questions solely from selected internal data, will soon be available. As such, companies must determine where public use cases are appropriate (e.g., non-sensitive information) versus what mandates a private instance (e.g., company financial information and other data sets that are internal and/or confidential).
All in . . . but what about the data?
The popularity of recently released AI platforms such as OpenAI’s ChatGPT and Google Bard has led to a mad rush for AI use cases. Organizations envision a future in which AI platforms consume company-specific data in a closed environment, rather than the global ecosystem that is common today. AI relies on the large data sets fed into it to produce output, but it is limited by the quality of the data the model consumes. This was on display during the initial test release of Google Bard, which gave a factually inaccurate answer about the James Webb Space Telescope based on reference data it had ingested. Individuals often want to drive toward the end goal first (automating data practices) without going through the necessary steps to discover, ingest, transform, sanitize, label, annotate, and join key data sets. Without this important work, AI may produce inconsistent or inaccurate output, putting an organization in the risky position of acting on insights that have not been vetted.
Through data governance practices, such as accurately labeled metadata and trusted parameters for ownership, definitions, calculations, and use, organizations can organize and maintain their data in a way that is usable for AI initiatives. Recognizing this challenge, many organizations are now focusing on how to curate their most useful data so it can be readily retrieved, interpreted, and utilized to support business operations.
Storing and retrieving governed data
Influential technology, such as natural language processing (NLP), allows responses to be retrieved from questions asked conversationally or as a standard business request. This process parses a request into meaningful components and ensures that the right context is applied in the response. As the technology evolves, it will allow a company’s specific lexicon to be accounted for and processed through an AI platform. One application of this is defining company-specific meanings for particular phrases (e.g., how a ‘customer’ is defined within an organization vs. the broader definition of a ‘customer’) to ensure that organizationally agreed nomenclature and meaning are applied in AI responses. For instance, an individual might ask the platform to “create a report that highlights the latest revenue by division for the past two years” and expect it to apply all the necessary business metadata that an analyst and management would expect.
Historically, such a request required individuals to convert the ask into a query that could be run against a standard database. AI and NLP technology can now process both the request and the underlying results, enabling data to be interpreted and applied to business needs. The main challenge, however, is that many organizations do not hold their data in a form that can be stored, retrieved, and utilized by AI, generally because individuals have taken non-standard approaches to obtaining data and made assumptions about how data sets should be used.
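To make that conversion concrete, the sketch below maps a plain-English reporting request onto a SQL query using naive keyword matching. A real NLP pipeline would do far more (entity resolution, metadata lookups, context handling), and every table and column name here is a hypothetical assumption, not an actual schema.

```python
import re

# Hypothetical mapping from business vocabulary to governed warehouse columns
TERM_TO_COLUMN = {
    "revenue": "amount",
    "division": "division",
}
WORD_TO_NUM = {"one": "1", "two": "2", "three": "3"}

def request_to_sql(request: str) -> str:
    """Very naive sketch: map a plain-English reporting request onto SQL."""
    text = request.lower()
    metric = "revenue" if "revenue" in text else None
    group_by = "division" if "division" in text else None
    match = re.search(r"past (\w+) years?", text)
    years = WORD_TO_NUM.get(match.group(1), match.group(1)) if match else None
    clauses = [
        f"SELECT {TERM_TO_COLUMN[group_by]}, "
        f"SUM({TERM_TO_COLUMN[metric]}) AS total_{metric}",
        "FROM finance_revenue",  # hypothetical governed table
    ]
    if years:
        clauses.append(f"WHERE fiscal_year >= YEAR(CURRENT_DATE) - {years}")
    clauses.append(f"GROUP BY {TERM_TO_COLUMN[group_by]}")
    return "\n".join(clauses)

print(request_to_sql(
    "Create a report that highlights the latest revenue by division "
    "for the past two years"))
```

The keyword lookup stands in for the metadata dictionary discussed below: the platform can only translate 'revenue' into the right column because governance has already mapped that term to a trusted source.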
Setting and defining key terms
A critical step for quality outputs is having data organized in a way that can be properly interpreted by an AI model. The first step in this process is to ensure the right technical and business metadata is in place. The following aspects of data should be recorded and available:
- Term definition
- Calculation criteria (as applicable)
- Lineage of the underlying data sources (upstream/downstream)
- Quality parameters
- Uses/affinity mentions within the business
- Ownership
The above criteria should be used as a starting point for enhancing how fields and tables are captured to enable proper business use and application. Accurate metadata is critical to ensuring that private algorithms can be trained to emphasize the most important data sets with reliable and relevant information.
A metadata dictionary that has appropriate processes in place for updates to the data and verification practices will support the drive for consistent data usage and maintain a clean, usable data set for transformation initiatives.
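The fields listed above can be captured as a structured record in a metadata dictionary. The sketch below is a minimal illustration in Python; the field names mirror the list above, and the sample 'customer' entry is a hypothetical example of a company-specific definition, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MetadataEntry:
    """One business-term record in a metadata dictionary."""
    term: str
    definition: str                                       # term definition
    calculation: Optional[str] = None                     # calculation criteria, if derived
    lineage: list = field(default_factory=list)           # upstream/downstream sources
    quality_checks: list = field(default_factory=list)    # quality parameters
    business_uses: list = field(default_factory=list)     # uses/affinity mentions
    owner: str = "unassigned"                             # ownership

# Hypothetical entry for a company-specific definition of 'customer'
glossary = {
    "customer": MetadataEntry(
        term="customer",
        definition="An account with at least one paid transaction "
                   "in the last 24 months",
        lineage=["crm.accounts", "billing.transactions"],
        quality_checks=["account_id is unique", "billing country is not null"],
        business_uses=["revenue reporting", "churn analysis"],
        owner="Sales Operations",
    )
}

print(glossary["customer"].owner)  # → Sales Operations
```

Keeping these records under an update-and-verification process, as described above, is what lets both human analysts and AI platforms resolve a term like 'customer' to the same trusted definition.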
Understanding the use case and application
Once the right information about the foundation of a data set is recorded, it is critical to understand how the data is ultimately used and applied to a business need. Key considerations include documenting the sensitivity of the information recorded (data classification), organizing data sets under a logical data domain structure (data labeling), applying boundaries on how data is shared and stored (data retention), and defining protocols for destroying data that is no longer essential or whose removal has been requested and is legally required (data deletion).
Understanding the correct use and application of underlying data sets allows for proper decision-making about other ways data can be used, and about areas an organization may want to avoid based on strategic direction and legal and/or regulatory guidance. Furthermore, storing and maintaining business and technical metadata will allow AI platforms to customize the content and responses they generate, giving organizations both tailored question handling and relevant response parsing, and ultimately enabling company-specific language processing capabilities.
Prepare now for what’s coming next
It is now more critical than ever to place the right parameters around how and where data is stored, so that human users retrieve the right data sets while AI use cases can grow and be enabled going forward. AI model training relies on clean data, which can be enforced through governance of the underlying data set. This further escalates the demand for appropriate data governance to ensure that valuable data sets can be leveraged.
This shift has greatly accelerated the need for data governance, turning what some saw as a ‘nice to have’, or even an afterthought, into a ‘must have’ capability that allows organizations to remain competitive and be truly transformative in how they use data, their most valuable asset, both internally for operations and with their customers in an advanced data landscape. AI is putting the age-old adage of ‘garbage in, garbage out’ on steroids: any data defect flowing into the model can become a portion of the output, further highlighting the importance of tightening data governance controls.
Connect with the Author
Will Shuman
Director, Technology Consulting