What is your data strategy for an AI future?
As enterprises become more data-driven, the old computing adage "garbage in, garbage out" (GIGO) has never been truer. The application of AI to many business processes will only accelerate the need to ensure the veracity and timeliness of the data used, whether generated internally or sourced externally.
The costs of bad data
Gartner has estimated that organizations lose an average of $12.9 million a year from using poor-quality data, and IBM calculates that bad data costs the US economy more than $3 trillion a year. Most of these costs relate to the work carried out within enterprises checking and correcting data as it moves through and across departments. IBM believes that half of knowledge workers’ time is wasted on these activities.
Apart from these internal costs, there’s the greater problem of reputational damage among customers, regulators, and suppliers when organizations act improperly on the basis of bad or misleading data. Sports Illustrated and its CEO found this out recently when it was revealed the magazine had published articles attributed to fake authors with AI-generated profile images. The CEO lost his job, and the parent company, Arena Group, lost 20% of its market value. There have also been several high-profile cases of law firms getting into hot water by submitting fake, AI-generated cases as evidence of precedent in legal disputes.
The AI black box
Although costly, checking and correcting the data used in corporate decision making and business operations has become an established practice for most enterprises. Understanding how some large language models (LLMs) have been trained, on what data, and whether their outputs can be trusted is another matter, especially given the rate at which they hallucinate. In Australia, for instance, an elected regional mayor has threatened to sue OpenAI over a false claim by the company’s ChatGPT that he had served prison time for bribery when, in fact, he had been a whistleblower on the criminal activity.
Training an LLM on trusted data and adopting approaches such as iterative querying, retrieval-augmented generation (RAG), or reasoning can significantly lessen the danger of hallucinations, but none of them can guarantee they won’t occur. The sketch below illustrates the retrieval-augmented approach.
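To make the idea concrete, here is a minimal sketch of the RAG pattern: rank a set of trusted documents against the user’s question, then ground the prompt in the best matches before calling the model. The `embed` and `generate` callables are placeholders for whatever embedding model and LLM endpoint an organization uses; this is an illustrative sketch, not a reference implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# `embed` and `generate` are hypothetical callables standing in for an
# embedding model and an LLM endpoint.
from typing import Callable, List
import math

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def answer_with_rag(
    question: str,
    documents: List[str],
    embed: Callable[[str], List[float]],   # hypothetical embedding model
    generate: Callable[[str], str],        # hypothetical LLM call
    top_k: int = 3,
) -> str:
    # 1. Rank trusted documents by similarity to the question.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])

    # 2. Ground the prompt in the retrieved context and instruct the
    #    model to admit when the context does not contain the answer.
    prompt = (
        "Answer using only the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```

Grounding the prompt in retrieved, trusted sources narrows the model’s room to improvise, which is why RAG reduces, though it cannot eliminate, hallucinations.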
Training on synthetic data
As companies seek a competitive advantage by deploying AI systems, the rewards may go to those with access to sufficient, relevant proprietary data to train their models. But what about the majority of enterprises without access to such data? Researchers have predicted that the high-quality text data used for training LLMs will run out before 2026 if current trends continue.