AI meets file data storage: How genAI may solve its own data growth crisis
Ben Franklin famously said that there are only two things certain in life — death and taxes — but were he a CIO, he likely would have added a third certainty: data growth.
File data is not immune. The general rule of thumb is that file data will double every two to three years, and that kind of exponential growth makes affordably storing, managing, and providing access to file data extremely challenging.
The problem grew even more acute for CIOs in November 2022, when OpenAI released ChatGPT. Suddenly, every board of directors charged their IT department with deploying generative AI (genAI) as quickly as possible. Unfortunately, genAI requires immense amounts of data for training, so making that ever-growing mass of file data accessible became an even more urgent priority.
Intelligent tiering
Tiering has long been a strategy CIOs have employed to gain some control over storage costs. Chris Selland, partner at TechCXO, succinctly explains how tiering works: “Implementing a tiered storage strategy, leveraging cloud object storage for less frequently accessed data while keeping hot data on high-performance systems, allows organizations to scale cost-effectively while maintaining quick access where it’s most needed.”
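To make the idea concrete, here is a minimal sketch, in Python, of the kind of policy a tiering tool applies, assuming last-access time is the signal separating hot from cold data. The tier names and the 30- and 180-day windows are hypothetical placeholders, not thresholds Selland prescribes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from pathlib import Path

# Illustrative thresholds; a real policy would be tuned from access analytics,
# not fixed constants.
HOT_WINDOW = timedelta(days=30)      # touched in the last month -> keep on fast storage
COOL_WINDOW = timedelta(days=180)    # untouched for ~6 months -> candidate for object storage

@dataclass
class TierDecision:
    path: Path
    tier: str  # "hot", "cool", or "archive"

def classify(path: Path, now: datetime | None = None) -> TierDecision:
    """Assign a storage tier based on the file's last access time."""
    now = now or datetime.now()
    last_access = datetime.fromtimestamp(path.stat().st_atime)
    age = now - last_access
    if age <= HOT_WINDOW:
        tier = "hot"
    elif age <= COOL_WINDOW:
        tier = "cool"
    else:
        tier = "archive"
    return TierDecision(path, tier)

def plan_tiering(root: Path) -> list[TierDecision]:
    """Walk a directory tree and produce a tiering plan for every file."""
    return [classify(p) for p in root.rglob("*") if p.is_file()]
```

In practice the decision signal is richer than file age, which is exactly where the AI-driven approaches discussed below come in.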
But, he says, there’s more to tiering than that for a modern enterprise. “Where possible, implement analytics platforms that can work directly with data in cloud data stores, eliminating the need to move large datasets, and implement data cataloging tools to help users quickly discover and access the data they need. In some cases, you may also need to implement edge computing and federated learning to help process data closer to the source, where it is not practical or possible to centralize the data.”
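Querying data where it lives, rather than copying it into an analytics cluster, is increasingly routine. As one illustration of the pattern Selland describes, the sketch below assumes DuckDB with its httpfs extension reading Parquet files directly from object storage; the bucket, prefix, and column names are hypothetical placeholders.

```python
import duckdb

# Query Parquet files in place in object storage instead of moving them.
# The bucket, prefix, and columns below are hypothetical examples.
con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")
con.execute("SET s3_region = 'us-east-1';")  # credentials come from the usual AWS environment variables

result = con.execute(
    """
    SELECT department, COUNT(*) AS files, SUM(size_bytes) AS total_bytes
    FROM read_parquet('s3://example-datalake/file-inventory/*.parquet')
    GROUP BY department
    ORDER BY total_bytes DESC
    """
).df()
print(result)
```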
Finally, Selland says, “invest in data governance and quality initiatives to ensure data is clean, well-organized, and properly tagged – which makes it much easier to find and utilize relevant data for analytics and AI applications.”
A tiered model provides the enterprise with advantages as IT moves to implement AI, said Tom Allen, founder of the AI Journal. “Hybrid cloud solutions allow less frequently accessed data to be stored cost-effectively while critical data remains on high-performance storage for immediate access. Using a retail or high-volume e-commerce company as an example, it can use aspects of, or adapt, this strategy to accelerate its data processing for AI models. This will likely yield improvements in real-time insights without compromising on storage costs.”
Enabling automation with AI
Of course, implementing data tiering is easier said than done. With so much data already on hand – and much, much more being created every minute – manually tagging data for tiering is not feasible. Automation is the key, said Peter Nichol, data & analytics leader for North America at Nestlé Health Science.
“Companies use machine learning and automation to dynamically move data between data tiers (hot, cool, archive) based on usage patterns and business priorities,” Nichol said. “This technique optimizes storage costs while keeping high-value, frequently accessed data accessible.”
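Nichol doesn’t name specific tooling, but the automation half of that approach often takes the form of a cloud lifecycle policy that transitions objects to cheaper storage classes as they age. The sketch below uses boto3 against S3 as one example; the bucket name, prefix, and day counts are hypothetical, and in an ML-driven setup the model’s usage predictions would feed in by adjusting these thresholds or by tagging and prefixing the data it expects to go cold.

```python
import boto3

# A minimal sketch of automated tier transitions using S3 lifecycle rules.
# Bucket, prefix, and thresholds are hypothetical placeholders.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-cold-files",
                "Filter": {"Prefix": "projects/"},
                "Status": "Enabled",
                "Transitions": [
                    # move to infrequent-access storage 90 days after an object is written
                    {"Days": 90, "StorageClass": "STANDARD_IA"},
                    # move to archive storage after a year
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```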
AI can also be applied to make it easier to access the data users are looking for, said Patrick Jean, chief product & technology officer at ABBYY. But it needs to be the right combination of different types of AI to ensure accuracy.
“Organizations’ data are growing exponentially, posing a challenge for decision makers who need quick access to the right insights for making smarter business decisions,” Jean explained. “They want to use AI to gain faster access to the documents that are fueling their business systems without risking hallucinations or sacrificing accuracy, which is of particular concern with generative AI-only solutions. In a recent survey, decision makers say they put more trust in AI that is purpose-built for their organization, documents, and industry. This approach, using the best combination of generative AI and symbolic AI, delivers significant ROI that gets goods to market faster and improves operational efficiencies in accounts payable or transportation and logistics.”
The future of data storage and generative AI
But as AI has become more advanced, so have the possibilities for employing it to manage and access rapidly growing file data volumes. “One approach companies are exploring,” Nichol said, “is AI-powered caching and pre-fetching. The technology works by caching frequently accessed data. AI models help predict which data will be needed next, and the AI engine pre-fetches that data. This reduces latency for workloads and analytics, improving the user’s perception of speed.”
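A toy version of that idea, assuming a simple first-order access-pattern model rather than a trained one, might look like the following sketch. The class and its methods are illustrative, not any vendor’s API.

```python
from collections import defaultdict, OrderedDict

class PrefetchingCache:
    """Toy cache that learns 'file B usually follows file A' and prefetches B.

    A production system would use a trained model over access logs; this
    sketch uses first-order transition counts to keep the idea visible.
    """

    def __init__(self, fetch_fn, capacity=128):
        self.fetch_fn = fetch_fn              # loads a file from slow storage
        self.capacity = capacity
        self.cache = OrderedDict()            # path -> bytes, in LRU order
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_path = None

    def _put(self, path, data):
        self.cache[path] = data
        self.cache.move_to_end(path)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used

    def read(self, path):
        # Record the access sequence so the predictor can learn from it.
        if self.last_path is not None:
            self.transitions[self.last_path][path] += 1
        self.last_path = path

        if path in self.cache:
            self.cache.move_to_end(path)
            data = self.cache[path]
        else:
            data = self.fetch_fn(path)
            self._put(path, data)

        # Prefetch the most likely next file, if this path has been seen before.
        followers = self.transitions.get(path)
        if followers:
            predicted = max(followers, key=followers.get)
            if predicted not in self.cache:
                self._put(predicted, self.fetch_fn(predicted))
        return data
```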
Gene de Libero, principal at the marketing technology consultancy Digital Mindshare LLC, said that his firm has had great success reducing data retrieval times with AI. “Since leveraging AI to optimize data storage (specifically data compression and de-duping),” de Libero said, “we’ve improved operational efficiency by 25%. Now, things run much smoother. We manage data growth with a unified, scalable storage platform across on-premises and cloud environments, balancing performance and cost.”
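De Libero doesn’t detail the tooling, but the core of whole-file de-duplication can be sketched in a few lines: hash file contents and group identical digests. The path below is hypothetical, and hashing whole files in memory is a simplification; real systems typically dedupe at the block level and compress whatever remains unique.

```python
import hashlib
from pathlib import Path

def find_duplicates(root: Path) -> dict[str, list[Path]]:
    """Group files by SHA-256 content hash; any group with more than one entry is a duplicate set."""
    by_hash: dict[str, list[Path]] = {}
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        by_hash.setdefault(digest, []).append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

duplicates = find_duplicates(Path("/data/projects"))  # hypothetical root directory
reclaimable = sum(p.stat().st_size for paths in duplicates.values() for p in paths[1:])
print(f"{len(duplicates)} duplicate groups, {reclaimable / 1e9:.1f} GB reclaimable")
```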
And looking ahead, there’s promise for integrating large language models, small language models and retrieval augmented generation (RAG) with different tiers of storage to further reduce file data costs, increase the accuracy of genAI and improve retrieval performance.
“Enterprises are deploying private gen AI capabilities by integrating large language models (LLMs) with their proprietary data, including unstructured data in file systems,” said Isaac Sacolick, president of StarCIO and author of Digital Trailblazer. “Instead of files that end-users access occasionally as needed, data in file systems that are integrated with retrieval augmented generation (RAG) and small language models are now key to the accuracy of genAI responses and critical decision-making. Chief data officers and infrastructure leaders should review the performance and utilization of data across their file systems and seek faster all-flash solutions for frequently used file data, while more economical infrastructure NAS solutions may be a lower-cost option for long-term and less frequently accessed data with long retention requirements.”
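Stripped of any particular vector database or model, the RAG loop Sacolick describes reduces to embed, retrieve, and ground. In the sketch below, embed_fn and generate_fn are hypothetical stand-ins for whatever embedding model and (large or small) language model an organization has deployed, and the chunks would be drawn from the tiered file data discussed above.

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    """Return the indices of the k document chunks most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]

def answer(question, chunks, embed_fn, generate_fn, k=3):
    """Minimal RAG loop: embed the corpus and query, retrieve, then ground the model's answer.

    embed_fn and generate_fn are placeholders for the organization's own models.
    """
    doc_vecs = np.vstack([embed_fn(c) for c in chunks])
    top = cosine_top_k(embed_fn(question), doc_vecs, k)
    context = "\n\n".join(chunks[i] for i in top)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate_fn(prompt)
```

How quickly those chunks and their embeddings can be served is where the storage-tier decisions above feed directly into genAI response quality and latency.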
So, as we move deeper into the 21st century, it appears that the solution CIOs are searching for — a way to efficiently store, manage, and provide rapid access to file data, in part to lay the foundation for genAI — will likely itself involve various types of AI, including genAI.
NetApp has long been a leader in providing intelligent data infrastructure solutions that combine unified data storage, integrated data services and CloudOps solutions. Learn more about how your organization can tackle the problem of exponential data growth for genAI.