DevOps for AI Apps (Part 1) – Cloud Blog – VMware
Looking through various recent studies on the success of AI apps in the enterprise, it seems many businesses have developed AI apps, but those apps only rarely make it into production. Challenges come from everywhere: infrastructure and capacity, skills scarcity, and a lack of automation or virtualization, with data scientists having to request physical GPUs manually!
Over the last decade in the cloud and automation space, we've already been through a lot of this pain. Now is the opportunity to apply the lessons we've learned to make AI apps more accessible to enterprises.
Specifically, DevOps for AI Apps could be a great start.
Software is the key to innovation and technological evolution in today's world. It's no surprise, then, that the software being written today often leverages artificial intelligence to take things to the next level. For the purpose of this article, we'll consider an 'AI app' to be any application or IT service that leverages machine learning to improve its accuracy and efficiency.
If any of these terms are new to you, check out AI/ML Demystified to get up to speed.
AI apps are becoming commonplace, whether it's Siri on your iPhone, Netflix deciding what you should watch, fraud detection from your bank, or supply chain forecasting from a retailer like Amazon, which uses that intelligence to better predict purchasing habits.
These are the next generation of applications and they are already all around us.
Leveraging something like machine learning to level up your next enterprise app project could be a good idea for many reasons, but building such an app and getting it to market might not be as straightforward as it would be for a traditional one.
When building “traditional” enterprise apps, there are many things to consider, such as the code for the app itself, the databases it connects to, the UI/UX, APIs, integrations with other systems, security and compliance, users, etc.
When building an app that also leverages something like machine learning for greater accuracy and efficiency, we have even more to consider. Let's break it down into three pieces: the data, the model and the app itself.
First, the data. Machine learning is helpful because it makes sense of vast amounts of data and applies intelligence to it. The data is what's used to train the machine learning algorithms. Typically we will want to leverage data from company data lakes or data hubs, or potentially from various other distributed sources.
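To make that concrete, here's a minimal sketch in Python of one routine step in preparing training data: setting aside a holdout portion before any model work begins, so the model can later be evaluated on examples it has never seen. The records here are entirely made up, standing in for a real extract from a data lake.

```python
import random

random.seed(42)

# Made-up records standing in for an extract from a company data lake;
# a real extract would contain actual business data.
records = [{"amount": random.uniform(1, 10_000),
            "is_fraud": random.random() < 0.1}
           for _ in range(500)]

# Shuffle, then hold back 20% of the data for evaluating the trained
# model later; only the remaining 80% is used for training.
random.shuffle(records)
cut = int(len(records) * 0.8)
train_set, holdout = records[:cut], records[cut:]

print(len(train_set), len(holdout))  # 400 100
```

In an enterprise setting, this split (and the extract feeding it) is exactly the kind of step you'd want automated and repeatable rather than done by hand.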
Then we need to think about the model itself, the intelligent piece. The model is where machine learning is applied to the data in order to give us something useful for our app. Here, data scientists use various tools to run experiments with different algorithms and approaches, figuring out how the model should interact with the data for the best results. This piece is described in more detail in AI/ML for Enterprise (Training & Inference).
Then we have the code, the application itself! In a traditional application, the code is key; in an AI app, it's just as important. The code determines how the end user interfaces with the artificial intelligence: it surfaces the results of the models applied to the data.
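As a sketch of how that application layer surfaces model output, the snippet below wraps a hypothetical fraud model (the `fraud_model` stub and its scoring rule are invented for illustration) and turns its raw score into a decision the end user actually sees:

```python
# The application layer: take a user request, run the trained model,
# and translate the raw prediction into something meaningful.

def fraud_model(transaction):
    """Stand-in for a trained model; a real app would load an
    artifact produced by the data science team."""
    return min(transaction["amount"] / 10_000, 1.0)

def handle_request(transaction):
    score = fraud_model(transaction)
    verdict = "flag for review" if score > 0.5 else "approve"
    return {"risk_score": round(score, 2), "action": verdict}

print(handle_request({"amount": 8200}))
# {'risk_score': 0.82, 'action': 'flag for review'}
```

Notice the app code doesn't care how the model was trained; it only consumes the model's output, which is what lets the data science work and the application work evolve on their own cadences.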
Over the last decade, we've seen modern startups and traditional enterprises leverage DevOps methodologies to dramatic effect. The industry sees DevOps as a major enabler of agility across software development and IT operations. Let's be clear though: developing a single AI/ML-based application for a single purpose doesn't necessarily require you to adopt a DevOps way of working. You could do this on your laptop connected to an existing cloud service.
However, from the perspective of an enterprise, with a global footprint, legacy systems running alongside modern ones, and probably more stringent security and compliance policies, things look a little different. As the cadence of change increases, it becomes even more important to use a methodology like DevOps for building AI apps if we want to keep them relevant, stable and secure.
In DevOps, the goal is to improve both the quality of the app and the speed at which code is released into production. Modern development processes are leveraged, the dev and ops teams are often combined, and the most successful approaches rely heavily on automation.
As we saw above, when building an AI app we want to go beyond just the speed and quality of the application code; we also have the data and the algorithms to think about. This is where we need to bring in a data scientist persona. The data scientist is responsible for building and running the algorithms, and so will want control over the data used, the models, and the approach to training the AI and its inference.
Check back soon for Part 2, where we will talk about continual iteration, tool-chains and CI/CD, how VMware can help, and some tips on where to start!