From concept to reality: A practical guide to agentic AI deployment

Deployment: Automating the LLM operations lifecycle
Keep in mind that everything surrounding artificial intelligence and agentic AI is still evolving. Models are being released faster, which introduces model management activities we didn't have to handle previously. Tooling is evolving, and new frameworks are being released that streamline processes and reduce technical debt. Your AI solution needs to evolve as well: you will iterate more frequently than you would with traditional non-AI solutions, and you need a versioning strategy to keep up with modifications and new features.
Without planned updates, a versioning strategy and iterative tests that are updated alongside them, your AI system will become obsolete. It becomes unreliable, and it turns into technical debt that you will struggle to maintain.
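One way to make such a versioning strategy concrete is to pin every moving part of the solution (model snapshot, prompt version, evaluation suite) to an explicit release. The sketch below is illustrative, not a prescribed implementation; the class and field names are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRelease:
    """Pins every moving part of an agentic AI solution to an explicit version."""
    release: str         # semantic version of the overall solution
    model: str           # model identifier, pinned rather than "latest"
    prompt_version: str  # version of the system prompt / instructions
    eval_suite: str      # version of the regression evaluation suite

def is_upgrade(current: AgentRelease, candidate: AgentRelease) -> bool:
    """A candidate is an upgrade only if its semantic version is higher."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(candidate.release) > parse(current.release)

# Example release records (identifiers are hypothetical)
current = AgentRelease("1.4.0", "model-2024-08", "prompts-v12", "evals-v7")
candidate = AgentRelease("1.5.0", "model-2024-11", "prompts-v13", "evals-v8")
print(is_upgrade(current, candidate))  # True
```

Pinning the prompt and evaluation-suite versions alongside the model means a model swap never ships without the tests that were updated for it.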
The benefits of fully automating the LLM operations lifecycle (enhanced efficiency, consistency and reliability, along with continuous improvement, cost-effectiveness and compliance) far outweigh the cost.
Agentic AI solutions have immense potential for businesses seeking to automate tasks and enhance efficiency. But if you aren't deploying, testing, monitoring and automating the process, it doesn't matter how good your solution is or what its potential could have been.
In this article, we have covered the processes around agentic AI DevOps, but I want you to take away five practices that form the foundational baseline for every successful implementation:
- Automate, automate, automate: Automate tasks, create automation pipelines, automate testing, automate evaluations, automate the deployment of monitoring.
- Deploy to containers and virtual environments: Run solutions in Docker containers to isolate the agents and constrain their access.
- Restrict access: Limit the agents’ access to resources, the internet and data repositories to prevent unauthorized access or data oversharing.
- Monitor: Monitor output logs, performance logs and custom metrics during and after execution to identify issues that require human review. Create a baseline and compare against it to easily spot unintended behavior.
- Human oversight: Run tests with humans in the loop to supervise the agents and ensure that you have included all scenarios that will require human intervention.
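The baseline comparison in the monitoring point can be sketched simply: capture metrics from a known-good run, then flag any metric in a later run that drifts beyond a tolerance. The metric names and threshold below are illustrative assumptions, not part of any specific tooling.

```python
# Baseline captured from a known-good run of the agent (values are examples).
baseline = {
    "avg_latency_s": 2.1,
    "tool_calls_per_task": 3.0,
    "refusal_rate": 0.02,
}

def detect_drift(run_metrics: dict, baseline: dict, tolerance: float = 0.25) -> list:
    """Return the metrics deviating from baseline by more than `tolerance` (relative)."""
    flagged = []
    for name, expected in baseline.items():
        observed = run_metrics.get(name)
        if observed is None:
            flagged.append(name)  # a missing metric is itself a signal
        elif abs(observed - expected) / expected > tolerance:
            flagged.append(name)
    return flagged

latest_run = {"avg_latency_s": 2.3, "tool_calls_per_task": 7.5, "refusal_rate": 0.02}
print(detect_drift(latest_run, baseline))  # ['tool_calls_per_task']
```

Anything flagged goes to human review, which connects the monitoring and human-oversight points: the baseline decides what is routine, people decide what is acceptable.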
Stephen Kaufman serves as a chief architect in the Microsoft Customer Success Unit Office of the CTO focusing on AI and cloud computing. He brings more than 30 years of experience across some of the largest enterprise customers, helping them understand and utilize AI ranging from initial concepts to specific application architectures, design, development and delivery.
This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.