ServiceNow and Nvidia's new reasoning AI model raises the bar for enterprise AI agents

Many have dubbed this year “the year of AI agents,” as these AI systems that can carry out tasks for users are especially useful for optimizing enterprise workflows. At ServiceNow’s annual Knowledge 2025 conference, the company unveiled a new model in partnership with Nvidia to advance AI agents.
Apriel Nemotron 15B
On Tuesday, ServiceNow and Nvidia launched Apriel Nemotron 15B, a new, open-source reasoning large language model (LLM) built to deliver lower latency and inference costs for agentic AI. According to the release, the model was trained using Nvidia NeMo, the Nvidia Llama Nemotron Post-Training Dataset, and ServiceNow’s domain-specific data.
Also: Nvidia’s 70+ projects at ICLR show how raw chip power is central to AI’s acceleration
The biggest takeaway is that the model packages advanced reasoning capabilities into a smaller size. That makes it cheaper and faster to run on Nvidia GPU infrastructure as an Nvidia NIM microservice while still delivering the enterprise-grade intelligence companies are looking for.
The company says Apriel Nemotron 15B shows promising benchmark results for models of its size, suggesting it could be a good fit for supporting agentic AI workflows.
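Models packaged as NIM microservices are typically served behind an OpenAI-compatible HTTP endpoint. A minimal sketch of building such a request follows; the local URL and the model identifier are placeholders for illustration, not confirmed values from the announcement:

```python
import json
from urllib import request

# Hypothetical local endpoint for a NIM microservice; the actual URL
# and model identifier depend on your deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "servicenow/apriel-nemotron-15b"):
    """Build an OpenAI-compatible chat-completion request payload."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # lower temperature for more deterministic reasoning
    }
    req = request.Request(
        NIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return req, payload

req, payload = build_chat_request("Summarize this incident ticket.")

# Sending the request (uncomment once a NIM endpoint is actually running):
# with request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint follows the OpenAI chat-completions shape, existing client tooling can usually point at it by swapping the base URL.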
Also: Will synthetic data derail generative AI’s momentum or be the breakthrough we need?
Reasoning capabilities are especially important for agentic AI because, in these automated experiences, the AI performs tasks for the end user in various settings. Since it acts without human direction, it needs to do some reasoning of its own to determine how best to proceed.
Joint data flywheel architecture
In addition to the model, the two companies also unveiled a joint data flywheel architecture — a feedback loop that collects data from interactions to further refine AI models. The architecture integrates ServiceNow Workflow Data Fabric and select Nvidia NeMo microservices, according to the release.
Also: Nvidia launches NeMo software tools to help enterprises build custom AI agents
This joint architecture lets companies use enterprise workflow data to further refine their reasoning models, with guardrails in place to protect customers, process data securely and promptly, and give companies the control they want. Ideally, this feeds into the creation of highly personalized, context-aware AI agents, according to the company.
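The core of any data flywheel is a feedback loop: log agent interactions, filter them by quality, and feed the good ones back into fine-tuning. A simplified sketch of that filtering step, with hypothetical field names (a real pipeline like the one described would also include the guardrail, privacy, and security steps mentioned above):

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged agent interaction from an enterprise workflow."""
    prompt: str
    response: str
    user_rating: int  # hypothetical 1-5 feedback score

def build_finetune_set(interactions, min_rating=4):
    """Keep only highly rated interactions as prompt/completion
    pairs for a further fine-tuning round."""
    return [
        {"prompt": i.prompt, "completion": i.response}
        for i in interactions
        if i.user_rating >= min_rating
    ]

logs = [
    Interaction("Reset my VPN access", "Opened ticket and reset credentials.", 5),
    Interaction("Summarize the outage", "Low-quality rambling answer.", 2),
]
pairs = build_finetune_set(logs)  # only the highly rated pair survives
```

Each refinement round then trains on `pairs`, and the improved model generates the next batch of interactions, closing the loop.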