Microsoft on how custom AI offers your business better answers, lower costs, faster innovation



Large language models like ChatGPT’s GPT-4o seem to have all the information in the known universe, or at least what engineers could scan off the internet.

But what if you want to use a large language model (LLM) with proprietary information from your own company data, or specialized information that’s not publicly available on the internet, or otherwise train an LLM to have specialized knowledge?

Do you build an LLM from scratch? Do you use a small, open-source, self-hosted model that contains only your information?

Also: A few secretive AI companies could crush free society, researchers warn

As it turns out, you can start with an LLM like GPT-4o and then build on top of it. That's called custom AI.

In this article, Eric Boyd, Microsoft corporate vice president for AI platforms, shares with ZDNET how Microsoft makes custom AI possible for its customers, what goes into a custom model, what the whole process involves, and some best practices.


Eric Boyd, Microsoft corporate vice president for AI platforms.

Microsoft

Let’s get started.

ZDNET: Can you introduce yourself and provide an overview of your role at Microsoft and with its AI platform?

Eric Boyd: I lead the AI platform team at Microsoft. It has been a crazy couple of years in the AI space.

I started working at Microsoft in 2009 in the Bing organization, and it has been phenomenal seeing things evolve from there, because so much of Microsoft’s AI innovation started with Bing. We built the infrastructure to train AI models, to iterate and experiment to see which AI model was performing best. And all that infrastructure turned into pieces and components of things that we now serve through Azure AI Foundry.

Through Azure AI Foundry, we help companies access everything from thousands of GPUs to build and train their own AI models, to the tools needed to manage that, to a catalog of AI models, large and small, open and frontier, which we offer via our partnership with OpenAI and other providers.

We also provide tools to build applications on top of these AI models, including a wide range of capabilities our customers need to make sure they can do so responsibly.

Ultimately, my team is focused on building Azure AI Foundry so it includes everything a customer or developer might need to build their AI solutions, and easily move from idea to implementation in a secure and trusted way.

Generative AI vs. custom AI

ZDNET: So, last year we had generative AI. Now we have custom AI. What is it, and why isn’t generative AI enough?

EB: As companies have started to deploy applications, generative AI and the base foundation models have gotten them pretty far. But many are finding corner cases where the base foundation models don’t answer super well.

Also: The best AI chatbots: ChatGPT, Copilot, and notable alternatives

So custom AI is a company's ability to use its own data to customize its core model to get better-quality answers to questions, and in some cases to do so with a lower-cost model.

ZDNET: What are the key advantages of custom AI over off-the-shelf generative AI solutions?

EB: Quality and cost are the two primary advantages. With custom AI, you can improve the quality of your application's answers by finding where the foundation model is weak and then fine-tuning it on those cases. Fine-tuning also lets you, in some cases, use a lower-cost model and still get higher-cost-model quality.
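
To make the mechanics concrete, here is a minimal sketch of what that workflow can look like, assuming the Azure OpenAI fine-tuning API accessed through the openai Python SDK. The endpoint, key, API version, training file, and base-model identifier are all placeholders; which models you can actually fine-tune depends on your subscription and region.

```python
# Minimal sketch: fine-tune a lower-cost base model on examples collected
# from places where the off-the-shelf model answered poorly.
# Assumptions: Azure OpenAI fine-tuning via the `openai` Python SDK;
# endpoint, key, API version, and model identifier are placeholders.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-10-21",                                 # placeholder
)

# Each training example is a short chat transcript showing the answer you wanted.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer questions about Contoso's internal HR policies."},
        {"role": "user", "content": "How many floating holidays do new hires get?"},
        {"role": "assistant", "content": "New hires receive two floating holidays per calendar year."},
    ]},
]
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Upload the data set and start a fine-tuning job against a smaller base model.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder: availability varies by region
)
print("Fine-tuning job:", job.id, job.status)
```

Keeping that train.jsonl file around also matters later: as Boyd notes below, re-customizing a next-generation base model is then mostly a matter of re-running the job against your existing data set.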

ZDNET: Can you share examples of how businesses have successfully implemented custom AI solutions?

EB: Microsoft is widely applying this technique across our tech stack, as we often act as our own “customer zero,” which has enabled us to experiment, learn, and hone cutting-edge best practices. GitHub Copilot and Nuance DAX were both extensively fine-tuned and customized with specialized coding output and healthcare knowledge. As the quality of the output increases, so does adoption.

DAX Copilot has now surpassed two million monthly physician-patient encounters, up 54% quarter-over-quarter, and is being used by top providers like Mass General Brigham, Michigan Medicine and Vanderbilt University Medical Center. By fine-tuning to this specific data, the solution does a better job producing a medical record as opposed to just summarizing a doctor-patient conversation.

Also: Want to win in the age of AI? You can either build it or build your business with it

We’re in a unique position with many AI applications across the suite of Microsoft products, and in building those, we’ve learned a lot about what people want to do next. By understanding how various techniques have helped our own applications, we have a solid vision for how this is going to help our customers’ applications.

ZDNET: What advice would you give to companies just beginning their AI customization journey?

EB: I generally encourage companies to prove their use case works using the most powerful foundation model possible, and then look at steps to either improve quality or reduce cost.

Customization is a technique for both. To do it, a company needs to have used its application enough to know its weak spots, the places where the model and data aren't answering questions the way it wants, and then start collecting that data and building a repository of what it wants the model to do. That's eventually the data used to customize the model.
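
For readers wondering what that repository might look like in practice, here is a minimal, generic sketch of the collection step: log each prompt, response, and user-feedback signal, then pull the flagged failures out as candidates for the fine-tuning set. The storage format and field names are illustrative assumptions, not a Microsoft tool.

```python
# Minimal, generic sketch of collecting the data Boyd describes: log every
# interaction plus a thumbs-up/down signal, then extract the failures as raw
# material for corrected fine-tuning examples. File format and field names
# are illustrative assumptions.
import json
import time

LOG_PATH = "interaction_log.jsonl"

def log_interaction(prompt: str, response: str, thumbs_up: bool) -> None:
    """Append one user interaction and its feedback signal to the log."""
    record = {"ts": time.time(), "prompt": prompt, "response": response, "thumbs_up": thumbs_up}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def failure_cases() -> list[dict]:
    """Return the interactions users flagged as wrong -- the places where the
    application isn't answering the way you want."""
    with open(LOG_PATH) as f:
        return [r for r in (json.loads(line) for line in f) if not r["thumbs_up"]]
```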

Also: Autonomous businesses will be powered by AI agents

In the era of AI, data is a changemaker as these systems require high-quality, accessible and secure data to function properly. Making sure they have that data is a key part of customizing the model. We are working to help customers modernize their data to the cloud, and unify their data estates to build the next generation of intelligent apps.

Optimize your AI investment

ZDNET: What are the cost implications of developing and maintaining custom AI solutions, and how can companies optimize their investments?

EB: The cost of fine-tuning a model is often relatively modest, but it's still an important investment, because there are also costs for collecting the data and then training the model. Customers also need to consider the lifespan of the model.

When fine-tuning, we suggest starting with a foundational model (GPT-4o, or the like) to customize. When the next-generation model comes out, you can either choose “I’m going to keep my customized model” or “I am going to re-customize the next-generation model.”

Also: AI agent deployments will grow 327% during the next two years. Here’s what to do now

Keeping your data set will make that subsequent customization easier, but you would still have to run it again. That's something to consider, but not something to worry too much about, because the impact depends on the pace of model innovation.

We can’t say what the future holds for new model capabilities, but customers who fine-tuned GPT-4o a year ago would likely be happy with their solution today, despite advancements in reasoning models like the o1 series.

ZDNET: What are the most common hurdles organizations face when implementing custom AI, and how can they overcome them?

EB: To customize models, you need data that targets the places in your application where you want improvement. General data likely won't get you to that next level. You need data from where your application isn't performing as you want, so you can determine how to improve it.

In the past, most companies haven't been accustomed to doing this, so it's a new muscle to build. Although there are tools and techniques to automate it, many companies don't have people who know how to use them, so they need to invest in developing those skills first and foremost, and then work on applying them.

ZDNET: What ethical considerations should organizations keep in mind when deploying custom AI?

EB: I don’t think custom AI brings new ethical considerations. It’s the same set of things you must consider broadly with generative AI. It’s “Here’s this application I’ve developed. How am I going to make sure it behaves responsibly for my brand, for my applications, and for the potential implications of how this application will get used?”

Also: The best AI for coding (including two new top picks – and what not to use)

All the things we cover in our Responsible AI Standard about how we think people should behave still need to be considered. One of the benefits of using our platform to develop and deploy your AI applications is that Microsoft offers tools like Azure AI Content Safety that work with custom models, so customers can be confident their systems are responsible by design.
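
As one illustration of how a tool like that fits around a custom model, here is a minimal sketch of screening a model's output before it reaches a user, assuming the azure-ai-contentsafety Python SDK. The endpoint, key, and severity threshold are placeholders you would tune to your own policy.

```python
# Minimal sketch: screen a custom model's output with Azure AI Content Safety
# before showing it to a user. Assumes the `azure-ai-contentsafety` Python SDK;
# endpoint, key, and the severity threshold are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

safety_client = ContentSafetyClient(
    endpoint="https://YOUR-RESOURCE.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("YOUR-API-KEY"),                 # placeholder
)

def is_safe(model_output: str, max_severity: int = 2) -> bool:
    """Analyze the text across the service's harm categories and block
    anything whose severity exceeds the chosen threshold."""
    result = safety_client.analyze_text(AnalyzeTextOptions(text=model_output))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

if is_safe("Here is the summary you asked for..."):
    print("Response passed the content-safety check.")
```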

Bias, fairness, and transparency

ZDNET: How does Microsoft address concerns around bias, fairness, and transparency in custom AI models?

EB: Today, we offer over 30 tools and 100 features to help our customers, developers, and researchers responsibly build with AI. Though Azure AI Content Safety is embedded by default in all models in the Azure AI Foundry catalog, preventing misuse and abuse at the model level alone is nearly impossible. That’s why it’s imperative to also have systems and tools that help you test and monitor every step of the way, before, during, and after deployment.

Microsoft aims to help customers through every layer of generative AI risk mitigation. We have tools to help users map, measure, mitigate, monitor, respond, and govern. We are looking at this from the system level, the user level, and the model level. We are continuing to invest in research on identifying, measuring, and mitigating different types of fairness-related harms, and we are innovating in new ways to proactively test our AI systems, as outlined in our Responsible AI Standard.

ZDNET: How does Microsoft Azure support businesses in tailoring AI models to their specific needs?

EB: We’ve been building systems into Azure AI Foundry to simplify this process. There’s the fine-tuning service itself, and observability services that make it easier to collect data on applications, which in turn can be used for fine-tuning.

ZDNET: What role does open-source AI play in the customization and scalability of AI solutions?

EB: We’ve seen a lot of innovation in the open-source model space, mostly at lower price points (and therefore lower quality points). But those lower-cost models are often good places to start because you can test and experiment to see if you can achieve the quality you’d get with a higher-priced model.

Also: I tested 10 AI content detectors – and these 5 correctly identified AI text every time

In general, the innovation in this space has brought a lot of model variety into the Azure AI Foundry model catalog, which customers can evaluate against their own use cases to choose the best model.
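
To show what that evaluate-and-choose step can look like, here is a small, model-agnostic harness sketch: run the same test prompts through a lower-cost open model and a frontier model and compare how often each hits the expected answer. The callables passed to compare() are hypothetical stand-ins for whatever endpoints you deploy from the catalog.

```python
# Minimal, model-agnostic sketch of comparing a lower-cost open model against
# a frontier model on your own test prompts. The callables passed in are
# hypothetical stand-ins for real model endpoints; the test set is illustrative.
from typing import Callable

TEST_SET = [
    {"prompt": "What is Contoso's standard PTO accrual rate?",
     "expected": "1.5 days per month"},
    # ...more cases, ideally drawn from your own application's logs
]

def score(ask: Callable[[str], str]) -> float:
    """Fraction of test prompts whose response contains the expected answer."""
    hits = sum(1 for case in TEST_SET if case["expected"].lower() in ask(case["prompt"]).lower())
    return hits / len(TEST_SET)

def compare(ask_small_model: Callable[[str], str], ask_frontier_model: Callable[[str], str]) -> None:
    """Print side-by-side accuracy so you can see whether the cheaper model is good enough."""
    print(f"small/open model: {score(ask_small_model):.0%}")
    print(f"frontier model:   {score(ask_frontier_model):.0%}")
```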

ZDNET: What are the key differences between fine-tuning existing AI models and building AI solutions from scratch?

EB: It's massively expensive to build your own model from scratch, whereas fine-tuning is quite reasonable for most applications. Cost would be the primary difference. But if you're just building a standard AI solution using a traditional foundation model (not a customized model), the primary difference is that you may be giving up some quality or paying a higher price, the two main levers you're optimizing for.

Agents are the apps of the AI era

ZDNET: What impact do you foresee AI copilots having on enterprise AI strategies?

EB: Large language models have changed how business gets done in enterprises, and we see that only continuing to accelerate. With our customers, we're increasingly seeing them build applications that perform tasks and complete work for people, as opposed to just answering a question.

Also: What are AI agents? How to access a team of personalized assistants

This is the shift toward AI agents being discussed. Agents are the apps of the AI era. Every line-of-business system today is going to get reimagined as an agent that sits on top of a copilot. That is going to transform large swaths of different business processes.

ZDNET: How should organizations balance AI automation with human oversight to ensure optimal outcomes?

EB: This is a key question. These models do many things, but not everything well. Ensuring we understand their capabilities and have people ultimately accountable for the work that gets done must be a key part of responsible AI policies, and a key part of how we recommend applications be built.

Also: Why scaling agentic AI is a marathon, not a sprint

The spirit of Microsoft’s AI tools is about advancing human agency, putting the human at the center, and being grounded in their context. We are creating platforms and tools that, rather than acting as a substitute for human effort, can help humans with cognitive work.

ZDNET: If you could offer one key takeaway to business leaders exploring custom AI, what would it be?

EB: As AI applications become a larger part of each business's portfolio, companies will miss out if they don't think through their customization strategy to ensure the highest-quality, best-performing applications at the best price.

For companies wanting to get started today with custom AI, I say: Look at your generative AI application, target where in that application you want to improve, collect some data, and give it a shot.

ZDNET: How do you see the future of AI evolving beyond custom AI, and what’s the next major shift on the horizon?

EB: We’ve spent the past two years building applications that know how to use your data to help you answer a question and then give you a text answer back. I think we’re going to spend the next two years building applications that perform part of the work for you.

Also: You’ll soon manage a team of AI agents, says Microsoft’s Work Trend report

In this scenario, you can assign tasks and expect them to get done, sometimes autonomously via agents, rather than in a synchronous chat conversation. But an agent is just a large language model application that you can ask to do work and perform actions.

Within those applications, you will still find places where customized models will improve the quality of the system, even when the compute is happening behind the scenes.
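
Boyd's framing of an agent as an LLM application you can ask to do work maps fairly directly onto the tool-calling pattern in today's chat APIs. Here is a minimal sketch using the openai SDK against an Azure OpenAI deployment; the endpoint, key, deployment name, and the create_ticket action are illustrative assumptions, not anything Boyd or Microsoft specified.

```python
# Minimal sketch of an agent-style interaction: the model is given a tool it
# may call to perform an action instead of just answering. Assumes the
# `openai` SDK against an Azure OpenAI deployment; endpoint, key, deployment
# name, and the create_ticket action are illustrative assumptions.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-10-21",                                 # placeholder
)

def create_ticket(title: str, priority: str) -> str:
    """Hypothetical line-of-business action the agent is allowed to take."""
    return f"Created ticket '{title}' with priority {priority}"

tools = [{
    "type": "function",
    "function": {
        "name": "create_ticket",
        "description": "Open a support ticket in the internal tracker.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "priority": {"type": "string", "enum": ["low", "medium", "high"]},
            },
            "required": ["title", "priority"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder deployment name
    messages=[{"role": "user", "content": "Our invoicing page is down, please open a high-priority ticket."}],
    tools=tools,
)

# If the model decided to act, execute the requested tool call.
for call in response.choices[0].message.tool_calls or []:
    if call.function.name == "create_ticket":
        args = json.loads(call.function.arguments)
        print(create_ticket(**args))
```

In a real agent, the result of that action would typically be fed back to the model so it can confirm the work or decide on the next step.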

Have you explored custom AI?

What about your organization? Have you explored enterprise-grade AI customization yet? What challenges or opportunities do you see in tailoring foundation models to your own data? Are you considering fine-tuning models like GPT-4o or working with open-source alternatives? What role do you think agents and copilots will play in your business strategy? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.

Want more stories about AI? Sign up for Innovation, our weekly newsletter.




