Generative AI dominates VMware Explore news
There were some cloud announcements this week at VMware Explore in Las Vegas, but AI was the star, as it has been at nearly every tech company lately. Vendors have been rushing to add generative AI to their platforms, and VMware is no exception.
The biggest AI features to emerge from the conference – VMware Private AI Foundation and Intelligent Assist – won’t be fully available for months. VMware Private AI Foundation is a joint development with Nvidia that will enable enterprises to customize models and run generative AI applications on their own infrastructure. Intelligent Assist is a family of generative AI-based solutions trained on VMware’s proprietary data to automate IT tasks.
One new tool – SafeCoder – is already available, however. VMware collaborated with Hugging Face to help launch SafeCoder, a commercial solution built around an open-source large language model that companies can fine-tune on their own code bases and run in their own secure environments.
VMware is using SafeCoder internally and publishing a reference architecture with code samples for customers that want to operate SafeCoder on VMware infrastructure.
“We deployed the SafeCoder model in our own data center months ago,” said Chris Wolf, vice president of VMware AI Labs. He spoke about SafeCoder on panels on Tuesday and Wednesday. “It’s a 15-billion-parameter model specifically tuned for software development.”
Then VMware performed additional training using some of its own code base. “We looked at who were the best software engineers in the company, and tuned the model on their commits,” Wolf said. Since this was a smaller, specialized model, the fine-tuning only took four hours, he said.
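The data-selection step Wolf describes — keeping only the commits of a company's strongest engineers and turning them into training examples — can be sketched roughly as below. The record shape, author list, and field names are illustrative assumptions, not VMware's actual pipeline:

```python
# Hypothetical sketch: filter a commit history down to the work of
# selected engineers and emit (prompt, completion) pairs suitable for
# fine-tuning a code model. All names here are made up for illustration.

TOP_ENGINEERS = {"alice@example.com", "bob@example.com"}  # assumed list

def select_training_pairs(commits):
    """Keep only commits by the chosen authors and turn each one into
    a prompt/completion training pair."""
    pairs = []
    for commit in commits:
        if commit["author"] not in TOP_ENGINEERS:
            continue
        pairs.append({
            "prompt": commit["message"],    # the natural-language intent
            "completion": commit["diff"],   # the code change itself
        })
    return pairs

commits = [
    {"author": "alice@example.com", "message": "Fix null check",
     "diff": "+ if x is None: return"},
    {"author": "mallory@example.com", "message": "WIP", "diff": "+ pass"},
]
print(select_training_pairs(commits))
```

Because the curated set is small relative to a general pretraining corpus, fine-tuning on it is cheap — consistent with the four-hour figure Wolf cites for a 15-billion-parameter model.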
“The resulting solution that we offered to our software engineers has seen a 93% acceptance rate from our software developers,” he said. “That means they liked the solution, and they are continuing to use the solution because it makes them more productive.”
VMware also released a reference architecture for its own AI stack.
Software development is low-hanging fruit for generative AI, said VMware CEO Raghu Raghuram at an AI panel on Tuesday. “More companies are under developer constraints, and this can help them get more productive.”
Plus, coding AIs can be trained on a relatively homogeneous, constrained training data set, he added. “It’s a single data source in most companies.”
VMware itself isn’t going to get into the business of building generative AI models, however.
“We will be providing infrastructure,” Raghuram said. “We see tremendous opportunity in doing what we do best – providing a platform for companies to build and run their generative AI on, and these platforms can be on premise, in Azure, in AWS, and in other places.”
Tanzu, Workspace ONE and NSX+ set to gain Intelligent Assist
A few lucky customers will get early access to Intelligent Assist, VMware’s new generative AI-powered solution that is designed to simplify and automate all aspects of enterprise IT, including networking. Intelligent Assist is built on top of VMware Private AI.
VMware products with Intelligent Assist include VMware’s cloud application platform Tanzu, which will allow users to conversationally request and refine changes to their enterprise’s cloud infrastructure.
According to Mike Wookey, VMware's vice president and CTO of cloud management, Tanzu Intelligent Assist will let users ask questions and take actions. It will also control the user interface of Tanzu products, he said, driving users to the correct views to analyze issues or act on them.
The generative AI here is from Azure AI services, which will see the prompt and the schema, but no private customer data, he said. “For the case of documentation, we use another private model that is trained on the Tanzu documentation and generate a specific summary,” Wookey said.
“This is all very bleeding edge on two fronts,” he added. “We are generating deterministic running code from natural language, verifying it, and using it to generate answers for the customers. We use a multi-model approach, ensuring the customer’s data is never exposed to the public LLM, yet still leveraging the public LLM’s broad knowledge and accuracy.”
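The multi-model pattern Wookey describes can be sketched as a routing function: the public LLM receives only the user's prompt and a non-sensitive API schema, documentation questions go to a private model, and the generated code runs locally against customer data that never leaves the environment. The function names, routing rule, and stub responses below are assumptions for illustration, not VMware's implementation:

```python
# Illustrative sketch of the multi-model privacy pattern.
# Only the prompt and the schema reach the (stubbed) public LLM;
# customer data stays local.

PUBLIC_SCHEMA = {"clusters": ["name", "node_count"]}  # non-sensitive schema

def public_llm(prompt, schema):
    # Stand-in for a hosted model call (e.g. an Azure AI service).
    return f"generated-code-for:{prompt}|schema-fields:{len(schema)}"

def private_docs_model(prompt):
    # Stand-in for a model trained only on product documentation.
    return f"docs-summary-for:{prompt}"

def answer(prompt, customer_data):
    """Route the request without sending customer_data off-site."""
    if prompt.lower().startswith("docs:"):
        return private_docs_model(prompt)
    code = public_llm(prompt, PUBLIC_SCHEMA)  # prompt + schema only
    # The generated code would execute locally against private data.
    return f"{code}|ran-on-{len(customer_data)}-records"

print(answer("list clusters", [{"name": "a"}, {"name": "b"}]))
```

The design choice is the point: the hosted model contributes broad code-generation ability, while everything sensitive — the data and the execution — remains inside the customer's boundary.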
Intelligent Assist will be generally available “within the year,” a VMware spokesperson said.
Beyond Tanzu, Workspace ONE with Intelligent Assist will let users create scripts using natural language prompts.
VMware’s core NSX+ networking platform is also getting a boost from Intelligent Assist. Now in tech preview, NSX+ with Intelligent Assist will allow security analysts to more quickly and accurately determine the relevance of security findings and remediate threats.
VMware also made some old-school AI announcements this week that use traditional machine learning approaches rather than generative AI.
For example, VMware announced AI integrations for its Anywhere Workspace platform that automatically optimize employee experience, drive new vulnerability management use cases, and simplify application lifecycle management.
VMware also teamed up with Domino Data Lab to provide a unified analytics, data science, and infrastructure platform that is optimized, validated, and supported, purpose-built for AI and ML deployments in the financial services industry.
VMware vSphere, vSAN 8 and Tanzu are now optimized with Intel’s AI software suite to take advantage of the new built-in AI accelerators on the latest 4th Gen Intel Xeon Scalable processors.
Responsible AI part of the conversation
Overall, there were more than two dozen sessions that focused on AI at the VMware Explore event, including a panel on responsible AI, focusing on the role that humans should play.
“Being reasonable about the boundaries of your technology is a good place to be,” said Meredith Broussard, a journalism professor at New York University, research director at the NYU Alliance for Public Interest Technology, and the author of several books about AI and bias.
She warned about the potential of biased results when AI models are trained on biased data. Unfortunately, it can be hard to figure out what data a model is trained on.
“For most of the models that are available today, the data sources are considered proprietary,” said VMware’s Wolf. “It’s hard to understand where the bias can come from if you can’t see the data. We had our engineers work with some of these models in the past, and in a very short time you can get the model to communicate some very unethical things.”
He said that VMware is looking to guide the industry toward lighter-weight, special-purpose models, which are easier to train, are more accurate – and even have a lower carbon footprint. “You can control the data and have full awareness of the data used to train the model,” Wolf said.
VMware rejected opportunities to use ChatGPT because of this and other concerns, he added.
“We were saying no to ChatGPT integrations because they could violate the privacy and compliance mandates that our customers had,” he said. “We want to understand the data sources that go into the model. Does the model have bias? We want to help customers understand the correctness of the AI as well as explainability of the AI results. We are being very mindful internally and externally on how we’re approaching AI.”
Copyright © 2023 IDG Communications, Inc.