What is Oracle’s generative AI strategy?
While Microsoft, AWS, Google Cloud, and IBM have already released their generative AI offerings, rival Oracle has so far been largely quiet about its own strategy. Instead of launching a competing offering in a rush, the company is quietly preparing a three-tier approach.
“Our tier strategy resembles a three-layer cake and each of these layers targets different enterprise customers depending on their needs,” said Karan Batta, vice president of Oracle Cloud Infrastructure (OCI).
The first tier, according to Batta, consists of its OCI Supercluster service and is targeted at companies, such as Cohere and Hugging Face, that are developing large language models to support their customers.
OCI’s Supercluster includes OCI Compute Bare Metal, which provides ultralow-latency remote direct memory access over Converged Ethernet (RoCE) cluster networking, along with a choice of high-performance computing storage options.
The AI supercomputing service, according to Oracle, can support thousands of OCI Compute Bare Metal instances with tens of thousands of Nvidia A100 GPUs for processing massively parallel applications.
The service also comes with Nvidia’s foundation models, such as BioNeMo and Nvidia Picasso, along with AI training and governance frameworks.
Rival cloud service providers such as Microsoft and Google have also partnered with Nvidia to take advantage of its DGX Cloud — a service based on the technology that also powers OpenAI’s ChatGPT.
AWS, on the other hand, offers Amazon Elastic Compute Cloud (Amazon EC2) P5 instances, powered by NVIDIA H100 Tensor Core GPUs, for large language model training and developing generative AI applications.
New generative AI service is in the works
Oracle’s second tier targets enterprises, including its OCI customers, that want to develop generative AI capabilities based on their own data for their own consumption, Batta said.
Although the service has not been formally named and much of it is still in the planning phase, the first formal indication that the company was planning such a service came in June, shortly after Oracle announced its investment in Canadian startup Cohere, which will provide foundation models as part of the new service.
However, the basic structure of the planned generative AI service appears very similar to rival offerings from other public cloud service providers: a package of tools, including foundation models, prompt engineering tools, and governance frameworks, that enterprises can use to train models on their own data.
To help enterprises develop their own generative AI applications or assistants, the new service will use connectors to tap enterprise data sources, build a knowledge graph, and run the data through LLM embeddings for semantic understanding before feeding it into large language models to generate responses, Batta said.
“When an enterprise user queries something in natural language, the generative AI assistant or prompt runs a vector search and the results of the vector search are stored inside an enterprise server or location before making an API call to the large language model for generating responses,” Batta added, underlining the data privacy aspect of the planned service.
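Batta’s description maps onto a fairly standard retrieval-augmented generation flow. The sketch below is a minimal illustration of that pattern in Python, not Oracle’s actual service: the `embed_text` and `generate_answer` functions are hypothetical placeholders for whatever embedding model and LLM endpoint an enterprise would plug in, and the in-memory index stands in for the vector store kept inside the enterprise environment.

```python
# Minimal retrieval-augmented generation sketch matching the flow Batta describes.
# embed_text() and generate_answer() are hypothetical placeholders for an embedding
# model and an LLM endpoint; they are not part of any announced Oracle API.

import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Placeholder: return an embedding vector for the given text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # deterministic stand-in
    return rng.standard_normal(384)

def generate_answer(prompt: str) -> str:
    """Placeholder: call a large language model with the assembled prompt."""
    return f"[LLM response to prompt of {len(prompt)} characters]"

# 1. Connectors pull documents from enterprise data sources (stubbed here).
documents = [
    "Q3 revenue for the EMEA region grew 12% year over year.",
    "The new HR policy takes effect on January 1.",
    "Support tickets are triaged within four business hours.",
]

# 2. Embed the documents and keep the index inside the enterprise environment.
index = [(doc, embed_text(doc)) for doc in documents]

def answer(question: str, top_k: int = 2) -> str:
    # 3. Run a vector search over the local index for the user's question.
    q_vec = embed_text(question)
    scored = sorted(
        index,
        key=lambda item: float(np.dot(q_vec, item[1]))
        / (np.linalg.norm(q_vec) * np.linalg.norm(item[1])),
        reverse=True,
    )
    context = "\n".join(doc for doc, _ in scored[:top_k])

    # 4. Only now make the API call to the LLM, passing the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate_answer(prompt)

print(answer("How fast did EMEA revenue grow?"))
```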
Although not confirmed yet, Batta said new foundation models for industry sectors such as health and public safety could be added to the service in the future.
Trailing other generative AI service offerings?
While AWS, Google Cloud, Microsoft, and IBM have laid out how their AI services are going to work, most of these services are currently in preview.
AWS offers foundation models via its generative AI-based service Amazon Bedrock, while Microsoft offers APIs for GPT models via its Azure OpenAI service.
IBM and Google Cloud, too, offer foundation models as part of their watsonx and Vertex AI services, respectively.
IBM and Google Cloud also offer low-code platforms, Tuning Studio and Generative AI Studio respectively, to help enterprises fine-tune models.
In contrast, Oracle has yet to detail how it will help enterprises access data and model-tuning tools as part of its planned service.
But, according to Batta, Oracle is leaning more toward “programmatic access” first, for technical users such as data engineers and data scientists, rather than providing a low-code or no-code experience for non-technical users “right out of the gate.”
This means that Oracle’s generative AI service is more likely to feature a prompt tool inside a SQL query editor, Batta said, adding that the company might rearrange elements of the service before launching it by the end of the year.
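As a rough illustration of what “programmatic access” with a prompt tool tied to SQL might look like in practice, the sketch below runs an ordinary SQL query and hands the result set to an LLM call. It is purely hypothetical: the `summarize_rows` helper and the prompt shape are assumptions for illustration, not details of Oracle’s unannounced service.

```python
# Hypothetical sketch of prompt-driven analysis over SQL query results.
# summarize_rows() stands in for an LLM call; nothing here reflects Oracle's actual API.

import sqlite3

def summarize_rows(instruction: str, rows: list[tuple]) -> str:
    """Placeholder: send the instruction plus serialized rows to a language model."""
    prompt = instruction + "\n" + "\n".join(str(r) for r in rows)
    return f"[LLM summary of {len(rows)} rows, prompt length {len(prompt)}]"

# In-memory table standing in for an enterprise data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, quarter TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", "Q3", 1.2), ("APAC", "Q3", 0.9), ("AMER", "Q3", 2.4)],
)

# A data engineer writes ordinary SQL, then prompts the model over the result set.
rows = conn.execute(
    "SELECT region, revenue FROM sales WHERE quarter = 'Q3' ORDER BY revenue DESC"
).fetchall()
print(summarize_rows("Summarize Q3 revenue by region in one sentence.", rows))
```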
Oracle is also planning to extend the service to enterprises that have their data and applications in their own data centers.
Generative AI assistance across Fusion Cloud apps, NetSuite
For the third tier of its strategy, Oracle plans to add generative AI capabilities across its entire portfolio of Fusion Cloud and NetSuite applications by embedding Cohere’s foundation models into these SaaS offerings.
Metadata from these applications will be combined with the foundation models to offer generative AI assistants that increase employee productivity, Batta said, likening the planned assistant to Microsoft’s Clippy.
The metadata from these applications will help the language model identify trends and understand patterns across a particular SaaS offering, Batta added.
Last month, the company added similar generative AI capabilities to its Fusion Cloud Human Capital Management (HCM) suite. These capabilities include assisted authoring, suggestions, and summarization.
While assisted authoring will help HR managers write job descriptions and other HR-related content using a short prompt, the summarization feature is expected to help with tasks such as employee performance analysis, the company said.
The suggestions feature, on the other hand, is expected to provide recommendations across various tasks, such as providing survey questions.
In the SaaS applications market, too, Oracle faces stiff competition from the likes of AWS, Salesforce, ServiceNow, and Microsoft.
Last week, AWS launched a new service, dubbed AppFabric, that aims to provide a unified generative AI experience across multiple SaaS applications.
Salesforce, meanwhile, previewed its bundled generative AI offering, dubbed AI Cloud, in June. Similarly, ServiceNow also enhanced its generative AI assistant last month.
Oracle is also expected to add generative AI capabilities to its databases portfolio, according to Batta.
Data warehouse and data lakehouse platforms such as Snowflake and Databricks have also introduced their own generative AI capabilities in the form of Snowpark Container Services and the Lakehouse AI toolbox.