GenAI Will Transform B2B Interactions and Solutions in the Year Ahead with New Depth of Context and Control
Human-like interaction with B2B solutions, bespoke multimodal LLMs for better accuracy and precision, curated workflow automation via LAMs, and customized B2B applications will become the norm as GenAI expands in the business sphere.
With the rapid launch of new solutions powered by generative AI (GenAI), the business-to-business (B2B) landscape is being reshaped in front of our eyes. Many organizations have taken a cautious and meticulously planned approach to widespread adoption of artificial intelligence (AI); however, the Cisco AI Readiness Index reveals just how much pressure they are now feeling.
61% of organizations anticipate adverse business impacts if they have not implemented an AI strategy within the next year. In some cases, the window may be even narrower as competitors pull away, leaving very little time to execute plans properly. The clock is ticking, and the call for AI integration – especially GenAI – is now louder than ever.
In her tech trend predictions for the new year, Chief Strategy Officer and GM of Applications Liz Centoni said GenAI-powered Natural Language Interfaces (NLIs) will become the norm for new products and services. “NLIs powered by GenAI will be expected for new products and more than half will have this by default by the end of 2024.”
NLIs allow users to interact with applications and systems using natural language and spoken commands, as they do with AI assistants, to trigger functionality and probe for deeper understanding. This capability will become available across most business-to-consumer (B2C) applications and services in 2024, especially for question-and-answer (Q&A) interactions between a human and a “machine”. However, associated B2B workflows and dependencies will require additional context and control for GenAI solutions to effectively elevate the overall business.
The point-and-click approach enabled by graphical user interfaces (GUIs) effectively binds users to a limited set of capabilities and a restricted view of data, based on the GUI requirements set by the business at the point of design. Multi-modal prompt interfaces (mainly text and audio) are fast changing that paradigm and expanding the UI/UX potential and scope. In the coming year, we’ll see B2B organizations increasingly leverage NLIs and context to “ask” specific questions of available data, freeing them from traditional constraints and offering a faster path to insight for complex queries and interactions.
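To make the idea concrete, here is a minimal sketch of an NLI over business data: a natural-language question is translated into a read-only query by a model and executed against a database. The `llm_complete` stub, the `orders` schema, and the `sales.db` path are all hypothetical stand-ins for whatever model endpoint and data store an organization actually uses.

```python
import sqlite3

# Hypothetical stand-in for whichever GenAI completion endpoint is in use;
# it takes a prompt string and returns the model's text response.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

# Illustrative schema description handed to the model (hypothetical table).
SCHEMA = "orders(order_id INTEGER, customer TEXT, region TEXT, amount REAL, created_at TEXT)"

def ask(question: str, db_path: str = "sales.db") -> list:
    """Translate a natural-language question into SQL, then run it read-only."""
    prompt = (
        "Translate the question into a single SQLite SELECT statement.\n"
        f"Schema: {SCHEMA}\n"
        f"Question: {question}\n"
        "Return only the SQL."
    )
    sql = llm_complete(prompt).strip()
    # Minimal guardrail: the NLI may only read data, never modify it.
    if not sql.lower().startswith("select"):
        raise ValueError(f"Refusing to run non-SELECT statement: {sql}")
    with sqlite3.connect(f"file:{db_path}?mode=ro", uri=True) as conn:
        return conn.execute(sql).fetchall()

# Example: ask("What were total sales in the EMEA region last quarter?")
```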
A good example of this is the contact center and its support chatbots as a B2C interface. Their user experience will continue to be transformed by GenAI-enabled NLIs and multi-modal assistants in 2024, but the natural next step is to enrich GenAI with additional context, enabling it to augment B2B dependencies (like services) and back-end system interactions, such as application programming interface (API) calls, to further boost accuracy and reach, minimize response time, and enhance user satisfaction.
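A sketch of what that back-end augmentation can look like: the assistant proposes a structured action, and a thin dispatch layer routes it to the appropriate back-end API, with the result fed back to the model as additional context. The endpoint URLs and tool names here are hypothetical; a real contact center would point at its own order-management and ticketing systems.

```python
import requests  # assumes the back-end systems expose plain HTTP APIs

# Hypothetical back-end endpoints for a contact-center assistant.
TOOLS = {
    "order_status": "https://backend.example.com/orders/{order_id}",
    "open_ticket": "https://backend.example.com/tickets",
}

def dispatch(action: dict) -> dict:
    """Route a model-proposed action ({'name': ..., 'args': {...}}) to the matching back-end API."""
    if action["name"] == "order_status":
        url = TOOLS["order_status"].format(order_id=action["args"]["order_id"])
        return requests.get(url, timeout=10).json()
    if action["name"] == "open_ticket":
        return requests.post(TOOLS["open_ticket"], json=action["args"], timeout=10).json()
    raise ValueError(f"Unknown tool: {action['name']}")

# The assistant is prompted to reply with a structured action such as
#   {"name": "order_status", "args": {"order_id": "A-1042"}}
# and dispatch() turns it into a real back-end call whose result is returned
# to the model as context before it answers the customer.
```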
Meanwhile, as faster, in-context paths to insight become more relevant and the associated GenAI-enabled data flows become mainstream, large action models (LAMs) will start to be considered as a potential future step to automate some enterprise workflows, most likely starting in the realms of IT, security, and auditing and compliance.
Additional B2B considerations with GenAI
As Centoni said, GenAI will be increasingly leveraged in B2B interactions, with users demanding more contextualized, personalized, and integrated solutions. “GenAI will offer APIs, interfaces, and services to access, analyze, and visualize data and insights, becoming pervasive across areas such as project management, software quality and testing, compliance assessments, and recruitment efforts. As a result, observability for AI will grow.”
As the use of GenAI grows exponentially, it will simultaneously amplify the need for more comprehensive and deeper observability. AI is revolutionizing the way we analyze and process data, and observability is fast evolving with it, offering a more intelligent and automated approach that spans monitoring and triage across real-time dependencies, troubleshooting of complex systems, and the deployment of automated actions and responses.
Observability over modern applications and systems, including those powered by or leveraging AI capabilities, will be increasingly augmented by GenAI, for example for root-cause analysis, predictive analysis, and drilling down on multi-cloud resource allocation and costs, as well as the performance and security of digital experiences.
Driven by growing demand for integrated solutions they can adapt to their specific needs, B2B providers are turning to GenAI to power services that boost productivity and accomplish tasks more efficiently than their current systems and implementations. Among these is the ability to access and analyze vast volumes of data to derive insights that can be used to develop new products, optimize dependencies, and design and refine the digital experiences supported by applications.
Starting in 2024, GenAI will be an integral part of business context, so observability will naturally need to extend to it, widening the scope of full-stack observability. Besides costs, GenAI-enabled B2B interactions will be particularly sensitive to both latency and jitter. This fact alone will drive significant growth in demand over the coming year for end-to-end observability – spanning the internet as well as the critical networks that support these B2B interactions – to keep AI-powered applications running at peak performance.
On the other hand, as businesses recognize potential pitfalls and seek increased control and flexibility over their AI model training, data retention, and explainability processes, the demand for bespoke, domain-specific GenAI large language models (LLMs) will also increase significantly in 2024. As a result, organizations will pick up the pace of adapting GenAI LLMs to their specific requirements and contexts by leveraging private data, introducing up-to-date information via retrieval augmented generation (RAG), fine-tuning parameters, and scaling models appropriately.
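A minimal sketch of the RAG pattern over private data: documents are embedded once, the most relevant ones are retrieved for each question, and the model is asked to answer only from that retrieved context. The `embed` and `llm_complete` stubs are hypothetical placeholders for whichever embedding model and (possibly fine-tuned) LLM an organization adopts.

```python
import numpy as np

# Hypothetical stand-ins: embed() maps text to a vector using the organization's
# chosen embedding model, and llm_complete() calls its (possibly fine-tuned) LLM.
def embed(text: str) -> np.ndarray:
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    raise NotImplementedError

def build_index(docs: list) -> np.ndarray:
    """Embed the private document collection once, normalizing for cosine similarity."""
    vectors = np.stack([embed(d) for d in docs])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def answer(question: str, docs: list, index: np.ndarray, k: int = 3) -> str:
    """Retrieve the k most relevant private documents and ground the model's answer in them."""
    q = embed(question)
    q = q / np.linalg.norm(q)
    top = np.argsort(index @ q)[-k:][::-1]  # highest cosine similarity first
    context = "\n\n".join(docs[i] for i in top)
    prompt = (
        "Answer using only the context below; say 'unknown' if it is not covered.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```

Because fresh documents can simply be added to the index, the model's knowledge stays current without retraining.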
Moving fast towards contextual understanding and reasoning
GenAI has already evolved from reliance on a single data modality to include training on text, images, video, audio, and other inputs simultaneously. Just as humans learn by taking in multiple types of data to create more complete understanding, the growing ability of GenAI to consume multiple modalities is another significant step towards greater contextual understanding.
These multi-modal capabilities are still in the early stages, although they are already being considered for business interactions. Multi-modality is also key to the future of LAMs – sometimes called AI agents – which bring complex reasoning, multi-hop thinking, and the ability to generate actionable outputs.
True multi-modality not only improves overall accuracy, it also exponentially expands the possible use cases, including for B2B applications. Consider a customer sentiment model tied to a forecast trending application that can capture and interpret audio, text, and video, yielding complete insight that includes context such as tone of voice and body language rather than simply transcribing the audio. Recent advances allow RAG to handle both text and images: in a multi-modal setup, images can be retrieved from a vector database and passed through a large multimodal model (LMM) for generation. The RAG method thus enhances task efficiency: it can be fine-tuned, and its knowledge can be updated easily without retraining the entire model.
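Extending the text-only retrieval sketch above to images, a multi-modal setup might look like the following, assuming a joint text-image embedding space and an LMM endpoint that accepts a prompt plus image files; `embed_image`, `embed_text`, and `lmm_complete` are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical stand-ins: embed_image()/embed_text() share a joint text-image
# embedding space, and lmm_complete() calls a large multimodal model that
# accepts a prompt plus image files.
def embed_image(path: str) -> np.ndarray:
    raise NotImplementedError

def embed_text(text: str) -> np.ndarray:
    raise NotImplementedError

def lmm_complete(prompt: str, image_paths: list) -> str:
    raise NotImplementedError

def multimodal_answer(question: str, image_paths: list, k: int = 2) -> str:
    """Retrieve the images most relevant to the question and let the LMM reason over them."""
    index = np.stack([embed_image(p) for p in image_paths])
    index = index / np.linalg.norm(index, axis=1, keepdims=True)
    q = embed_text(question)
    q = q / np.linalg.norm(q)
    top = np.argsort(index @ q)[-k:][::-1]  # most similar images first
    chosen = [image_paths[i] for i in top]
    return lmm_complete(f"Answer using the attached images.\nQuestion: {question}", chosen)
```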
With RAG in the picture, consider now a model that identifies and analyzes commonalities and patterns in job-interview data by consuming resumes, job requisitions from across the industry (from peers and competitors), and online activities (from social media to posted video lectures), and that is then augmented with candidate-recruiter email interactions as well as the actual interview video calls. That example shows why both RAG and responsible AI will be in high demand during 2024.
In summary, in the year ahead we will begin to see a more robust emergence of specialized, domain-specific AI models. There will be a shift towards smaller, specialized LLMs that offer higher levels of accuracy, relevancy, precision, and efficiency for individual organizations and needs, along with niche domain understanding.
RAG and specialized LLMs and LMMs complement each other: RAG ensures accuracy and context, while smaller LLMs optimize efficiency and domain-specific performance. Also in the year ahead, LAM development and relevance will grow, focusing on the automation of user workflows and aiming to cover the “actions” aspect missing from LLMs.
The next frontier of GenAI will bring both evolutionary change and entirely new capabilities to B2B solutions. Reshaping business processes, user experience, observability, security, and automated actions, this new AI-driven era is taking shape as we speak, and 2024 will be an inflection point in that process. Exciting times!
With AI as both catalyst and canvas for innovation, this is one of a series of blogs exploring Cisco EVP, Chief Strategy Officer, and GM of Applications Liz Centoni’s tech predictions for 2024. Her complete tech trend predictions can be found in The Year of AI Readiness, Adoption and Tech Integration ebook.
Catch the other blogs in the 2024 Tech Trends series