So long, point-and-click: How generative AI will redefine the user interface


What’s the difference between software and shelfware? Often, it’s the user interface and user experience (UI/UX) that come with the solution. Do people dread using an application, or do they find the experience a positive one? For too long, too many applications, particularly in the enterprise, have had terrible UI/UX. Mobile interfaces have improved matters, but with artificial intelligence (AI) stealing the limelight, UI/UX is about to get really interesting.

With AI — particularly generative AI (Gen AI) and its natural language processing — accessing and integrating services and data may be a simple verbal prompt away. AI will also alter UI/UX in other ways, such as guiding interface design up front, enabling rapid prototyping, and delivering real-time feedback on user preferences.

“I can see very soon you’re no longer logging into enterprise applications. It’s just a new type of interface,” Andy Joiner, CEO of Hyperscience, told ZDNET. “You just may ask it, ‘What’s been the trend of customer satisfaction in the last few days, and how often are incident reports?’ And it just generates a nice summary. You can just go and have a conversation.”
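In rough code, that kind of conversational entry point might look like the sketch below. Everything in it is illustrative: parseIntent and summarize are invented stand-ins for what, in a real system, would be an LLM translating the question into a query against the application’s data.

```typescript
// Illustrative sketch only: parseIntent stands in for an LLM call that
// translates a natural-language question into a query plan.

interface IncidentRecord {
  date: string;         // ISO date of the report
  satisfaction: number; // customer satisfaction score, 1-5
}

// Extract what the user is asking for (here, just a time window).
function parseIntent(prompt: string): { days: number } {
  const match = /last (\d+) days/i.exec(prompt);
  return { days: match ? Number(match[1]) : 7 };
}

// Produce the "nice summary" instead of making the user drill down.
function summarize(records: IncidentRecord[], days: number): string {
  const cutoff = Date.now() - days * 86_400_000;
  const recent = records.filter(r => new Date(r.date).getTime() >= cutoff);
  if (recent.length === 0) return `No incident reports in the last ${days} days.`;
  const avg = recent.reduce((sum, r) => sum + r.satisfaction, 0) / recent.length;
  return `${recent.length} reports in the last ${days} days; average satisfaction ${avg.toFixed(1)}/5.`;
}

const question = "How has customer satisfaction trended over the last 7 days?";
const records: IncidentRecord[] = [
  { date: new Date().toISOString(), satisfaction: 4 },
  { date: new Date(Date.now() - 2 * 86_400_000).toISOString(), satisfaction: 3 },
];
console.log(summarize(records, parseIntent(question).days));
```

The point is the shape of the interaction: a question goes in, a summary comes out, and no menu navigation happens in between.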

To this day, most applications are point-and-click interfaces, which were revolutionary 35 years ago when they replaced text-based “green-screen” terminals. Now, a new way to interact with machines is emerging.

“One would have to drill down into the applications to produce summaries themselves,” Joiner continued. “I think enterprise software is going to change forever. We will no longer interact with screens, buttons, and paths. With generative AI, you will work with a new type of experience, providing a very different type of design as we go forward.”   

Excitement is brewing across the industry as this new way of interacting with systems emerges. “The UI/UX design phase has often been a Jira-filled, time-consuming process comprised of iterations, compromises, and, ultimately, a disconnect between the designer’s vision and the developer’s execution,” Jennifer Li and Yoko Li, partners with Andreessen Horowitz, noted in a recent post.

“Anyone who’s touched Salesforce or NetSuite, for example, is familiar with the endless tabs and fields,” Li and Li pointed out. “And they’re crowding the screen more as the workflow becomes more complex. It’s not an exaggeration to say most enterprise software is a reflection of the underlying database schema, and each read and write requires its own screen asset.”

An interface made adaptive to user intentions through generative AI “could become [a] just-in-time composition of components through a simple prompt, or inferred from prior actions, rather than navigating through nested menus and fields,” Li and Li continued. “In a context-aware CRM, where the user prompts ‘input an opportunity for a lead,’ the UI could pre-select answers and redact unnecessary fields to make the workflow more streamlined.” 
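As a concrete (if simplified) illustration of that just-in-time composition, the sketch below turns the prompt into a UI spec that a front end could render. The Field and UiSpec types and the hard-coded mapping are invented; in a real system, a model would infer them from the prompt and prior actions.

```typescript
// Illustrative only: one way a front end might consume a model-generated
// "UI spec" for the prompt "input an opportunity for a lead".

type Field = { name: string; value?: string; visible: boolean };
interface UiSpec { title: string; fields: Field[] }

function composeForm(prompt: string, context: { leadName?: string }): UiSpec {
  // Stand-in for model inference: map the prompt to a pared-down form.
  if (/opportunity for a lead/i.test(prompt)) {
    return {
      title: "New Opportunity",
      fields: [
        { name: "Lead", value: context.leadName, visible: true }, // pre-selected
        { name: "Amount", visible: true },
        { name: "Close date", visible: true },
        { name: "Legal entity", visible: false }, // hidden: not needed for this flow
      ],
    };
  }
  return { title: "Untitled", fields: [] }; // fall back to a generic form
}

const spec = composeForm("input an opportunity for a lead", { leadName: "Acme Corp" });
console.log(spec.fields.filter(f => f.visible).map(f => f.name));
// -> [ "Lead", "Amount", "Close date" ]
```

Instead of one screen per database read and write, the interface becomes a function of intent and context.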

Call it “GenAI-first UX,” a term coined by Marc Seefelder, co-founder and chief creative officer at Ming Labs, in a recent Medium post. “Imagine transitioning from rigid, linear user flows to flexible, intuitive experiences,” he stated. “This matters because it’s about making technology work for us, seamlessly blending with our human aspirations, and transforming digital experiences into something truly personalized and user-centric.”  

Even when designing more traditional point-and-click screens, AI can make a difference by boosting the quality of user experiences. “AI-powered analytics tools can detect patterns in user behavior and automatically flag problematic areas in the design, such as high navigation times, difficulty using specific buttons, or frequent error messages,” states a tutorial posted at ITmagination. “These insights enable designers to spot the inconsistencies and fix them promptly, ensuring a smoother user experience.”
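The underlying idea is simple enough to sketch. The toy function below flags screens where visits are unusually slow or frequently end in errors; the event shape and thresholds are invented for illustration.

```typescript
// A toy version of the pattern detection described above: flag screens
// where users linger too long or hit frequent errors.

interface UiEvent { screen: string; durationMs: number; error: boolean }

function flagProblemScreens(events: UiEvent[]): string[] {
  const stats = new Map<string, { total: number; slow: number; errors: number }>();
  for (const e of events) {
    const s = stats.get(e.screen) ?? { total: 0, slow: 0, errors: 0 };
    s.total += 1;
    if (e.durationMs > 10_000) s.slow += 1; // >10s on one screen counts as slow
    if (e.error) s.errors += 1;
    stats.set(e.screen, s);
  }
  // Flag a screen if over 30% of visits are slow or over 20% end in an error.
  return [...stats.entries()]
    .filter(([, s]) => s.slow / s.total > 0.3 || s.errors / s.total > 0.2)
    .map(([screen]) => screen);
}
```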

Application developers and designers “spend a lot of time and energy filling in the gaps between what’s on the screen and what’s implemented in code,” Li and Li note. “The problem is exacerbated when the app has complex states and edge cases because it’s a huge undertaking for a designer to enumerate all the possibilities through screenshots. As a result, most of the issues are only caught during QA and testing and require backtracking in several stages to fix. But because GenAI technology is uniquely fit for quick prototyping and code completion, we believe it can bridge a lot of the gaps in this iteration process.”  

Looking at UI/UX from another perspective, interfaces may themselves need to be designed to help users frame prompts for Gen AI. “Prompt controls can increase the discoverability of GenAI chatbots’ features, offer inspiration, and minimize manual user input,” Feifei Liu, international UX researcher with Nielsen Norman Group, said in a recent post.
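In practice, prompt controls can be as simple as canned prompts attached to each screen and rendered as buttons or chips. The screen names and suggestions in this sketch are made up.

```typescript
// Sketch of "prompt controls": pre-written prompts surfaced as buttons so
// users can discover what a chatbot can do without typing from scratch.

interface PromptControl { label: string; prompt: string }

function suggestPrompts(screen: string): PromptControl[] {
  const byScreen: Record<string, PromptControl[]> = {
    dashboard: [
      { label: "Summarize trends", prompt: "Summarize this week's key metrics" },
      { label: "Spot anomalies", prompt: "Which metrics changed most vs. last week?" },
    ],
    incidents: [
      { label: "Recent reports", prompt: "How many incident reports came in this week?" },
    ],
  };
  return byScreen[screen] ?? []; // no suggestions for unknown screens
}

console.log(suggestPrompts("dashboard").map(c => c.label));
// -> [ "Summarize trends", "Spot anomalies" ]
```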




