What it’s going to take for advanced AI to reshape the enterprise landscape
According to Infosys research, data and artificial intelligence (AI) could generate $467 billion in incremental profits worldwide and become the cornerstone of enterprises gaining a competitive edge.
But while opportunities to use AI are very real – and ChatGPT’s democratisation is accelerating generative AI test-and-learn faster than QR code adoption during the Covid pandemic – the utopia of substantial business wins through autonomous AI is a fair way off. Getting there requires process and operational transformation, new levels of data governance and accountability, business and IT collaboration, and customer and stakeholder trust.
The reality is many organisations still struggle with the data and analytics foundations required to progress down an advanced AI path. Infosys research found 63 per cent of AI models function at basic capability only, are driven by humans, and often fall short on data verification, data practices and data strategies. It's not surprising, then, that only one in four practitioners is highly satisfied with their data and AI tools so far.
This status quo can be partly explained by the fact that eight in 10 organisations only began their AI journey in the last four years. Just 15 per cent have achieved what's described as an 'evolved' AI state, where systems can find causes, act on recommendations and refine their own performance without oversight.
Then there are the trust and accuracy considerations around AI utilisation to contend with. Gartner forecast that 85 per cent of all AI projects by 2022 would wind up with erroneous outcomes due to mistakes, errors and bias. One in three companies, according to Infosys, are using data processes that increase the risk of bias in AI right now.
Ethical use of AI is therefore an increasingly important movement, led by government, industry groups and thought leaders as this disruptive technology advances. It's for these reasons the Australian Government introduced its AI Ethics Principles framework, which followed an AI ethics trial in 2019 supported by brands such as National Australia Bank and Telstra.
Yet even with all these potential inhibitors, it’s clear the appetite for AI is growing and spend is increasing with it.
So what can IT leaders and their teams do now to take AI out of the data science realm, and into practical business applications and innovation pipelines? What data governance, operational and ethical considerations must we factor in? And what human oversight is required?
It’s these questions technology and transformation leaders from finance, education and retail sectors explored during a panel session at the Infosys APAC Confluence event. Here’s what we discovered.
Operational efficiency is the no-brainer use case for AI
While panellists agreed use cases for AI could well be endless and societally positive, the ones gaining most favour right now centre on operational efficiency.
“We are seeing AI drive a lot deeper into the organisation around how we can revolutionise our business processes, change how we run our organisation, and all add that secret sauce from a data and analytics perspective to improve customer outcomes,” said ANZ Bank CIO for Institutional Banking and Markets, Peter Barrass.
One example is meeting legislative requirements to monitor the communications traders generate across 23 countries, where AI is successfully used to analyse, interpret and monitor for fraudulent activity at global scale. Document crunching and digitisation, and chatbots, are other examples.
Across retail and logistics sectors, nearly three in 10 retailers are actively adopting AI with strong business impact, said Infosys APAC regional head for Consumer, Retail and Logistics, Andal Alwan. While personalisation is often a headline item, AI is also increasing operational efficiencies and frictionless experiences across the end-to-end supply chain.
Cyber security is another favoured use case for AI across multiple sectors, once again tied to risk mitigation and governance imperatives.
Advancing AI can’t be done without a policy and process rethink
But realising advanced AI isn't only a technical or data feat. It requires transformation at a systemic, operational and cultural level.
Just take the explosion of AI tools accessible to students from a learning perspective. With mass adoption comes the need for education institutions such as the Melbourne Archdiocese Catholic Schools (MACS) to actively build policies and positions around AI use. One consideration is how the open accessibility of such tools can influence students. Another is protecting academic integrity.
Then it's about ensuring leadership is clear from an education system perspective, so there's consistency across MACS' 300 schools in how AI is utilised in learning. "We need to educate our teachers to be able to think about how their students will use AI and how they can maximise the learning for individual students, taking on-board some of these types of tools available," MACS chief technology and transformation officer, Vicki Russell, said.
Elevating data governance and sharing is critical
Simultaneously, data governance and practices need refinement. Alwan outlined two dimensions to the data strategy debate: intra-organisation and inter-organisation.
“Intra-organisation is about how I govern the data: What data I collect, why I’m collecting it and how am I protecting and using it,” she explained. “Then there’s inter-organisation, or between retailers, producers and logistic firms, for instance. Collaboration and sharing of data is very important. Unless there is visibility end-to-end of the supply chain, a retailer isn’t going to know what’s available and when it’s going to be arriving. All of this requires huge amounts of data, which means we’re going to need AI for scaling and to predict trends too.”
A further area of data collaboration is between retailers and consumers, which Alwan referred to as “autonomous supply chains”. “It’s about understanding demand signals from the point of consumption, be it online or physical, then translating that in real time to organisation systems to get more security of planning and supply chain. That’s another area of AI maturity we’re seeing evolving.”
Infosys Knowledge Institute’s Data + AI Radar found organisations wanting to realise business outcomes from advanced AI must develop data practices that encourage sharing and position data as currency.
But even as the financial sector works to pursue data sharing through the Open Banking regime, Barrass reflected on the need to protect the information and privacy of customers and be deliberate about the value data has to both organisation and customer.
“In the world of data, you have to remember you have multiple stakeholders,” he commented. “The customer and person who owns the data and who the data is associated with is really the curator of that information, and should have right to where it’s shared and how it’s shared. Corporates like banks have a responsibility to customers to enable that. That needs to be wrapped up in your data strategy.”
Internally, utilising the wealth of learning and data points MACS has been capturing is a critical foundation for using AI successfully.
“The data and knowledge a business has about itself before it enters into an AI space is really important in that maturity curve,” Russell said. “Having great data and knowing what you have to a certain extent before you jump into your AI body of work or activities is important. But I also think AI can help us leapfrog. There’s knowing enough but also being open to what you might discover along that journey.”
Building trust with customers around AI still needs human oversight
What’s clear is the onus is on organisations to structurally address trust and bias issues, especially as they lean towards allowing AI to generate outcomes for customers autonomously. Ethical use of data and trust in what and how information is used must come into play. As a result, parallel human oversight of what the machine is doing to ensure outcomes are accurate and ethical remains critical.
“Trust in the source of information and really clear ownership of that information is really important, so there’s clear accountability in the organisational structure for who is responsible for maintaining a piece of information driving customer decision outcomes,” said Barrass. “Then over time, as this matures, we potentially could have two sets of AI tools looking at the same problem sets and validating each other’s outcomes based on different data sets. So you at least get some validation of one set of information drivers.”
Transparency of AI outcomes is another critical element with customers if trust in AI is to evolve over time. This again comes back to stronger collaboration with data owners and stakeholders, an ability to detail the data points driving an AI-based outcome, and explaining why a customer got the result they did.
“It’s very important to be conscious of the bias and how you balance and provide vast sets of data that constantly work against the bias and correct it,” Alwan added. “That’s going to be key for the success of AI in the business world.”
We all need to work with ChatGPT, not against it
Even as we strive for responsible AI use, ChatGPT is accelerating generative AI adoption at an unprecedented rate. Test cases are emerging in everything from architectural design to writing poetry, drafting legal statements of claim and developing software code. Panellists agreed we're only scratching the surface of the use cases this generative AI can tackle.
In banking, it's about experimenting in a controlled way and understanding the 'why', so generative AI is applied to achieve solid business outcomes, Barrass said. In the world of retail and consumer-facing industries, conversational commerce is already front and centre, and ChatGPT is set to accelerate this further, Alwan said.
For Russell, the most important thing is ensuring the future generation learns how to harness openly accessible AI tools: prompting them appropriately, drawing out quality information, then referencing it. In other words, education evolves and works with them.
It’s a good lesson for us all.