The CIO’s call to action on gen AI

Dr. Palmer: The challenge in front of our education system today is that it’s largely based upon a series of tests that are designed to measure things that aren’t necessarily going to create value in our next workforce. Those tests and the requirements for those tests go all the way up to the federal level from a policymaking perspective. Adjusting what we are testing and what we are measuring is going to be absolutely critical so that teachers can be successful, because today they are required to stay within a very specific teaching framework that is largely based on these tests that were designed to create a workforce of the past.

We get what we measure. And right now, we’re measuring the wrong things to prepare our workforce for changes that are actively happening. So to me, that’s the starting place: Let’s talk about what we’re going to measure and what success looks like and then adjust all of those things appropriately so that we’re lining up the entire system to create a future successful workforce. Because if we don’t change the measures, we can’t change the behaviors and, therefore, we can’t change the outcomes.

Ransley: I completely agree that outcomes, how we evaluate success, and what types of incentives we put in place need to be revisited. And, to build on that, we also need to make sure that we are clear on the fundamentals that we still need to keep teaching, regardless of what happens with jobs in the future. That fundamental knowledge that future generations still need to master is going to accelerate and ground their learning. If we skip that step, the decision-making is not going to be as high quality or as viable. So we need to define the knowledge and skills that are non-negotiable, that every student needs to master before they move on to the more advanced skills that will future-proof them.

Why do you think AI should fall under the CIO’s remit?

Ransley: As a CIO, and really as any technology leader, we have the broadest perspective in the entire organization: not only can we see what is happening in every department across the enterprise, but we also have the ability to act on it. That puts us in a unique position to see the opportunities any of these technologies present, as well as the risks, because the cybersecurity mindset naturally predisposes us to thinking about both the risks and the benefits of technology solutions.

CIOs also have tremendous experience running projects and initiatives from a project management perspective, using a proper business case and thinking through risk mitigation. Typical IT organizations already have embedded processes for disciplined execution, even in experiments, while actively managing the risks. Because there are two ways this can go wrong: You can go too fast and not think about the risk because the AI implementation is done outside of the formal methodology (and formal doesn't mean slow, necessarily; it just means deliberate). Or you get slowed down and bogged down by the process altogether. Neither of those is the right path to take.

As a whole, I think it fits beautifully within the CIO’s remit because of the breadth of perspective that the CIO has and the discipline in the organization that exists to make things run fast, but in a deliberate and careful fashion.

What are your thoughts on the creation of a new Chief AI Officer role?

Dr. Palmer: I think of artificial intelligence as another technology, so I’d ask this: Do we need a Chief Internet Officer role? Because I see the technology as playing that same type of role. It needs to be embedded in your overall technology strategy. It needs to be used as a tool by your business units. So I’m not seeing a huge need for this particular title. I think there are some specific creative industries where it makes sense, but on the whole, I’m not a fan.

Ransley: A lot of our existing technologies in IT are going to be AI-enabled, so AI will be embedded in all of the existing tool sets that we have. Name a vendor, and they probably have an AI strategy that has already been implemented or will be implemented in the short term. So splitting that out into a separate silo is probably not the best approach.

Dr. Palmer: To that point, what I know absolutely, not only from my career path but from my research as well, is that silos are the exact opposite of what we need as we move into this next evolution of the technology journey.

From a practical perspective, what does the generative AI playbook look like? What are some of the things every company should have in place?

Ransley: We talked about it a bit in the podcast, but to expand on that, they need to:

  • Establish a cross-functional governance review board to assess the impact of any generative AI use cases, whether as a standalone board or part of existing governance
  • Set up clear generative AI use policies on when it can or cannot be used and with which data
  • Have an AI literacy and upskilling program to incorporate generative AI education into InfoSec training or as a standalone, if needed
  • Sanitize any sensitive data before training any of the models
  • Incorporate generative AI into current risk assessment capabilities or implement new ones
  • Revisit generative AI risks, policies, and standards continuously, because things are continually changing
  • Implement a robust data governance process in the organization
  • Review evolving regulations, because changes are happening that could impact companies very quickly, and make sure they stay compliant as new regulations come in

These are the things every company needs to do now, because this is happening now.
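Anna's point about sanitizing sensitive data before it reaches a model can be sketched in a few lines. This is a minimal illustration under stated assumptions, not a production approach: the pattern names and regexes here are simplified placeholders, and a real pipeline would rely on a dedicated PII-detection service rather than hand-rolled patterns.

```python
import re

# Illustrative redaction patterns (simplified assumptions, not exhaustive):
# a real deployment would use a purpose-built PII-detection tool.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace each match with a labeled placeholder before training use."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(sanitize(record))  # prints "Contact Jane at [EMAIL] or [PHONE]."
```

The design point is that redaction happens as a deterministic preprocessing step, so the same policy can be applied (and audited) across every data source feeding a model.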

Dr. Palmer: I want to reiterate her last point about the legal environment. The complexities in the United States are growing extensively. There were 177 pieces of state-level legislation on the books just in this particular legislative cycle. So anybody who is creating a product or service in the United States is going to have to know all of these different state-level laws in order to comply. Then we've got the federal-level regulations, and the laws on top of that. And if you operate multinationally, you've got to deal with all of the laws from other countries.

There’s so much complexity in the legal and regulatory environment, and to Anna’s point, it is quite literally changing on a daily basis. The ability to continue to innovate within this complexity in such a way that you are not exposing your organization to risk — this is a challenging environment and it’s going to get more challenging over the short term and long term. So we need to make sure we have somebody that is really looking at that and staying abreast of it specific to their business needs.

Last up, words and messages matter when it comes to how people perceive things. Is there a better way to brand this new technology?

Ransley: In the beginning, I heard generative AI being called creative AI. It didn't stick, but I thought it very much described the essence of what it is: gen AI shines as a way of sparking creativity where it may have been lacking before. I really enjoyed it being called that. Maybe we can have a bit of a resurgence.

Dr. Palmer: I love that creative AI term. From my perspective, the concept of humans plus artificial intelligence is the absolute foundation of the way we successfully move forward with AI. We can’t think about AI replacing humans. We’ve got to think about, how do we partner with AI to bring out the very best that machines can bring to the table and the very best that humans can bring to the table? I like to think about that as augmented intelligence. So it’s still AI, but in this case, it’s all about that partnership between humans and machines.

For more practical insights and advice on generative AI from Dr. Lisa Palmer and Anna Ransley, tune in to the Tech Whisperers podcast.


