Generative AI: Cross the Stream Where it is Shallowest


The explosive growth of Generative AI has sparked many questions and considerations, not just within tech circles but in mainstream society at large. Both the advancement of the technology and its easy accessibility mean that virtually anyone can leverage these tools, and much of 2023 was spent discovering new ways that Generative AI could be used to solve problems or better our lives.

However, in the rush to apply this transformative technology, we should also keep in mind “Maslow’s Hammer.” Attributed to Abraham Maslow, best known for outlining a hierarchy of needs, Maslow’s Hammer highlights an over-reliance on a single tool, a concept popularly summarized as “If all you have is a hammer, everything looks like a nail.” As corporations navigate the continuing evolution of AI, we need to be certain that we’re applying it where it makes the most sense, and not just because we can. This will ultimately save time, money, and energy that can be applied to building robust tools and solutions for viable use cases.

Recognizing when to use GenAI, and when not to, is a necessary skill for full-stack, domain-specific data scientists, engineers, and executives.

Running GenAI is expensive and not without tradeoffs. As of today, careless planning of a GenAI application can lead to a negative return on investment (due to excessive operational costs), scalability and downtime issues (due to limited computing resources), and serious damage to the customer experience and brand reputation (due to the potential generation of improper content, hallucinations, mis/disinformation, misleading advice, and the like). Organizations generally struggle to control these variables, so the negative impacts and limitations must be offset by a substantial value proposition.
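
To make the ROI point concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder, not a real price or traffic number; the point is simply that per-request cost and per-request value need to be compared explicitly before a GenAI feature ships.

```python
# Back-of-the-envelope ROI check for a GenAI feature.
# All figures below are hypothetical placeholders; substitute your own
# measured token counts, provider pricing, and per-request business value.

requests_per_month = 1_000_000
tokens_per_request = 1_500            # prompt + completion, averaged
cost_per_1k_tokens = 0.002            # USD, hypothetical blended rate
value_per_request = 0.0025            # USD of incremental value each request creates

monthly_cost = requests_per_month * tokens_per_request / 1_000 * cost_per_1k_tokens
monthly_value = requests_per_month * value_per_request

print(f"Monthly cost:  ${monthly_cost:,.0f}")   # $3,000 with these numbers
print(f"Monthly value: ${monthly_value:,.0f}")  # $2,500 with these numbers
print(f"ROI: {(monthly_value - monthly_cost) / monthly_cost:+.0%}")  # negative here
```

With these illustrative numbers the feature loses money; swapping in your own measurements turns the same few lines of arithmetic into a quick go/no-go check.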

One interesting pattern that can be observed across industries is the unexpected (but welcome) side effect of going through the GenAI journey: a kind of eye-opening epiphany. How do we balance this risk and reward? What should we be looking at, and what questions should we be asking, to ensure that we're applying (or not applying) AI successfully?

Breaking free from the complexity bias: as humans, we tend to favor and give more credit to complex solutions (a tendency known as “complexity bias”). Unfortunately, this particularly applies to GenAI applications today, as we are influenced, and even “self-forced,” to use GenAI to solve every problem. Just because something seems to work doesn’t mean it’s the best or optimal solution. By following this logic, teams stand a significant chance of discovering simpler (probably non-GenAI) means of solving some of these real-world problems, or parts of them. Reaching this realization requires a humble mind that is open to the possibility that we don’t always need the most complex or expensive solution, even if it’s fancy and we can afford it.

It’s not always all or nothing: one approach that works for only a few companies, but not for most, is running GenAI all the time. If your business case is not built around selling or supporting GenAI infrastructure, then you are likely using GenAI as a tool to accomplish domain-specific goals. If so, what every player in the industry wants is to maximize value while minimizing operational costs. At the current cost of running GenAI, the most obvious way to achieve that is to avoid running it whenever possible while still delivering most of the desired value. This delicate trade-off is a smart and elegant way of tackling the problem: neither dismissing the value provided by GenAI nor using it so obsessively that it yields negative ROI. How do you achieve this? That is likely the secret sauce of your domain-specific application area.
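
One common way to realize this trade-off (a sketch, not a prescription) is a confidence-gated router: a cheap deterministic or classical path handles the bulk of requests, and the expensive GenAI call is reserved for the cases that path cannot resolve. The `cheap_classifier` and `call_genai` functions below are hypothetical stand-ins for whatever lightweight model and GenAI endpoint your application actually uses.

```python
# A minimal sketch of "don't run GenAI all the time": try a cheap path first,
# and only fall back to a GenAI call when that path is not confident.

CONFIDENCE_THRESHOLD = 0.85  # tune on held-out data for your cost/quality target

def cheap_classifier(text: str) -> tuple[str, float]:
    """Placeholder for a small, inexpensive model (rules, regex, logistic regression...)."""
    if "refund" in text.lower():
        return "billing", 0.95
    return "unknown", 0.30

def call_genai(text: str) -> str:
    """Placeholder for the expensive GenAI call (hosted API or self-hosted model)."""
    return "billing"  # imagine a real completion here

def route(text: str) -> str:
    label, confidence = cheap_classifier(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label              # most traffic should stop here: near-zero cost
    return call_genai(text)       # reserve GenAI for the genuinely hard cases

print(route("I want a refund for my subscription"))        # cheap path
print(route("Something strange happened with my account"))  # GenAI fallback
```

The threshold is the knob that encodes the trade-off: raising it sends more traffic to GenAI (higher quality, higher cost), while lowering it keeps more traffic on the cheap path.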

Ethical downsizing: GenAI models can be (and usually are) quite big. While this might be required for a few scenarios, it’s not necessary for most real-world, domain-specific applications, as several GenAI authors across the industry are finding out (e.g., Phi-2). As such, it’s important not only for your business but also for humanity that we learn to downsize and optimize GenAI models as much as possible. Doing so not only brings efficiency to your use case (cost savings, inference speed, lighter footprint, reduced risk, etc.) but also constitutes a responsible use of the technology that is respectful of human and environmental resources. Each time you save a kilowatt-hour or a few seconds of inference per user, you are explicitly contributing to a sustainable future where GenAI is leveraged to maximize value while minimizing environmental impact, and that’s something to be proud of.
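
As an illustration of what downsizing can look like in practice, the sketch below loads the small Phi-2 model mentioned above in half precision using the Hugging Face transformers library. It assumes the publicly available microsoft/phi-2 checkpoint and that `torch`, `transformers`, and `accelerate` are installed; it is a minimal example, not a production recipe.

```python
# A minimal downsizing sketch: a ~2.7B-parameter model loaded in half precision.
# Assumes `transformers`, `torch`, and `accelerate` are installed and that the
# microsoft/phi-2 checkpoint is accessible; adapt to your own smaller model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "microsoft/phi-2"  # far smaller than frontier-scale models

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision roughly halves the memory footprint
    device_map="auto",          # let accelerate place layers on the available hardware
)

prompt = "Summarize this support ticket in one sentence: my renewal was charged twice."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Smaller weights and lower precision translate into less memory, less energy, and faster inference per request, which is exactly the kind of saving described above.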

Cross the stream where it is shallowest…

The key is to be humble enough to seek the optimal path: keep an open mind and consider non-GenAI solutions to your problems first. If GenAI is truly the best way to go, then determine whether you really need to run it all the time or only some of the time. And finally, downsize as much as possible, not just for cost and speed, but out of social responsibility.

GenAI is clearly having a moment, with demonstrated potential. At the same time, being able to recognize the technical and financial downsides of GenAI is just as important for the healthy development of the industry. In the same way that we don’t reach for a hammer for every task at home, we should continuously ask: Is this problem worth GenAI? And will the value provided by this technology (when applied to my domain-specific use case) exceed its operational shortcomings? It is with this mindset that the industry will make significant and responsible progress in solving problems with a diverse but efficient set of tools. Let’s continue exploring and building the fascinating world of GenAI, without forgetting what our ultimate goals are.
