How to Move from Real-Time Data to Real-Time Decisions
By Cori Land, Corporate Strategist, DataStax
In How to Measure Anything, Douglas Hubbard offers an alternative to the Oxford English Dictionary's definition of "measurement" as "the size, length, or amount of something." Hubbard defines measurement as:
“A quantitatively expressed reduction of uncertainty based on one or more observations.”
This acknowledges that the purpose of measurement is to reduce uncertainty. And the purpose of reducing uncertainty is to make better decisions. Decisions are often made under some degree of uncertainty; 100% certainty is generally impossible, not necessary, and can be prohibitively expensive in terms of cost or time. So, it follows that there’s an optimal level of data where collecting further information won’t be worth the added cost. That optimal point is when additional data will not meaningfully change your decision.
I call this point data saturation. In academic research, data saturation indicates the point where new information won’t change the results of a study. We can borrow the term for business by applying it to corporate and operational decision-making. Data saturation helps us to remember that ultimately, it’s not about the data–it’s about the decision. Which customer should we lend to? What offer should we surface? Should we replace the motor?
Data creates the context for decision-making. As you approach data saturation, your decisions become more likely to win. With anything less than data saturation, your decisions carry more uncertainty than necessary. That may be fine if the risk of getting it wrong is low. But if precision matters, you'll need more context. There are two dimensions to data saturation: breadth and depth of coverage.
- Breadth of coverage: everything that can be measured is measured.
- Depth of coverage: measurements are taken in real-time.
Perhaps you run a distribution service. You install sensors that emit location data on your entire trucking fleet. You now have full breadth of coverage of your fleet (you could also measure oil levels, tire pressure, etc.), but how deep is that coverage? Trucks emitting location data in real-time provide you with deeper coverage than batch location reads because you have continuous intelligence about where your trucks are. This depth could provide benefits such as increased accuracy of your delivery predictions, optimized routes responsive to real-time events, or increased efficiency via reduced phone calls about where a driver is.
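To make the depth point concrete, here's a minimal sketch in Python. The sensor fields and function names are hypothetical (not a specific product's API); the idea is simply that each incoming location fix refreshes the delivery ETA immediately, rather than waiting for the next batch read.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical GPS fix emitted by a truck-mounted sensor (illustrative fields).
@dataclass
class LocationFix:
    truck_id: str
    miles_remaining: float  # distance left on the planned route
    speed_mph: float
    timestamp: datetime

def updated_eta_minutes(fix: LocationFix) -> float:
    """Recompute the delivery ETA each time a new fix arrives,
    instead of waiting for the next batch location read."""
    if fix.speed_mph <= 0:
        return float("inf")  # truck is stopped; flag for follow-up
    return fix.miles_remaining / fix.speed_mph * 60

fix = LocationFix("truck-42", miles_remaining=18.0, speed_mph=45.0,
                  timestamp=datetime.now(timezone.utc))
print(f"{fix.truck_id} ETA: {updated_eta_minutes(fix):.0f} minutes")
```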
Recalling Hubbard’s point that the purpose of measurement is to make better decisions, your breadth and depth choices should be made based on the decisions you need to make. Full-breadth, real-time data saturation matters if you want to make well-calibrated, real-time decisions–and this is becoming an increasingly important capability.
Making fast, high-quality decisions is critical as companies race to differentiate on operational speed. Think of Popcorn delivery (whose tagline is "Faster than 911") or MyBank's one-second loan approval process. Speed alone isn't enough, though: companies also need to send the right products and make the right credit decisions, lickety-split. Getting there requires real-time data feeds from the sources that contextually matter.
Streaming decision automation
Feeling this need for speed, companies are investing in real-time data infrastructure, like data streaming and databases built for fast reads and writes. Yet a lot of data still ends up on a dashboard. Even with full data saturation, if a human is still sense-making before making a decision, you're deciding far more slowly than your operations move. This puts you at risk of falling behind competitors who can act on data as it comes in. How can you change your decision architecture to become more proactive rather than reactive? Streaming decision automation.
Streaming decision automation adds value to real-time data by combining it with other data streams, applying algorithms to the combined stream, and feeding a decision engine capable of making decisions in real-time without a human in the loop. It is machine learning running live in production, activating real-time data into a customer or business outcome. IDC projects streaming decision automation to grow into a $2.1 billion market by 2025 at a CAGR of 39.5%. That's fast.
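In code, the pattern looks roughly like the sketch below. The stream source, feature lookup, and scoring function are hypothetical stand-ins (not a specific product's API); the point is the shape of the loop: enrich each event with other data, score it, decide, and act, with no human in between.

```python
from typing import Iterable, Dict

# Hypothetical stand-ins: in practice these would be a stream consumer
# (e.g. a message-bus topic), a feature-store lookup, and a model served
# in production. None of these names come from a specific product.
def incoming_events() -> Iterable[Dict]:
    yield {"customer_id": "c-17", "event": "checkout_started", "amount": 1240.0}

def historical_features(customer_id: str) -> Dict:
    return {"avg_order": 310.0, "on_time_payment_rate": 0.98}

def score(features: Dict) -> float:
    # Placeholder for the algorithm applied to the combined stream.
    return 0.91 if features["on_time_payment_rate"] > 0.95 else 0.40

def act(customer_id: str, decision: str) -> None:
    print(f"{customer_id}: {decision}")  # e.g. publish to a downstream topic

# The streaming decision loop: enrich each event, score it, decide, act --
# no human in the loop.
for event in incoming_events():
    features = {**event, **historical_features(event["customer_id"])}
    decision = "approve" if score(features) >= 0.8 else "refer"
    act(event["customer_id"], decision)
```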
Walmart’s “Customer Choice” feature is a great example of streaming decision automation and its benefits. The feature uses historical and real-time data to surface good substitutes for items that a customer is actively shopping for online yet are predicted to be out of stock. This real-time feature helped Walmart double its online sales in the second quarter of 2020—and is likely helping the retailer even more amid today’s supply chain problems.
Working toward decision saturation
Of course, you won’t infuse all of your decisions with AI-driven automation right away—time and resources are too scarce for that, and in some cases it might not be called for. Instead, you’ve got a portfolio of decisions to manage: processes that don’t need to be monitored or decided in real-time; processes that should be monitored in real-time but aren’t yet; processes monitored in real-time but with human decision makers; and processes monitored in real-time, feeding automated decisions.
There's a spectrum of decision automation, too. The most basic automation uses "if-then" business logic. For example, a credit card company might have an automated decision rule like: if the customer's credit score is over 750 and their salary is over $100K, then approve their credit limit increase request. More advanced decision automation relies on predictive analytics that can take many more factors into account, like age, payment history, and amount of savings in our credit card example.
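To make the contrast concrete, here's a hedged sketch: the first function is the plain if-then rule from the example above; the second hands the same request to a predictive model, represented here by a stand-in scoring function. The thresholds and field names are illustrative only, not actual underwriting policy.

```python
def rule_based_approval(credit_score: int, salary: float) -> bool:
    # The basic "if-then" business logic from the example above.
    return credit_score > 750 and salary > 100_000

def model_based_approval(applicant: dict, predict_default_risk) -> bool:
    # predict_default_risk stands in for a trained model that can weigh
    # age, payment history, savings, and other factors together.
    return predict_default_risk(applicant) < 0.05

applicant = {"credit_score": 720, "salary": 95_000,
             "on_time_payment_rate": 0.99, "savings": 40_000}

# Illustrative stand-in model: a long on-time payment history means low risk.
toy_model = lambda a: 0.02 if a["on_time_payment_rate"] > 0.95 else 0.20

print(rule_based_approval(applicant["credit_score"], applicant["salary"]))  # False
print(model_based_approval(applicant, toy_model))                           # True
```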
As the model becomes more sophisticated, it might become more accurate, but it will also take longer to produce a decision. In some cases that's okay (maybe the credit card company sends credit limit increase offer emails on a rolling daily basis). But slow response times leave value on the table.
Consumers increasingly expect immediately responsive, seamless experiences that are personalized to their unique needs. That requires real-time decision automation. Extending our credit increase example, imagine a consumer is browsing flights to Portugal in their credit card’s travel booking portal. Any of the flights would put the shopper over their credit limit, so they don’t complete the transaction. The customer is frustrated, and the business loses revenue. Now imagine streaming decision automation in play: the company immediately sends a pop-up notice that the customer’s credit limit has been increased, enabling the customer to joyfully book their flights then and there.
This was possible because the streaming decision automation paired historical data (payment history, salary, for example) with real-time data (browsing behavior, calculated fare price, and account balance) to detect that the customer is credit-hungry and deem them credit-worthy. Streaming decision automation created a win-win for the customer and the business.
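A rough sketch of how that pairing could look in code is below. The profile fields, event fields, and thresholds are hypothetical, meant only to show historical and real-time signals meeting in a single in-the-moment decision.

```python
def should_offer_limit_increase(profile: dict, browse_event: dict) -> bool:
    """Pair historical data (payment history, credit limit) with real-time
    data (the fare being browsed, current balance) to decide in the moment."""
    headroom = profile["credit_limit"] - browse_event["current_balance"]
    would_exceed_limit = browse_event["fare_price"] > headroom
    good_standing = profile["on_time_payment_rate"] > 0.95
    return would_exceed_limit and good_standing

profile = {"credit_limit": 5_000, "salary": 120_000, "on_time_payment_rate": 0.99}
browse_event = {"fare_price": 1_800, "current_balance": 3_600}

if should_offer_limit_increase(profile, browse_event):
    # In production this would trigger the real-time pop-up notice.
    print("Offer: your credit limit has been increased -- book your flight.")
```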
And that’s the frontier!
As you begin your journey into developing real-time data capabilities, you might work backwards by intentionally evaluating where you can create more value with real-time decisions–and then determine how to build the data feeds to get there. You’ll know you’re done when you have reached decision saturation–that optimal point where activating more decisions with real-time data won’t improve your corporate outcomes. By then, I’m sure we’ll have a new frontier to face.
Learn more about DataStax here.
About Cori Land:
Cori is a corporate strategist at DataStax. They combine their background in economic analysis, corporate innovation, and organizational design to help companies find a new way forward out of ambiguity and inertia. Cori specializes in surfacing empirical data and empathetic listening to help leaders find new confidence in their strategic decisions. Cori holds an MBA from the Haas School of Business at the University of California, Berkeley, and lives in San Francisco, California.