We are now more reliant than ever on digital systems to manage and control physical systems. We enjoy using systems like Ring to manage the front door and Nest to manage the HVAC system. Even my irrigation system has a digital interface and an app that makes turning on the sprinklers a breeze. Companies, of course, rely on digital systems and rules to make decisions in increasingly impactful ways. When digital systems find anomalies, like fraud, the recovery process kicks in automatically, often without a sanity check or statistical test to examine whether the anomaly is even believable or should be treated as actionable. The costs of such errors can be staggering.
As we become reliant on AI and generative AI systems to conduct investigations and then take action on their findings, we are losing a powerful set of skills: incorporating confirming or conditional information. Indeed, an AI rarely stops to ask whether the input information is correct, or whether conditional information might make the input unbelievable (for some valid reason).
Consider the case of Hertz falsely reporting hundreds of its customers to law enforcement for automobile theft, with many customers being arrested at gunpoint. See the disturbing coverage at NPR. Public disclosures suggested that Hertz had one digital record saying a car was missing, yet other internal systems were charging the same customer for rental-agreement extensions, suggesting the car was out on a valid extension. It is easy to see how a rules engine or an AI processing the missing-car data could conclude that a car was stolen when, in reality, it was still being used by Hertz or its customer. The error comes from not leveraging confirming or conditional information. It cost some 364 customers a great deal of suffering and pain and resulted in Hertz paying approximately $168 million in damages to those customers. At one point, Hertz faced pending claims of roughly half a billion dollars in these cases.
How can something so smart be so dumb?
Just consider the Hertz data: “Of the company’s 25 million rental transactions, 0.014% are reported stolen each year, or about 3,500, the company has said.” That suggests about 10 customers a day were committing automobile theft. It turns out Hertz had a huge false-positive rate. Statisticians have long worried about false positives, and rightly so: when the penalty for being wrong is high, the risk of a false positive is unacceptable. In the case of Hertz, simply looking at another internal system would have provided valuable information on the location and status of the car. Vehicles can also be located by GPS and even tracked via cameras and physical record locators. All of these are examples of conditional or confirming information that could explain the missing-car record.
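To see why the base rate matters, here is a minimal sketch in Python. The 25 million rentals and roughly 3,500 reports come from the figures quoted above; the true theft rate and the false-positive rate are hypothetical assumptions, chosen only to illustrate how a tiny error rate can swamp a rare event:

```python
# Base-rate sketch. The rental volume and report count come from the
# figures quoted above; the theft and false-positive rates below are
# hypothetical, for illustration only.
rentals_per_year = 25_000_000
reported_stolen = 3_500

base_rate = reported_stolen / rentals_per_year
print(f"Implied theft-report rate: {base_rate:.3%}")    # ~0.014%
print(f"Reports per day: {reported_stolen / 365:.1f}")  # ~9.6

# Hypothetical: suppose actual theft is even rarer, and the reporting
# pipeline mislabels a small fraction of legitimate rentals.
true_theft_rate = 0.00005     # 1 in 20,000 rentals actually stolen
false_positive_rate = 0.0001  # 1 in 10,000 legitimate rentals flagged

true_positives = rentals_per_year * true_theft_rate
false_positives = rentals_per_year * (1 - true_theft_rate) * false_positive_rate
precision = true_positives / (true_positives + false_positives)
print(f"Share of 'stolen' flags that are real: {precision:.0%}")  # ~33%
```

Under these assumed rates, two out of every three theft flags would be wrong, even though the flagging system errs on only one in ten thousand legitimate rentals.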
In a world that focuses an AI on looking for discrepancies, it is critical to ask whether the input data can be believed. In statistics, we call this conditional probability assessment. Human minds are surprisingly good at weighing complex conditional information. For instance, we learn that travel on rainy days takes more time. Such knowledge for predicting travel times is built over years of experience, of course. When a loved one is late on a rainy day, we don’t immediately assume they were abducted; we explain the late arrival with a longer travel time due to the rain. This is a complex analysis that uses conditional information to explain the data with the most likely cause. Of course, with no information about the weather, a long delay in someone coming home might look like a missing-person case or worse. Connecting the conditional information is critical. An AI or digital rules engine looks at a system and sees a missing car record at Hertz, but it might not consider other conditional sources. Looking at conditional data might have found the cars and saved a great deal of money, and it would likely have been inexpensive to do.
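For readers who want to see the mechanics, here is a minimal Bayesian sketch of that reasoning in Python. All of the likelihoods are hypothetical assumptions; the point is how little a single anomalous record supports “stolen” once a contradictory conditional signal, such as an active, paid extension, is folded in:

```python
# Bayes' rule sketch: how much should a "missing car" record move our
# belief that the car is stolen? All likelihoods are hypothetical.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' rule for hypothesis H and evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior_stolen = 0.00014  # ~0.014% base rate, from the figure quoted above

# Evidence 1: the inventory system shows the car as missing.
p = posterior(prior_stolen,
              p_e_given_h=0.95,       # stolen cars usually show as missing
              p_e_given_not_h=0.001)  # but record errors happen too
print(f"P(stolen | missing record) = {p:.1%}")  # ~11.7%

# Evidence 2: billing shows an active, paid rental extension --
# conditional information that is very unlikely if the car were stolen.
p = posterior(p,
              p_e_given_h=0.01,       # thieves rarely pay for extensions
              p_e_given_not_h=0.90)   # legitimate renters usually do
print(f"P(stolen | missing record, paid extension) = {p:.2%}")  # ~0.15%
```

Under these assumed likelihoods, the second signal collapses the probability of theft from roughly 12% to well under 1%, which is exactly the kind of update a human makes instinctively in the rain example.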
So, as you build the next generation of AI systems in your company, consider some key points:
Role of Conditional Information: Question whether the AI’s explanation is reasonable given all of the conditional information available.
Be Careful with AI and One-Way-Door Decisions: Double check, then triple check, whether you are entering a “one-way door.” Calling the police to arrest a customer carries a steep penalty that is hard to undo. It is a one-way door. Is your AI-generated decision really that definitive? Check everything (manually) when decisions are not easily reversible.
Examine What Is Plausible First: Ask what is most likely. When confronted with extreme results or anomalous data, don’t jump to extreme explanations; remember that average processes can still produce extreme results (and errors). What is the most likely cause? Investigate that first.
Build Feedback Loops: Install confirmation steps and feedback loops to investigate anomalous data. Imagine if Hertz had sent its customers a message asking them to confirm their location and possession of the car, or had even used GPS signals to locate the cars. That little step would have added much-needed information. (A sketch of such a confirmation gate follows this list.)
Take Matters into Your Own Hands: When renting a car, take a picture with a time stamp and geolocation. Confirming that a car has been turned in is now a critical customer step that requires conditional information.
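Putting these points together, here is a minimal sketch, in Python, of what such a confirmation gate might look like. The data sources, field names, and actions are all hypothetical; the design point is that conditional sources are consulted first and the irreversible action, a theft report, is never taken without human review:

```python
# Hypothetical confirmation gate for anomaly-driven actions. Consult
# conditional information first; keep the one-way-door action (reporting
# a theft) behind human review.
from dataclasses import dataclass

@dataclass
class CarStatus:
    missing_in_inventory: bool    # the anomaly that triggered the check
    active_paid_extension: bool   # billing system (conditional source 1)
    gps_at_known_location: bool   # telematics (conditional source 2)
    confirmation_requested: bool  # has the feedback loop been triggered?
    customer_confirmed: bool      # customer replied "I still have the car"

def next_action(status: CarStatus) -> str:
    if not status.missing_in_inventory:
        return "no_action"
    # Any confirming signal explains the anomaly without theft.
    if (status.active_paid_extension or status.gps_at_known_location
            or status.customer_confirmed):
        return "reconcile_records"  # fix the data, not the customer
    # Unexplained anomaly: close the feedback loop before escalating.
    if not status.confirmation_requested:
        return "send_customer_confirmation_request"
    # Still unexplained after follow-up: a human decides, not the system.
    return "escalate_to_manual_review"

# A Hertz-like case: the inventory record is missing, but billing shows
# an active, paid extension.
flag = CarStatus(missing_in_inventory=True, active_paid_extension=True,
                 gps_at_known_location=False, confirmation_requested=False,
                 customer_confirmed=False)
print(next_action(flag))  # reconcile_records
```

In this sketch, a missing inventory record paired with an active paid extension produces a data reconciliation, not a police report.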
About Russell Walker, Ph.D.
Professor Russell Walker helps companies develop strategies to manage risk and harness value through analytics and Big Data. He is Associate Teaching Professor at the Foster School of Business at the University of Washington. He has worked with many professional sports teams and leading marketing organizations through the Analytics Consulting Lab, an experiential class that he founded and leads at the University of Washington’s Foster School of Business.
His most recent book, the award-winning From Big Data to Big Profits: Success with Data and Analytics (Oxford University Press, 2015), explores how firms can best monetize Big Data through digital strategies. He is also the author of Winning with Risk Management (World Scientific Publishing, 2013), which examines the principles and practice of risk management through business case studies.
Russell Walker can be reached at:
[email protected]
@RussWalker1776
russellwalkerphd.com