Carl Fransman

The problem with AI acceptance

Updated: Nov 1, 2020

Artificial Intelligence - mostly Machine Learning, really - is omnipresent. Yet when speaking with corporate leaders, adoption varies hugely. New-tech companies (Google, Facebook, Alibaba, etc.) adopted AI off the bat, as AI is an essential driver of their economic model.


Traditional industries, however, have been reluctant to accept AI. The plethora of startups in this field, as well as traditional IT providers, often get stuck in POC or pilot projects: even when AI proves able to deliver good results, it is often not operationalised. How come?

We first need to look at what constitutes a good result. AI performance is typically measured in accuracy and in precision (1). Accuracy describes how close you come to the target, whereas precision describes how tightly grouped the forecasts are. So when people speak of forecast bias, they refer, for instance, to forecasts that are systematically skewed off-target (precise, but not accurate). Ideally, AI output should be both accurate and precise.
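To make the distinction tangible, here is a minimal sketch (with made-up numbers, not taken from any real forecast) in which the mean error captures bias - an accuracy problem - and the spread of the errors captures precision:

```python
import numpy as np

# Hypothetical demand forecasts versus actual demand (illustrative numbers only).
actuals = np.array([100, 102, 98, 101, 99], dtype=float)
forecasts = np.array([110, 111, 109, 112, 110], dtype=float)

errors = forecasts - actuals

bias = errors.mean()    # systematic offset -> accuracy problem
spread = errors.std()   # scatter around that offset -> precision problem

print(f"bias (mean error): {bias:+.1f}")   # large -> not accurate
print(f"spread (std dev):  {spread:.1f}")  # small -> precise
# These forecasts are tightly grouped (precise) but consistently
# about 10 units too high (biased, i.e. not accurate).
```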


But the story doesn't end there: initially, AI provided black-box answers. The outcome was verifiable ex-post, so one could determine how well the AI performed. But that performance was only verified for that specific case and should, in theory, be re-verified after each run. Without the ability to understand how the machine came to a certain conclusion, the result was hard to trust. There, we've said it: TRUST.


The first step towards building trust was "explainable AI" (2). Here, alongside the outcome (a forecast, an anomaly detection, etc.), one is able to dig into which features (input data and/or derivatives) led to the output. This allows experts to check whether or not they'd trust that outcome.
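As an illustration of what such an explanation can look like - a sketch on synthetic data, not the specific tooling behind the linked article - one common technique is permutation importance: shuffle one feature at a time and measure how much the model's performance degrades.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical demand data: promotions and temperature drive sales,
# the third feature is pure noise.
n = 500
promo = rng.integers(0, 2, n)
temperature = rng.normal(20, 5, n)
noise_feature = rng.normal(0, 1, n)
sales = 50 + 30 * promo + 2 * temperature + rng.normal(0, 3, n)

X = np.column_stack([promo, temperature, noise_feature])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, sales)

# Permutation importance: how much does shuffling each feature hurt the model?
result = permutation_importance(model, X, sales, n_repeats=10, random_state=0)
for name, score in zip(["promo", "temperature", "noise"], result.importances_mean):
    print(f"{name:12s} importance: {score:.3f}")
# An expert can now check whether the drivers the model relies on make sense.
```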


While important, this doesn't yet solve the problem of true understanding: AI tends to look for correlations in the data, and correlations can be misleading. For example, research has shown that (I believe it was in Texas) murder rates were up when ice cream sales rose. One understands there is no causal relationship (unless the ice cream was rigged with a "killer drug"), yet one can explain the correlation: ice cream sales rise when it's very hot, and the heat may shorten the fuse of hot-tempered criminals, which leads to an increase in crime. While this one is explainable, many such correlations are far less obvious. Tyler Vigen has a great collection of them called Spurious Correlations (3).
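The ice cream example is easy to reproduce. In the toy simulation below (illustrative numbers only), ice cream sales and crime never influence each other, yet they correlate strongly because a common driver - temperature, the assumed confounder - moves them both.

```python
import numpy as np

rng = np.random.default_rng(42)

# Temperature is the hidden common cause (confounder).
temperature = rng.normal(25, 8, 1000)

# Ice cream sales and crime both respond to heat, never to each other.
ice_cream_sales = 20 + 3.0 * temperature + rng.normal(0, 5, 1000)
crime_rate = 5 + 0.4 * temperature + rng.normal(0, 2, 1000)

corr = np.corrcoef(ice_cream_sales, crime_rate)[0, 1]
print(f"correlation(ice cream, crime) = {corr:.2f}")  # strongly positive
# The correlation is real, but the causal arrows run from temperature
# to both variables, not from ice cream to crime.
```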

Tyler Vigen's collection holds many such (funny) correlations, and one understands that in complex situations it would be hard to trust AI outcomes without fully understanding the underlying mechanisms. There's really only one path that solves this: AI based on causality. If AI models can be built to determine and then exploit causal relationships, one can not only make good predictions or detect anomalies, but also run simulations. Understanding causality is core to how we function as humans; therefore Causal AI will eventually lead to AI acceptance. (4)
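To sketch what that means in practice (a toy structural causal model, not any particular product), encode the structure "temperature drives ice cream sales, temperature drives crime" and then simulate an intervention in Pearl's do-notation: forcing ice cream sales up or down leaves crime untouched, something a purely correlational model could never tell you.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(n, do_ice_cream=None):
    """Tiny structural causal model: temperature -> ice cream, temperature -> crime."""
    temperature = rng.normal(25, 8, n)
    ice_cream = 20 + 3.0 * temperature + rng.normal(0, 5, n)
    if do_ice_cream is not None:       # intervention: override the ice cream mechanism
        ice_cream = np.full(n, float(do_ice_cream))
    crime = 5 + 0.4 * temperature + rng.normal(0, 2, n)
    return ice_cream, crime

# Observational world: ice cream and crime move together.
ice, crime = simulate(10_000)
print(f"observational corr: {np.corrcoef(ice, crime)[0, 1]:.2f}")

# Interventional world: force ice cream sales to fixed values.
_, crime_low = simulate(10_000, do_ice_cream=10)
_, crime_high = simulate(10_000, do_ice_cream=200)
print(f"mean crime under do(ice cream=10):  {np.mean(crime_low):.2f}")
print(f"mean crime under do(ice cream=200): {np.mean(crime_high):.2f}")
# Intervening on ice cream leaves crime unchanged: no causal effect,
# despite the strong observational correlation.
```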


Carl Fransman is an MBA Solvay 2004 graduate.

 

Useful links:

  1. https://www.acondasystems.com/wp-siteone/index.php/2018/11/23/the-debate-about-forecast-accuracy/

  2. https://www.acondasystems.com/wp-siteone/index.php/2020/02/27/explainable-analytics/

  3. https://www.tylervigen.com/spurious-correlations

  4. Check out The Book of Why by Judea Pearl: http://bayes.cs.ucla.edu/WHY/
