AI/Machine Learning Scorecard: Correlation Pass | Causation Fail

[Figure: The Ladder of Causation]

AI/Machine Learning at Level-1

To add insult to injury, in the graphic above Judea equates the current level of AI to an owl that can track the trajectory of a fleeing mouse. This is the first level, where learning is based purely on association.
Above this first level lie intervention (level 2) and counterfactuals (level 3). Judea believes that we truly understand a situation only when we can handle counterfactuals, i.e. imagine an alternate history or course of events. As functioning humans, we are used to counterfactual thinking. For example: what if I had pursued my singing career instead of becoming a data scientist; what if Hillary Clinton had won; what if…
Basically, we have the ability to conceptualize alternate futures based on the decisions taken, because we have a mental model of how decisions are converted into results. And this is the key point Judea makes in his book: current AI is primarily data hungry, and it succeeds commercially simply by riding on pure associations, or relationships within the data.
By association he means correlation, and for generations statistics has been saying that correlation does not imply causation. But statistics never went down that path; it did not evolve a grammar of causation, i.e. a mathematical notation for cause and effect.
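A toy simulation (my own sketch, not from the book) makes the point concrete: a hidden confounder Z can drive both X and Y, producing a near-perfect correlation between two variables, neither of which causes the other.

```python
import random

# Confounding sketch: Z causes both X and Y. X and Y never interact,
# yet their correlation is close to 1 in the observed data.
random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.1) for zi in z]  # X is driven by Z
y = [zi + random.gauss(0, 0.1) for zi in z]  # Y is driven by Z

def corr(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(corr(x, y))  # strongly positive, yet X does not cause Y
```

An association-only learner would happily predict Y from X here, and would be wrong the moment we intervene on X, which is exactly the gap between level 1 and level 2 of the ladder.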

Chris Anderson shot to fame almost ten years ago with his contentious article “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete”, in which he argued that correlation is enough, and that the scientific approach of model building is irrelevant in a data-rich world. While the scientific community may have abhorred this stance, the commercial/data mining/AI world embraced it.

Judea also says: “Some associations might have obvious causal interpretations; others may not. But statistics alone cannot tell which is the cause and which is the effect… Good predictions need not have good explanations. The owl can be a good hunter without understanding why the mouse always goes from point A to point B.” By this logic, if it is any consolation, Google, Amazon and Facebook can be good at what they do (which is manipulate us) without really knowing why, for they never cared. They cared about likes, clicks and purchases, but not about who we really are.

Are things this bad with development of AI?

Yes!
To quote from the book: “We hear almost every day, it seems, about rapid advances in machine learning systems—self-driving cars, speech-recognition systems, and, especially in recent years, deep-learning algorithms (or deep neural networks). How could they still be only at level one?”
Judea goes on to say: “I fully agree with Gary Marcus, a neuroscientist at New York University, who recently wrote in the New York Times that the field of artificial intelligence is ‘bursting with micro-discoveries’—the sort of things that make good press releases—but machines are still disappointingly far from human-like cognition.”

The reason he says so is that he believes raw data can only map associations; it does not represent causal relationships, while humans are wired to view the world through a causal lens. As he says in his book: “The world is not made up only of dry facts (what we might call data today); rather, these facts are glued together by an intricate web of cause-effect relationships… causal explanations, not dry facts, make up the bulk of our knowledge, and should be the cornerstone of machine intelligence.”
And therein lies the key insight for all of us: the causal view of the world, which we take for granted, can never be fully realized through the current approach to AI, which believes that data trumps models.
The causal way of thinking is more efficient in spite of its shortcomings, namely the numerous cognitive biases we inherit on account of it (read Daniel Kahneman’s book Thinking, Fast and Slow).
A child may drop a glass tumbler onto the floor, see it smash, and conclude that glass is brittle and will break. How many such instances are needed before a machine learning algorithm marks the pattern as statistically significant and learns the same! Well, to answer this, I’ve paraphrased the Nobel Laureate Bob Dylan…
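Before the song, a playful back-of-envelope sketch of my own (not from the book): suppose a purely statistical learner starts from a naive null hypothesis that glass breaks on half of all drops, and only “learns” once the observed run of breaks becomes significant at the usual 5% level.

```python
# How many consecutive breaks before an association learner is convinced?
# Null hypothesis (assumed for illustration): glass breaks with probability 0.5.
alpha = 0.05  # conventional significance threshold
n = 0         # number of drops observed so far, all of them breaks
p = 1.0       # p-value: chance of n breaks in n drops under the null
while p >= alpha:
    n += 1
    p = 0.5 ** n

print(n, p)  # 5 drops needed; the child needs just one
```

Five smashed tumblers versus one: the child’s causal model (brittle things shatter) generalizes instantly, while the significance-chasing learner keeps breaking glassware.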

The AI-Song

 
