Judea Pearl, recently announced as the winner of the 2011 Association for Computing Machinery (ACM) A.M. Turing Award for contributions that transformed artificial intelligence, has been at the forefront of the development of computational intelligence for the last two decades.
Vint Cerf, the chair of the ACM 2012 Turing Centenary Celebration, remarked that Pearl's accomplishments "have provided the theoretical basis for progress in artificial intelligence and led to extraordinary achievements in machine learning, and they have redefined the term 'thinking machine'."
Here, Judea talks exclusively about his inspirations and challenges, as well as his thoughts on the future of causality and computational intelligence. You can join the debate about the future of machine learning and react to Judea's thoughts at www.facebook.com/cambridgeuniversitypressphilosophy.
Part One: Judea on his inspirations and breakthrough moments of his research
What inspired your initial interest and work in artificial intelligence? Was there any one scientist/philosopher/academic/writer who ignited your interest?
Not really. I don’t know of any person who would not be interested in understanding himself or herself, each with the best tools available to him or her. Religious preachers attempt this understanding through biblical narratives, Descartes tried to do it with mechanical analogies, psychologists with their crude models of the mind, and we, computer scientists, with the most powerful symbol-processing mechanism ever available to mankind. And emulating is the key to understanding, because it provides us with the ability to take things apart and examine their behavior under the microscope of novel situations and novel configurations; this is what understanding is all about.
Like any other kid with an interest in science, I was fascinated by the lives of Archimedes, Galileo, Newton, Faraday and Einstein, not any one in particular. But their influence on me was perhaps especially profound because my science teachers, in Israel, had a unique ability to enliven these legendary figures and give us the illusion that we, too, took part in their discoveries.
What was the greatest challenge you have encountered in your research?
In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability; it requires a formal language of its own. I say that it was my "greatest challenge" partly because it took me ten years to make the transition and partly because I see how traumatic this transition is nowadays to colleagues who were trained in the standard statistical tradition, including economists, psychologists and health scientists, and these are fields that crave the transition.
Can you describe a breakthrough moment that inspired the achievements of your work?
I described it in my book "Causality". The first idea arose in the summer of 1990, while I was working with Tom Verma on "A Theory of Inferred Causation". We played around with the possibility of replacing conditional probabilities with deterministic functions and, suddenly, everything began to fall into place: we finally had a mathematical object to which we could attribute familiar properties of physical mechanisms and causal relations, instead of those slippery probabilities with which we had been working so long in the study of Bayesian networks.
The second breakthrough came from Peter Spirtes's lecture at the International Congress of Philosophy of Science (Uppsala, Sweden, 1991). In one of his slides, Peter illustrated how a causal diagram should change when a variable is manipulated. To me, that slide of Spirtes's, when combined with the deterministic structural equations, was the key to unfolding the manipulative account of causation, then to its counterfactual account, and then to most other developments I pursued in causal inference.
What is the part of your work and research that you enjoy the most?
Watching my intuition amplified under the microscope of mathematical models, and then seeing how I can do things today that I could not do yesterday.
Your book Causality: Models, Reasoning, and Inference has been described by the Association for Computing Machinery as "among the single most influential works in shaping the theory and practice of knowledge-based systems". What do you think was the most important achievement of your book?
To computer scientists, it was perhaps the development of probabilistic graphical models, which enabled knowledge-based systems to handle uncertainty coherently and distributively while circumventing the combinatorial explosion. This actually started with my earlier book "Probabilistic Reasoning". To empirical scientists, it was the development of graphical causal models, because they permit investigators to articulate causal assumptions transparently, deduce their testable implications, combine them with data, and answer difficult research questions that previously were at the mercy of folklore and guesswork. They even resolved the notorious Simpson's paradox and reduced it to an exercise in graph theory.
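To see the paradox Pearl refers to: in Simpson's paradox, a treatment can look better within every subgroup of the data yet worse in the pooled data. A minimal numerical sketch (the counts below are illustrative, chosen only to exhibit the reversal, and are not from Pearl's work):

```python
# Hypothetical counts: (recovered, total) for treatments A and B,
# stratified by an illustrative subgroup variable.
groups = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(recovered, total):
    """Recovery rate as a fraction."""
    return recovered / total

# A beats B within every subgroup...
for g in groups:
    assert rate(*groups[g]["A"]) > rate(*groups[g]["B"])

# ...yet pooling the data reverses the conclusion: B looks better overall.
agg = {arm: (sum(groups[g][arm][0] for g in groups),
             sum(groups[g][arm][1] for g in groups))
       for arm in ("A", "B")}
assert rate(*agg["A"]) < rate(*agg["B"])
print(f"A pooled: {rate(*agg['A']):.2f}, B pooled: {rate(*agg['B']):.2f}")
```

The graphical resolution Pearl alludes to does not pick between the two answers arithmetically; it asks which comparison is causally meaningful, and that depends on the assumed causal diagram, not on the numbers alone.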
What is the one book you'd recommend reading? (can be fiction/non-fiction)
Daniel Kahneman's "Thinking, Fast and Slow," especially the chapter "Causes Trump Statistics".
Tweet your research/book in no more than 140 characters.
Causal inference is easy, once you overcome two mental barriers: statistical thinking and traditional probabilistic language. In all honesty, there is hardly a causal question that cannot be reduced today to mathematical analysis and machine implementation.
And finally, what is your Desert Island book/play/film/opera/piece of music?
I would take the Old Testament and Mozart's Requiem, to constantly remind me where I come from and where I am aiming.
Part Two: Judea on the future of causality and computational intelligence
Your work was instrumental in changing what it meant for computers to be intelligent. What do you think could be the next big developments in computational intelligence?
It is hard to tell. The field of AI is so diverse and its applications penetrate such a vast range of human activities, that it is impossible to tell whether vision, natural language, or planning is going to see the next big development. I would rather speculate on areas where I have more intimate knowledge and more direct influence.
Where do you see machine learning going in the future?
I have seen much progress in causal discovery, an area where I did some early work in the late 1980s and which has developed significantly since, to the point where annual competitions are conducted for programs that discover the correct cause-effect relationships from a given stream of passive observations. It would be exciting if machines could discover that the rooster's crow does not cause the sun to rise and, more ambitiously, that malaria is not caused by mal-air.
What areas do you think Bayesian networks, causality, and inference have most promise for in the future?
I see untapped opportunities in aggregating data from a huge number of disparate sources, say patient data from hospitals, and coming up with coherent answers to queries about a yet-unseen environment or subpopulation. We have begun to look at this challenge through the theory of "transportability", but we need to go all the way from meta-analysis to meta-synthesis. Currently, meta-analysis does little more than average apples and oranges to estimate properties of bananas. We need a principled methodology for analyzing differences and commonalities among studies, experimental as well as observational, and for pooling relevant information so as to synthesize a combined estimator for a given research question in a given target subpopulation. Our team is currently working on the theoretical aspects of this challenge, and I am sure practitioners will be thrilled with the results.
In recent years, artificial intelligence has matured from logic-based to probability-based and data-based methods. Do you think this trend is irreversible, or do you think that the latter will run their course and there will be a new era for logic?
Logic can play a major role in scaling up reasoning tasks, for example, in going from propositional to predicate logic and relational databases. But it is hard for me to envision how a purely logical system would cope with the uncertainty in the world and, more importantly, how it could learn through the gradual accumulation of (noisy) observations.
Are there any new hot topics or new debates in your subject area which we are yet to hear about?
The temperature of a topic is a function of the thermometer, which in our case is the skill set available to the investigator you ask. Speaking from my perspective, I see the issue of "free will" becoming a topic of lively discussion as robots acquire greater autonomy and become more proficient in counterfactual and introspective reasoning, which in turn are necessary for learning from regret.
On the practical side, I believe that meta-synthesis, as I described above, will become an arena for productive research.
Most current debates are inconsequential. There is a philosophical debate for example on whether counterfactuals are necessary, useful, or dangerous for causal inference. There is also an ideological-methodological debate on whether graphs are necessary, useful, or dangerous for causal inference. These I believe are passing debates that will fade away as soon as the cultural transition from statistical to causal thinking completes its course.
A more important debate rages over how scientists should think about science. In particular, should they ask "How does Nature work?" or "How do we test what Nature does?" My mantra is: Think Nature, not experiment. I have seen too many good ideas stifled by thinking experiments rather than Nature. Had Newton worried about experiments, he would not have theorized that the tides are caused by the moon. The requirement of manipulability, even in theory, has led to some truly weird results in causal inference.
Are there any neglected/new areas in the field that you feel offer the potential for more attention/research?
The most neglected area I know of is causal inference in statistics education, and I have donated part of the Turing Award prize money to the American Statistical Association to establish a prize for the person or team who does the most to introduce causal inference into education. What is missing is a 100-page booklet that would convince every statistics instructor that causation is easy (it is!) and that he or she, too, can teach it for fun and profit.
Do you have any recommendations for further reading and/or specialized libraries/collections?
It is always a pleasure to recommend my books and articles (http://bayes.cs.ucla.edu/csl_papers.html), but if you are aiming at a different source, all standard books in AI nowadays discuss Bayesian networks, graphical models and causal reasoning.
Professor Judea Pearl is the author of the book Causality: Models, Reasoning, and Inference, published by Cambridge University Press, which has been described by the Association for Computing Machinery as being "among the single most influential works in shaping the theory and practice of knowledge-based systems".
Journal of the American Statistical Association
"Pearl's career has been motivated by problems of artificial intelligence, but the implications of this book are much broader...This updated edition of a modern classic deserves a broad and attentive audience."
H. Van Dyke Parunak, reviews.com