“The thing about statistics is there’s not necessarily a right answer”: An interview with Philip Dawid

Philip Dawid is Emeritus Professor of Statistics at the University of Cambridge. He has made fundamental contributions to both the philosophical underpinnings and the practical applications of Statistics. His theory of conditional independence is a keystone of modern statistical theory and methods, and he has demonstrated its usefulness in a host of applications, including computation in probabilistic expert systems, causal inference, and forensic identification.

His co-authored book Probabilistic Networks and Expert Systems won the first DeGroot Prize for a published book in Statistical Science, and he was awarded the Royal Statistical Society’s Guy Medal in Silver in 2001.

He also co-edited Causality: Statistical Perspectives and Applications and Simplicity, Complexity and Modelling with Wiley.

For many years, Philip Dawid was Professor of Probability and Statistics at University College London. He has served as Editor of Biometrika and of the Journal of the Royal Statistical Society (Series B), and as President of the International Society for Bayesian Analysis.

Alison Oliver talks to Professor Dawid about his career in statistics.

1. What was it that first introduced you to statistics, and what inspired you to pursue the discipline as a career?

I started by studying mathematics at Cambridge, which I’d always enjoyed. There was very little statistics at that time in the mathematics undergraduate syllabus and I came through after three years not quite knowing what to do next but thinking that, if I wanted to go out and make some money, statistics might be a decent key to that.

I decided to take the Diploma in Mathematical Statistics at the University of Cambridge, and from a very early stage in that, I was hooked. I found it absolutely fascinating. I was particularly fascinated by the philosophical issues, by the possibility of different points of view. There were differences of opinion; there wasn’t a single way of doing it—something which the mathematicians certainly found rather hard to understand. In mathematics, usually there’s a right answer, and either you know how to find it, or you don’t. The thing about statistics is there’s not necessarily a right answer, and you may only think you know how to find it.

I was influenced very early on particularly by David McLaren who was then a junior lecturer and was given one of the standard courses to teach, which I believe was called “Practical Statistics”. Handed down for many years had been a set of simple problems involving simple data analyses with some toy numbers, teaching you how to do chi-squared and t-tests and things like this. It was called “practical” but really you just had to do calculations. David came to this from an entirely Bayesian point of view and gave some wonderful lectures in which he got us to think deeply about the problem and our opinions about it, to understand the context and assess our uncertainties, and to build a Bayesian model.

After only a very short time of exposure to what David had to say, I thought Bayesian methodology was clearly the only sensible way to do things. It wasn’t that I had a philosophical conversion or anything like that; he just showed very clearly that this was a good way to think about applied problems. Later, I started to think more about the underlying philosophy and mathematics, but mostly it was the way of thinking about applications that really got me hooked at that time.

2. Your research interests include logical foundations of Bayesian and other schools of probability and statistics, Bayes nets, statistical causality, probability forecasting, and forensic inference. What are you working on currently?

I’m still developing things I’ve done over many years. The areas I’m concentrating on are foundations of statistical causality (from a rather idiosyncratic point of view which hardly anybody else in the world agrees with) and issues related to the use of proper scoring rules and their underlying theory and applications.

3. You are an elected Fellow of the Royal Society, of IMS, ISI and ISBA, and have served as Vice-President of RSS and as President of ISBA. What are your memories when you look back on your Presidency of ISBA and what do you feel were your main achievements?

It was the year 2000, the year of the Millennial ISBA Conference. That was held in Crete, a wonderful conference that I remember as a high point. I don’t think it was a time of any great change, either within the ISBA world or the outside Bayesian world; it was more a time of consolidation.

4. You were Editor of the journal Biometrika and of the Journal of the Royal Statistical Society (Series B). What did you learn during your time as journal editor?

I learned what a difficult job being an editor was. Largely that’s because of the variety of different topics that people are working on that are all core statistics, but only a very few of which I felt I had any personal understanding or knowledge of. Everything is a collaborative effort. The main point is to have a good team of associate editors, and they should be in a position to enlist a good team of referees. It’s a hierarchical system; you’re in a position of making final decisions but you do rely on the expertise of a wide variety of other people.

5. How do you think the Bayes community has adapted over the years to meet the needs of its members?

It started out with twenty-five people. Those were the days before computers were available, except maybe a mainframe in a far corner of the building, but it wasn’t used very often. They were also the days when we couldn’t do much except philosophise. Not a lot of calculation, a little bit of maths, very little by way of anything other than extremely simple practical data analysis: normal distributions, Poisson distributions. So we thought quite hard about foundations and things which didn’t need computation. Of course, the world changed. Computers became more and more abundant and easy to use, and in turn made the expansion of the Bayes community possible – together we now had the shared ability to compute, to solve important practical problems which we couldn’t solve before, which has attracted these vast numbers of young people now. Unsurprisingly, that is what they spend a lot of their effort doing: working on computational systems. It’s absolutely wonderful. But I’m now an old fogey and I leave that to the juniors.

6. You delivered the de Finetti Lecture during ISBA 2018. What is the one thing that you wanted your audience to take away from the lecture?

Probability does not exist! There is no such thing as objective probability. The only philosophically sound way of thinking about and modeling uncertainty is the Bayesian approach.

7. You have spent a significant time of your career at UCL and Cambridge, lecturing and doing research. What was it about these institutions that made you stay?

I spent most of my working life at University College London, which was my main cradle. Dennis Lindley was my mentor—a wonderful, wonderful guy. It was a very supportive environment. One of the big differences between UCL and Cambridge (where I had been as a student, and later went back as a member of staff and professor) was that at University College we had a completely freestanding department of statistics. It had been formed around 1910. We enjoyed very strong interactions with a lot of other departments in the college, except mathematics!—we had hardly any interaction with mathematics because we were doing statistics, we weren’t doing mathematics. When I went to Cambridge I became deeply embedded in a pure mathematics department, and it was a very different outlook. If I am to compare the two, I found the freestanding statistics environment more supportive and, in turn, more effective for me. But there are different ways of being successful and it’s different for everybody.

8. What do you think have been the most important recent developments in the field?

For me, what’s been the most important development recently is the resurgence of an emphasis on causality, and particularly the development of important new ways of thinking about and manipulating it. I’m very happy to see how causality has become an important enterprise across statistics and artificial intelligence and machine learning and a number of other fields—and of course with very important scientific applications.

9. Your research has been published in many journals and books (over 200 articles): is there a particular article or book that you are most proud of?

In 1979 I wrote a paper on conditional independence in statistics, which was read before the Royal Statistical Society and published in its Journal Series B. That was the first of many papers on the subject that I’ve continued to work on. A major strand of my whole academic life has been to think about the theory and applications of conditional independence.

10. What is the best book in statistics that you have ever read?

There are three that come to mind.

Right from my early student days, when I was studying for the Diploma, I was introduced to a classic book which I thought was like a Mozart symphony because it was so beautiful – William Feller’s An Introduction to Probability Theory and Its Applications. It was just so elegant and lovely, and I thought that was the model for how a book should be written. Not that I particularly used the material in it or developed it, but I enjoyed it greatly.

I have to mention de Finetti’s Theory of Probability as being an important one for every Bayesian to read.

And ever since I came across Judea Pearl’s latest book (with Dana Mackenzie) on causality, The Book of Why, I’ve been recommending that to everyone, because I think it’s really splendid.

11. What would you recommend to young people who want to start a career in statistics?

Statistics has expanded so much. It’s gone so far beyond what was traditionally thought of and done in statistics departments. In so many other areas—machine learning, artificial intelligence—many of the folks may not say or know they’re doing statistics, but really they are, so I think the point is not to be hidebound by terms and descriptions.

There are lots of ways you can be involved, in many different fields. I don’t mind if there are clever people working in machine learning departments doing good stuff with statistics who don’t think of themselves as statisticians; I don’t care, so long as it’s good stuff. On the other hand, I think there are still some ways of thinking in which statisticians are the experts, and maybe the masters: understanding the importance of uncertainty, for example. But nowadays the boundaries are extremely fluid, and I’m hoping they’re going to get even more fluid. We’ve got to be still more interactive with other people.

My advice would be to go into statistics if that’s what you think is right for you. There are other areas where you could use your talents in much the same way. You’re certainly going to need a mathematical background, possibly deep mathematics, but there’s a lot you can do with quite simple mathematics as long as you have a good intuition. Don’t be afraid to be critical. Don’t take anything for granted. In particular, beware of bandwagons. There are enormous bandwagons trundling through the landscape, and they may not always be going in good directions, so be cautious.

12. Who are the people who have been influential in your career?

I mentioned David McLaren. He kept rather a low profile after Cambridge. He ended up as a professor in Glasgow, and I’ve always admired him enormously. He was a very original thinker. Dennis Lindley, both for what he taught me and what he did for me; in particular, for appointing me as a lecturer in his department, which was the beginning of my career, and for which I’ll be eternally grateful to him. He was a lovely man.


Copyright: Image appears courtesy of Professor Dawid