"Empirical Bayes has been the most riveting topic for me. It still seems like magic sometimes": An interview with Bradley Efron

Features

  • Author: Statistics Views
  • Date: 01 Jun 2015
  • Copyright: Image appears courtesy of Professor Efron

Bradley Efron is Max H. Stein Professor of Humanities and Sciences and Professor of Statistics and Biostatistics at Stanford University. He is best known for creating the bootstrap resampling technique, which has had an indelible impact on the field of statistics. Over the past decade or so, Efron has been studying statistical inference on massive data sets compiled from research in biostatistics and genomics. He believes that the empirical Bayes method is the key to interpreting the results.

Efron studied Mathematics at the California Institute of Technology, followed by a PhD in statistics at Stanford, under the direction of Rupert Miller and Herb Solomon. His work has covered both theoretical and applied topics, including empirical Bayes analysis, applications of differential geometry to statistical inference, the analysis of survival data, and inference for microarray gene expression data. He is the author of The Jackknife, the Bootstrap and Other Resampling Plans and co-author of An Introduction to the Bootstrap. He has won numerous awards from the Wilks Memorial Award to the Guy Medal.

Statistics Views talks to Professor Efron about his career, his heroes, the best books in statistics, his love for the institution where he has remained for over 50 years and what he might have done instead of statistics.


1. When and how did you first become aware of statistics as a discipline, and what led you to choose to study the subject and then pursue a career in the field?

My dad was a good athlete and a baseball scorekeeper so I had some attractive numbers around as a kid. There wasn't any statistics taught at Caltech, where I was a mathematics major, but maybe Dad's numerical interests resurfaced. In my senior year, one of the math professors let me do a reading course out of Cramér's classic text. By then I'd realized that I was no budding mathematician. I wanted to work on something less austere than pure math. Statistics fitted the bill.

2. What is your current research focussing on? What are your main objectives and what do you hope to achieve through the results?

In the last couple of years I've been interested in how Bayes and frequentist ideas work, or don't work, together.

There's a paper of mine, 'Frequentist accuracy of Bayesian estimates', that has just been published in the Journal of the Royal Statistical Society, Series B. But the main thing I've been working on is a book with my colleague Trevor Hastie, Computer-Age Statistical Inference. Our idea, perhaps over-ambitious, is to trace how electronic computation has changed statistics, both in its methodology and theory, from the 1950s to the present. So for instance, we go from life tables to Kaplan-Meier curves to the log-rank test, and finally to proportional hazards.

3. You are best known for proposing the bootstrap resampling technique. What research led to this discovery? What set you on the right path?

The jackknife was a hot and mysterious topic when I was a graduate student. My advisor, Rupert Miller, wrote a paper called 'A trustworthy jackknife', showing examples of when the jackknife worked or failed. While we were both visiting Imperial College in 1972-1973, Rupert gave a characteristically clear lecture on the topic, after which Professor David Cox suggested to me that this was a promising area. The bootstrap started out as an attempt to put the jackknife on familiar statistical grounds, the first paper being titled 'Bootstrap methods: another look at the jackknife'. So my advice to young researchers is to always hang around brilliant colleagues.
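The core idea of the bootstrap can be sketched in a few lines. This is an illustrative example, not from the interview itself: the data, sample size, and choice of statistic (the median) are all hypothetical, chosen only to show the resampling recipe.

```python
import numpy as np

# Nonparametric bootstrap: estimate the standard error of the sample
# median by repeatedly resampling the observed data with replacement.
rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=50)  # hypothetical sample

B = 2000  # number of bootstrap replications
medians = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    medians[b] = np.median(resample)

# The spread of the bootstrap replications estimates the standard error.
se_hat = medians.std(ddof=1)
print(f"bootstrap SE of the median: {se_hat:.3f}")
```

The same loop works for almost any statistic, which is what made the method so general: replace `np.median` with any function of the sample and the recipe is unchanged.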

4. Do you continue to get research ideas from statistics and incorporate your ideas into your teaching? Where do you get inspiration for your research projects and books?

Sequoia Hall is filled with brilliant people so there is no shortage of good ideas floating around. Another source is the medical school, where I've always had a half-time appointment.

Statisticians are fortunate to work in a field where outsiders demand answers to difficult applied problems. This is not the case with pure math nor in most of the other sciences. It's hard to resist inflicting new (and difficult) problems on one's captive students.

5. You have received numerous awards from the MacArthur Fellowship to the National Medal of Science. Is there a particular award that you were most proud of receiving?

Well there's money, prestige, and the good opinion of one's colleagues. The last counts most heavily I think. Along that line, I really appreciated getting a Guy Medal from the Royal Statistical Society last year.

6. You have authored many publications and have also contributed to Bayesian analysis, applications of differential geometry, analysis of survival data, and inference for microarray gene expression data. What are the articles or books that you are most proud of?

I've always wanted to write a paper that is pure statistics and not math in statistical disguise. Hard to do, but the first bootstrap paper came close. The old Biometrika paper on biased coin designs, 'Forcing a sequential experiment to be balanced', almost makes it too.


7. What has been the best book on statistics that you have ever read?

The most influential one was Cramér's Mathematical Methods of Statistics, written under partial house arrest during WWII, read under statistically isolated conditions at Caltech. My battered copy of Moran and Kendall's little monograph on Geometrical Probability speaks to a rabid reading.

8. What has been the most exciting or demanding development that you have worked on in statistics during your career?

Ever since my first exposure in the 1970s, via Robbins and Stein, empirical Bayes has been the most riveting topic for me. It still seems like magic sometimes.
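The "magic" is easy to see in Robbins' classic Poisson example, the one usually used to introduce empirical Bayes (this sketch is not from the interview; the claim counts below are invented for illustration). If each count x is Poisson with its own unknown mean, then E[theta | x] = (x+1) f(x+1) / f(x), and the unknown marginal f can simply be replaced by the observed frequencies of the counts:

```python
from collections import Counter

def robbins_estimate(counts, x):
    """Robbins' empirical Bayes estimate of E[theta | X = x] for
    Poisson counts: (x+1) * f(x+1) / f(x), with f taken to be the
    empirical frequency of each count in the data."""
    freq = Counter(counts)
    n = len(counts)
    f_x = freq[x] / n
    f_x1 = freq[x + 1] / n
    return (x + 1) * f_x1 / f_x if f_x > 0 else float("nan")

# Hypothetical data: insurance claim counts for 10 policyholders.
claims = [0, 0, 0, 0, 1, 1, 1, 2, 2, 3]
print(robbins_estimate(claims, 0))  # estimated mean for the zero-claim group
```

The striking part, and perhaps the "magic", is that the estimate uses no prior at all: the other policyholders' data stands in for the prior.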

9. What do you think the most important recent developments in the field have been and will be in the future?

If recent means the last 20 years then I'd list false discovery rates, à la Benjamini and Hochberg, and Tibshirani's lasso. From the point of view of usage, one has to count MCMC and objective Bayes inference. The book with Hastie tries to assess the validity of "uninformative Bayes" methods, which aren't firmly anchored in either Bayes or frequentist philosophy.
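The Benjamini-Hochberg step-up procedure mentioned here is short enough to sketch in full (the p-values below are invented for illustration): sort the m p-values, find the largest rank k with p_(k) <= k*q/m, and reject the k hypotheses with the smallest p-values.

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Return a reject/accept flag per hypothesis, controlling the
    false discovery rate at level q (Benjamini-Hochberg step-up)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices by p-value
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / m:   # step-up comparison
            k_max = rank                 # largest rank passing the test
    rejected = set(order[:k_max])
    return [i in rejected for i in range(m)]

pvals = [0.001, 0.008, 0.039, 0.041, 0.3, 0.9]
print(benjamini_hochberg(pvals, q=0.05))
# -> [True, True, False, False, False, False]
```

Note the step-up character: a p-value can be rejected even if it fails its own threshold, as long as some larger p-value passes.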

10. Do you think over the years too much research has focussed on less important areas of statistics? Should the gap between research and applications get reduced? How so and by whom?

Statistics is better balanced now than in my PhD years, when mathematical decision theory ruled the roost. Statisticians, like all scientists, tend to overuse methods that are technically feasible. At opposite ends of the spectrum, it looks like too much asymptotics and too many unmotivated Bayesian analyses to me, but "too much" doesn't mean a waste of time, just personal bias.

11. Do you have any advice for students considering a university degree in statistics?

Yes, do so. There couldn't be a more promising field of endeavor than statistics in 2015, a combination of logic, philosophy, mathematics, computation, and science!

12. You continue to teach at Stanford University. Over the years, how did the teaching of statistics evolve and adapt to meet the changing needs of students? What do you think the future of teaching statistics will be?

The computer has changed teaching for the better. Most of our courses now involve some component of R experience. Traditional stat courses start out with normal distributions and the t-test. These are deep and difficult concepts. The computer can ease the way into basic statistical ideas like comparison, accuracy, correlation, regression, and hypothesis testing. At the more advanced level, we can now bring real data sets into the classroom.

The hardest thing to teach about statistics is why one uses method A rather than method B. Seeing the methods play out in practice is our best spur to useful theory.

13. What is it that you love about Stanford as you have remained there since taking your PhD?

I once ran a survey of the Stanford faculty asking why they stayed at the place. The answer seemed to be the absence of bad features. No bad weather, no big hills to pedal over, not too much interference from the administration... In fact, the higher Stanford administration has always been supportive of statistics, long before this was fashionable (is it fashionable?) and their foresight has built a first-rate department. The real reason I've stayed here, besides slug-like indolence, is the quality of colleagues, both within statistics and in the surrounding science departments.

14. What do you see as the greatest challenges facing the profession of statisticians in the coming years?

The challenge is always the same: to develop truly useful new ideas for the understanding of real world problems. The book with Hastie (yes, I'm pushing it) reviews a dozen first-rate advances since the 1950s: empirical Bayes, James-Stein, jackknife, bootstrap, GLMs, Kaplan-Meier, proportional hazards, MCMC, etc., and we need another dozen by 2050.

15. Are there people or events that have been influential in your career?

The pre-war giants, Fisher in particular; Robbins, Rao, Stein, Tukey, and Cox; and my Stanford colleagues, a few of whom I've mentioned already.


16. Whose work do you admire? (It can be someone working now, or someone whose work you admired greatly earlier in your career.)

Well, one really can't do better than Fisher. Gustav Elfving once said to me, "After I met Doob I couldn't understand why anyone else did probability, and after I met Stein I couldn't understand why anyone else did statistics." I never met Doob, but Stein truly is a scientist to admire.

17. If you had not got involved in the field of statistics, what do you think you would have done? (Is there another field that you could have seen yourself making an impact on?)

Besides not having any statistics, Caltech didn't have any computer science. That was lucky; I might have gotten trapped into it. Given a pre-birth chance to adjust my talent level, writing, particularly humor writing, sounds appealing.
