Professor Robert Kass has been on the faculty of the Department of Statistics at Carnegie Mellon since 1981. He joined the Center for the Neural Basis of Cognition (CNBC, run jointly by CMU and the University of Pittsburgh) in 1997, and the Machine Learning Department (in the School of Computer Science) in 2007. He served as Department Head of Statistics from 1995 to 2004 and was appointed Interim Co-Director of the CNBC (CMU-side director) in 2015. He became the Maurice Falk Professor of Statistics and Computational Neuroscience in 2016.
His research has been in Bayesian inference and, beginning in 2000, in the application of statistics to neuroscience. Professor Kass is known not only for his methodological contributions, but also for several major review articles, including one with Adrian Raftery on Bayes factors (Journal of the American Statistical Association, 1995), one with Larry Wasserman on prior distributions (Journal of the American Statistical Association, 1996), and a pair with Emery Brown on statistics in neuroscience (Nature Neuroscience, 2004, also with Partha Mitra; Journal of Neurophysiology, 2005, also with Valerie Ventura). His book Analysis of Neural Data, with Emery Brown and Uri Eden, was published in 2014.
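For readers who want the one-line version of what the Bayes factors review is about, here is the standard definition (our gloss, not text from the interview): the Bayes factor compares two candidate models by the ratio of their marginal likelihoods.

```latex
% Standard definition (editorial addition): the Bayes factor comparing
% models M1 and M2 given data D is the ratio of marginal likelihoods,
\[
\mathrm{BF}_{12} \;=\; \frac{p(D \mid M_1)}{p(D \mid M_2)}
\;=\; \frac{\int p(D \mid \theta_1, M_1)\,\pi(\theta_1 \mid M_1)\,d\theta_1}
           {\int p(D \mid \theta_2, M_2)\,\pi(\theta_2 \mid M_2)\,d\theta_2},
\]
% so that the posterior odds equal the Bayes factor times the prior odds.
```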
1. You gained your PhD in Statistics at the University of Chicago. What was it that first introduced you to statistics and what then inspired you to pursue the subject area?
Well, I had several experiences working in labs, both in high school and in college, and I liked that way of thinking. Then, at some point, it became apparent to me that there were people in the lab who needed to do elementary statistics, but they were really stuck with it and didn’t know what to do. So, I read a basic statistics book and, because I was a math person, I had an easy time understanding it. I explained it to them and they were so grateful that I thought, “Oh, well, maybe I should do this all the time.”
As I got into it and learned more about it—really it was mostly in graduate school that I learned what the subject was—I started to think of it as quantitative epistemology. I’d been struggling with my love of math and my interest in science, and I didn’t really see how to put them together, but I began to learn that what I really liked was when the math was about something. Eventually, I built up an appreciation for the aesthetics of statistics.
2. You are the Maurice Falk Professor of Statistics and Computational Neuroscience at Carnegie Mellon and have been part of its Department of Statistics since 1981. What is it that you love about Carnegie Mellon that has kept you there?
When I came out of graduate school, I was interested in being in a place where I could think more about Bayesian statistics, and there were only two institutions in the United States that were considered hospitable to Bayes: one was Carnegie Mellon and the other was the University of Wisconsin. One factor in my choice was that when I visited the University of Wisconsin, Dennis Lindley was there on sabbatical, and he advised me that I should go to Carnegie Mellon.
What kept me here has been that it’s a very unusual institution and we have a wonderful department. The department has been extremely collegial, and valued not only Bayesian statistics early on, but also interdisciplinary work—which would become very important to me, eventually—and computation, which was obviously an important direction for statistics. It was a place that supported all the right things in statistics. It has been an extraordinarily interesting university, with top researchers, not in all areas but definitely in certain specific areas, especially those closely related to computation, and it prides itself on being interdisciplinary. It’s been an easy place to interact with other people, and that’s really the main thing. I got to know, and work with, a lot of people at Carnegie Mellon and in Pittsburgh broadly, and I grew deep roots here. I can’t imagine having more enjoyable and enlightening colleagues.
3. What has been the most exciting development that you have worked on in statistics during your career?
Well, there’ve been two. Firstly, I was an active participant in the Bayesian revolution, which occurred during the 1990s, and just after. Secondly, I’ve also been part of what, I think, is becoming equally revolutionary, pushing statistics in neuroscience, especially the part of neuroscience that’s not neuroimaging per se. Most people think of neuroimaging when they think of statistics in neuroscience, but there’s a whole other part of it that is, I would say, closer to brain mechanisms, closer to neurophysiology. That’s what I’ve focused on, and there’s a dramatic change in the level of statistical work in neurophysiology, a change that’s still underway.
4. Could you tell us more about your work?
A little over 15 years ago, I started putting all my effort into learning about neuroscience, and how statistics could be used in neuroscience. So very broadly, again, I returned to my roots in laboratory science. I got interested in a particular part of neuroscience, usually called electrophysiology, which is based on electrical recordings from neurons. It’s a very important part of neurophysiology and of the history of neuroscience, and it continues to be important. It’s a place where statistics can play a valuable role in moving the field forward.
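To make “electrical recordings from neurons” concrete, here is a minimal sketch (our own illustration, not code from Professor Kass’s work; the homogeneous-Poisson assumption and the function names are ours) of a standard first step in spike-train analysis: treating spike times as a point process and estimating a firing rate from binned spike counts.

```python
import numpy as np

def simulate_poisson_spikes(rate_hz, duration_s, rng):
    """Simulate spike times from a homogeneous Poisson process."""
    n_spikes = rng.poisson(rate_hz * duration_s)
    return np.sort(rng.uniform(0.0, duration_s, size=n_spikes))

def firing_rate_histogram(spike_times, duration_s, bin_s=0.05):
    """Estimate firing rate (spikes/s) with a simple binned histogram."""
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    counts, _ = np.histogram(spike_times, bins=edges)
    return edges[:-1], counts / bin_s

rng = np.random.default_rng(0)
spikes = simulate_poisson_spikes(rate_hz=20.0, duration_s=2.0, rng=rng)
t, rate = firing_rate_histogram(spikes, duration_s=2.0)
print(f"{len(spikes)} spikes; mean estimated rate {rate.mean():.1f} spikes/s")
```

Real electrophysiological analyses go far beyond this—time-varying rates, non-Poisson history effects, many simultaneously recorded neurons—but binned spike counts from a point process are the usual point of departure.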
5. What are you currently working on?
I’m most interested in what I’m calling the problem of dynamic neural network analysis. It involves the way a brain-like abstract network can evolve over time, and the way information flows across the network. It turns out that right now we do not have adequate statistical tools to describe the rich kind of network dynamics that are exhibited in animal and human brains. So, this is where my collaborators and I are devoting all our efforts, taking very small steps at first, but trying to build up, to be ready for very complex and rich data sets that are just now starting to be collected in neuroscience.
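As a toy illustration of what “dynamic” means here (our own sketch, not the methods his group is developing), the most naive approach tracks the association between two recorded signals in sliding windows; his point is precisely that far richer tools than this are needed.

```python
import numpy as np

def sliding_window_corr(x, y, win, step):
    """Naive time-varying 'connectivity': correlation of two signals
    computed in sliding windows."""
    centers, corrs = [], []
    for start in range(0, len(x) - win + 1, step):
        xs, ys = x[start:start + win], y[start:start + win]
        corrs.append(np.corrcoef(xs, ys)[0, 1])
        centers.append(start + win // 2)
    return np.array(centers), np.array(corrs)

# Two toy "neurons" whose coupling switches on halfway through.
rng = np.random.default_rng(1)
n = 2000
shared = rng.normal(size=n)
x = rng.normal(size=n) + np.r_[np.zeros(n // 2), shared[n // 2:]]
y = rng.normal(size=n) + np.r_[np.zeros(n // 2), shared[n // 2:]]
t, c = sliding_window_corr(x, y, win=200, step=100)
print(np.round(c, 2))  # correlation rises in the second half
```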
6. At JSM 2017 you received the R.A. Fisher Award and gave the Fisher Lecture on The Importance of Statistics: Lessons from the Brain Sciences. Please could you tell us more about this topic?
My thoughts on the importance of statistics, in the Fisher Lecture, were based on a couple of observations. There’s an initiative in the U.S. called the BRAIN initiative, and there are now initiatives in the European Union and in Asia, and other places, that are emphasizing the importance of brain science. These are typically—right now certainly in the U.S.—very heavily funded, but they are also very underserved by statistics.
When I say that, I really mean two different things. Firstly, within neuroscience, advanced methods for data analysis are developed mainly by physicists, not statisticians. The physicists are sometimes very good statisticians, but sometimes they’re not; sometimes they make very basic mistakes, and their approach to solving problems is often very different from that of statisticians. Secondly, there are these challenging statistical problems involving dynamic network analysis that … well, put it this way: we could have 50 statistics PhD theses in this area and we’d still have more work to do. It’s a big, hard problem.
So, the first point I was trying to get across in my Fisher Lecture was that there are some basic teachings of the statistical paradigm that are fundamental to everything we do. Although there are many people, historically, who contributed to the rise of statistics, I don’t think anybody did as much as Fisher—no one person really did as much as Fisher to create what we now think of as the statistical paradigm (though I should add that, with my Bayesian roots, I really think of it as the Fisher-Jeffreys paradigm).
The second point was to try to explain and pose this problem of dynamic network analysis, because I want more statisticians to join the effort.
7. What does it feel like to be recognized by the Institute for Scientific Information as one of the 10 most highly cited researchers, 1995-2005, in the category of mathematics (ranked #4)?
Honestly, I first heard about being highly cited when my colleague Adrian Raftery, with whom I wrote my most highly cited paper, “Bayes Factors,” contacted me because he was editing a book on the future of statistics around the turn of the millennium. He had prepared the book’s author index, and it turned out that I was the third most highly cited author in it. I wrote him back immediately and said, “Oh, you must have made a mistake.” And he wrote me again and he said, “No, I checked, and you’re tied with two others at number three.” That was the first time it would ever have occurred to me that I might be highly cited, and it was absolutely shocking. Several years later, the Institute for Scientific Information became very involved in ranking people by citations and so forth, and I was on the list of one such ranking, and then the 10 most highly cited list came along.
I have mixed feelings about it in the sense that as a sensible statistician I know that citations are a very imperfect indicator of scientific impact, but it sounds great. I admit that I love the sound of it. Rankings are especially good at getting the attention of people who don’t know anything about statistics. I think being on that list you’re asking about, in particular, has been very helpful to me in superficial ways. But my peers in the field already know me, and they have their own feelings about our collective efforts, and all the individual contributions. The citation numbers don’t play a very big role there. My biggest reward has been that people I respect are willing to talk to me about ideas I’m grappling with.
8. From your teaching experience, has the teaching of statistics evolved over the years and met the changing needs of the students?
My answer is yes and no. Yes, it’s evolved; to me the most visible part of the evolution is among a group of very dedicated statistical educators—there’s one such formal group in the U.S. and there’s also an international group—who have meetings and so forth about the teaching of statistics.
There’s been a lot of modernization of statistics teaching, in parallel with the modernization of teaching across the board, which has to do in part with the influence of cognitive science, which informs instructors better about how students learn. In addition, modernization has grown because modern computers have created opportunities for new ways of delivering material: visualization, especially, but also all kinds of games and experiments. There are many different ways people can make their classes better just in terms of the way everything is delivered.
On the other hand, statistics teaching has been very slow to evolve in terms of the way learning goals influence the content of courses. Modernization efforts mostly aim at better delivery of existing content. I think that too little attention is paid to identifying the things students really should come away with. When people talk about teaching, they very quickly delve into discussions of specific content, and what I’m talking about must precede any consideration of content. I just think most statistics teachers have given way too little thought to achieving the full multiplicity of learning goals.
Furthermore, the structure of academic environments encourages conservatism. There’s very little incentive to do anything truly new and different. In fact, there are barriers to that: in terms of usual structures, it’s very hard to make big changes in curricula. I find that frustrating.
9. What have been the most popular lessons that your students respond to?
I don’t know about popularity, but what I try to get across is the combination of statistical principles and statistical pragmatism. That, on the one hand, we have guiding principles that are formulated using mathematics, and on the other hand, we have a very common notion that models do not fully capture the behaviour of the things we’re trying to model. So, we have a sense that models are always imperfect—as in the famous quote from George Box, “All models are wrong, but some are useful.” It is the combination of these principles, which come from theory, together with the pragmatism, that I try to get across at every step of the way. That’s what I want students to come away with: that there is knowledge in statistics that’s informed by theory, and at the same time theory can’t apply perfectly, so we should be pragmatic in order to make progress. As I said in my Fisher Lecture, this frame of mind is fundamental to citizenship, and it’s fundamental to advanced work in statistics as well.
10. Your research has been published in journals and books: is there a particular article or book that you are most proud of?
I’m proud of several review articles I wrote. Some people might not be, but I am. The Bayes factor paper, the paper on prior distributions. Also, I wrote an article in Statistical Science in 2011, “Statistical Inference: The Big Picture,” and I’m proud of that. Again, people might think that this isn’t hard-core statistics; to me, it’s actually really important. It was important to me personally and represented a lot of my own time and thinking.
I have a book, Analysis of Neural Data, written with Emery Brown and Uri Eden. When it was finished, I felt like this was the book I was born to write. I’m very proud of it. Even though, superficially, if someone were to pick it up, the topics make it look like a typical application-oriented book on statistics, to me it’s a lot more than that because I worked so hard to give readers a sense of the practice of statistics.
11. You have also been the interim Co-Director of the Center for the Neural Basis of Cognition. Please could you tell us more about this role?
The CNBC, the Center for the Neural Basis of Cognition, is a remarkably successful enterprise that bridges the University of Pittsburgh and Carnegie Mellon, and it does this by combining the biological side of neuroscience, the psychological side of neuroscience, and the computational side of neuroscience. My own interest is, of course, especially in the computational side, which includes data analysis. I wrote a review with 24 other authors, which appeared in the 2018 Annual Review of Statistics and Its Application, called “Computational Neuroscience.” It’s an overview with mathematical and statistical perspectives. So that’s been really one of the core things that I’ve been involved in here in Pittsburgh: growing computational neuroscience and growing it with a strong statistical perspective. I’ve also been heavily involved with supervising our computational neuroscience PhD program. The directorship role, well, it was essentially being a department head, and I’d already done that in statistics for 9 years, so taking this on for 3 years wasn’t difficult for me.
12. What is the best book in statistics that you have ever read?
It’s a two-volume classic by Feller called Introduction to Probability Theory and Its Applications, and it’s a beautiful book. It’s beautiful because it was very consequential: it laid out the subject matter of probability theory in a unique way. But more than that, it combined rigorous mathematics with conceptual development in a way that I’ve rarely seen replicated. I wish that our field would be more imitative of Feller’s style. In fact, I came to realize that I was heavily influenced by Feller’s style in writing Analysis of Neural Data. A major objective was to blend intuitions based on theory with intuitions developed from practical applications. I wish more people would pay attention to Feller’s book.
13. What would you recommend to young people who want to start a career in statistics?
That they pay attention to and develop both theory and practice. Regardless of how they balance those two things in their career (some are more theoretical, some are more applied), they need to have an appreciation for both, and they have to understand how they interact.
14. Who are the people who have been influential in your career?
There’s my PhD thesis advisor, Steve Stigler, and my original academic advisor at the University of Chicago, Paul Meier, but I also want to mention David Wallace. He was a major figure at the University of Chicago who was a passionate intellect; everyone who came through Chicago recognized his unique knowledge and perspective, but he didn’t write very much and he didn’t have a reputation among the general population of statisticians. He was very influential for a lot of people.
I’ve had a lot of wonderful colleagues in the Bayesian world, and of course I have to mention Emery Brown, who not only helped me write Analysis of Neural Data, but who has also been absolutely essential to my development in neuroscience. He’s an MD-PhD. His PhD is in statistics. He’s a practicing anesthesiologist, but he’s also become one of the leaders in neuroscience.