“I am so glad I got into statistics and have the opportunity to use computers while working on engineering problems”: An interview with William Meeker

William Q. Meeker is Professor of Statistics and Distinguished Professor of Liberal Arts and Sciences at Iowa State University. He is a Fellow of the American Statistical Association, the American Society for Quality and the American Association for the Advancement of Science, and an elected member of the International Statistical Institute. A former editor of Technometrics and co-editor of Selected Tables in Mathematical Statistics, he is currently an Associate Editor for Technometrics and Lifetime Data Analysis.

He is co-author of the Wiley books Statistical Methods for Reliability Data (with Luis Escobar, 1998) and Statistical Intervals: A Guide for Practitioners (with Gerald Hahn, 1991), of ten book chapters, and of numerous publications in the engineering and statistical literature. He has won ASQ’s Youden Prize five times and its Wilcoxon Prize three times. He received ASA’s Best Practical Application Award in 2001 and the ASQ Statistics Division’s W.G. Hunter Award in 2003. In 2007 he was awarded the ASQ Shewhart Medal. Meeker has done research and consulted extensively on problems in reliability data analysis, reliability test planning, accelerated testing, non-destructive evaluation and statistical computing.

Statistics Views talked to Professor Meeker after he gave the ASA Deming Lecture at JSM 2015 in Seattle.

1. When and how did you first become aware of statistics as a discipline and what was it that inspired you to pursue a career in statistics?

When I was a youngster, I was a big baseball fan and I would watch it on TV and listen to it on the radio whenever I could. At the end of a broadcast, the announcer would always say, “Our statistician today is…”, so I guessed this was the person keeping track of batting averages and such things. As an undergraduate, I took a couple of courses in statistics but that did not give me an appreciation that there was a discipline called statistics. That realisation came when I was in graduate school studying operations research and was invited to be an intern at General Electric Research. I spent a summer working with a group of statisticians and there became aware of what statisticians really do. It was a wonderful experience for me because I got my feet wet working on applied problems and I was also invited to participate in a number of consulting visits where I could see statistics in action.

2. You are currently Professor of Statistics and Distinguished Professor of Liberal Arts and Sciences at Iowa State University. Over the years, how have your teaching and research motivated and influenced each other?

My philosophy on teaching is to motivate things with real examples, and I have wanted to do that ever since the first day I got into the classroom, so I was lucky to have had my experience as an intern. The textbooks at that time were really quite horrible! They used silly examples involving dice and cards, which just weren’t very interesting. Back then I was finding my own examples to replace those. I still try to integrate my research and consulting experiences, and things I learn when I come to conferences, into the courses so that students get a sense of what is real, important, and useful.

3. Your research interests include problems in reliability data analysis, survival analysis and statistical computing. What are you focussing on currently and what do you hope to achieve through your research?

The big thing that I have begun to work on with some of my current and former students is modern reliability field data modelling and analysis. One paper we wrote had the title “Reliability Meets Big Data”. What is changing is that many systems today are being outfitted with sensors and communications technology, allowing them to generate huge amounts of data about how they are being operated, and there are tremendous opportunities to use such data to improve the way that we make reliability inferences and predictions. We have already completed several such projects and I am getting involved in some new ones, which is very exciting.

4. You have co-authored the books Statistical Methods for Reliability Data for Wiley, as well as Statistical Intervals: A Guide for Practitioners (again Wiley), along with numerous book chapters and publications in the engineering and statistical literature. Do you continue to get research ideas from statistics and incorporate your ideas into your teaching? Where do you get inspiration for your research projects and books?

Almost all of my research, and the examples in our books, come from my experience with practical problems in industry. I mentioned that I was a summer intern at GE; that continued for three summers. Even after I graduated and went to Iowa State, I visited GE for a couple of weeks each summer to continue my collaborations with the statisticians there. Then I received an invitation to visit Bell Laboratories for entire summers and work in their Centre for Quality and Productivity, which included reliability. Those visits also continued, and I was invited back for fifteen years in a row, working on telecommunications applications and reliability. I would go off and do these things in the summer and often find projects for my students to work on after I returned to Iowa.

Most of the research that I have been involved with is collaborative research, sometimes with people in industry but particularly with my students. My students love to have these real problems instead of just working on something that I made up and felt we should do. I have been very fortunate to have had those kinds of experiences. Bell Labs began to disintegrate in the late 1980s; I continued to go until 1992, but by then most of the people I had been working with had left. Since then and up until today, I often get involved in consulting problems and in collaborations with colleagues in the College of Engineering at Iowa State, and I am always looking for something novel to start off a research project. When consulting for a company, I will often do the quick-and-dirty solution myself and then give the problem to a graduate student who can refine it, extend it, formalise it and write it up. When you are working on real problems like that, it almost guarantees impact because you are working on something that has an immediate application.

5. You are also currently an Associate Editor for Technometrics. What makes Technometrics different from other journals in the field?

Technometrics has always been focussed on applications of statistics in the physical and engineering sciences, which is really where my primary interests are. It’s a great journal because it focuses on problems that are important: indeed, one of the criteria for acceptance is that a paper addresses a realistic problem rather than an esoteric one, and that has been the tradition of Technometrics for more than 40 years.

6. Who should be reading the Journal and why?

The target audience is applied statisticians and engineers working in statistics, particularly those working in industry. The journal is also aimed at statisticians who are involved in research at universities. Technometrics helps them keep up to date with the cutting-edge research that is happening in the engineering sciences and the new statistical methods that are being developed to handle those kinds of problems.

One of the things that separates Technometrics from other journals, I believe, is that we don’t just make accept/reject decisions on papers. If we see something that has potential but is not quite there yet, particularly from some of the younger people in the discipline, we will help guide the authors toward making the paper better and bringing it up to standards.

7. What has been the most exciting development that you have worked on in statistics during your career?

Back in the mid-1980s, when I was working at Bell Labs in the summers, I was able to get involved in some of the reliability modelling and data analysis in a project that was going to support the first undersea digital cable using modern electronics. Up until that time, undersea cables still used analogue technology, partially because engineers were concerned about the reliability of solid-state devices. They understood vacuum tubes and had been using them for 30 years in that kind of application, so this new undersea cable had to be designed very carefully so that it would have high reliability. Engineers were gathering a new kind of reliability data that really had not been very widely used, which we call ‘degradation data’ today. Traditional reliability data would be failure times for units that failed and running times for units that didn’t fail, and we can use such data to make inferences about the reliability of a component. With degradation data, you are actually measuring how units progress toward failure, so there is a tremendous amount of additional information there.

With repeated-measures degradation data you track units over time. An example would be telecommunications lasers, which were designed for constant light output; to maintain that output, the operating current would go up over time. Eventually, the current needed to operate the laser would be too high and the unit would be declared a failure. Engineers did not know how to take the degradation data and turn them into lifetime inferences. One of my graduate students and I took up the problem and we developed the first formal statistical methods to make lifetime inferences from degradation data. Today it’s fairly commonplace, but it was really exciting to be on the ground floor of something like that.
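
To make the idea concrete, here is a minimal sketch of that kind of two-step analysis in Python. It is not a reconstruction of the method Meeker and his student published; the number of units, the inspection schedule, the degradation rates and the 10% failure threshold below are all invented for illustration.

```python
# Toy two-step "degradation to lifetime" analysis:
# (1) fit a degradation path to each unit's repeated measurements and
#     extrapolate to a failure threshold to get a pseudo failure time;
# (2) fit a lifetime distribution to those pseudo failure times.
# All numbers (units, inspection times, slopes, threshold) are invented.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

THRESHOLD = 10.0                          # % current increase that defines "failure"
times = np.arange(250.0, 4250.0, 250.0)   # inspection times in hours

# Simulate 15 lasers: roughly linear degradation, slope varying by unit.
n_units = 15
slopes = rng.lognormal(mean=np.log(2.0e-3), sigma=0.4, size=n_units)
paths = slopes[:, None] * times + rng.normal(0.0, 0.1, (n_units, times.size))

# Step 1: least-squares line through the origin for each unit, then
# extrapolate to the threshold crossing (the pseudo failure time).
fitted_slopes = (paths @ times) / (times @ times)
pseudo_failure_times = THRESHOLD / fitted_slopes

# Step 2: fit a lognormal lifetime distribution to the pseudo failure
# times (sample mean and standard deviation on the log scale).
log_t = np.log(pseudo_failure_times)
mu, sigma = log_t.mean(), log_t.std(ddof=1)

# Estimated fraction of the population failing by 5000 hours:
print(f"F(5000 h) = {norm.cdf((np.log(5000.0) - mu) / sigma):.3f}")
```

The methods in the literature go well beyond this sketch (nonlinear and random-coefficient path models, proper treatment of measurement error, and confidence intervals), but fitting a path per unit and converting threshold crossings into lifetime inferences is the core of the idea.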

8. What do you think the most important recent developments in the field have been? What do you think will be the most exciting and productive areas of research in statistics during the next few years?

Over the past 25 years or so, there has been a revolution in statistics, driven by the ability to apply Bayesian methods effectively to a wide range of practical problems. This is having a tremendous effect on applied statistics, but the Bayesians have a problem – if you have a very small amount of data and you are going to apply Bayesian methods, you have to specify a prior distribution. If you do not have, or do not want to use, strong prior information, you need to specify a diffuse or vague prior. But doing this is very hard because there are lots of ways of specifying that diffuse prior information, and final answers can be highly sensitive to the way in which the specification is done. That’s not good. Although I’m not directly involved, there are efforts today to try and solve that problem in a general manner, and it is being attacked from several different directions. I think that eventually the theoreticians will figure out how to solve this important problem.
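
A small numerical illustration of the sensitivity he describes (the data and the particular priors below are chosen for illustration, not taken from any real study): with a binomial model and only five units tested, three priors that are all commonly called ‘non-informative’ give noticeably different posterior summaries for the failure probability.

```python
# With very little data, different "diffuse" priors give different answers.
# Binomial model: x failures in n units tested; a conjugate Beta(a, b) prior
# yields a Beta(a + x, b + n - x) posterior, so everything is closed form.
from scipy.stats import beta

x, n = 1, 5   # one failure in five units tested (invented data)

vague_priors = {
    "uniform Beta(1, 1)":           (1.0, 1.0),
    "Jeffreys Beta(1/2, 1/2)":      (0.5, 0.5),
    "near-improper Beta(.01, .01)": (0.01, 0.01),
}

for name, (a, b) in vague_priors.items():
    post = beta(a + x, b + n - x)
    print(f"{name:30s} posterior mean {post.mean():.3f}, "
          f"95% interval ({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```

Here the posterior mean is about 0.29 under the uniform prior, 0.25 under the Jeffreys prior, and 0.20 under the near-improper prior: visibly different answers from priors that are all routinely described as vague, differences that would essentially vanish with a few hundred observations.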

9. What do you see as the greatest challenges facing the profession of statistics in the coming years?

I would say that there are both challenges and opportunities. First of all, we are all hearing about Big Data, and we need to face the challenges that Big Data brings. Two groups of people are working on Big Data; to simplify, we could think of them as statisticians and computer scientists. Some people are calling themselves data scientists, and some statisticians are asking, “aren’t we the data scientists?” There is some friction here, and we’ve seen this in the past in other contexts within statistics. For example, when Taguchi told us how to appreciate and reduce variability in product manufacturing some 35 years ago, there were people who followed Taguchi and people who opposed Taguchi and his methods. It got very nasty at times, but now we are all working together on the same page saying, “Look, if we combine Taguchi’s ideas with statistics, we can do some really good things.” And it worked. I hope the same will happen before too long, giving us a synthesis of Big Data, tools from computer science, machine learning, statistical methods, and statistical thinking. Right now we are seeing some isolated turf battles, so I see this as both a challenge and an opportunity.

10. You have received many distinguished accolades over the years, including ASQ’s Youden Prize (five times) and the ASQ Statistics Division’s W.G. Hunter Award. What is the achievement that you are most proud of in your career?

I felt the greatest excitement when I was elected a Fellow of the American Statistical Association some years ago. Having my research and other contributions to the discipline recognised in that way was a wonderful feeling.

11. What has been the best book on statistics that you have ever read?

That is a very difficult question to answer. When I finished graduate school, my father asked me what I would like as a graduation present and I said, “Some books!” I love books and I always have piles of them sitting around my desk at home, relating to the projects I am working on and so forth. The two volumes on probability by William Feller were wonderful books. They are classics, written a long time ago but still highly relevant. Anything that I have ever needed to know about probability, I have been able to go there and find it.

When I started working in reliability, while I was an intern at GE, I had access to the detailed notes that later became Wayne Nelson’s books on reliability – all three of them. I went through those notes page by page and gave Wayne a little feedback and I received copies once the books were published. Those books mean a lot to me – I learned so much from them that has helped me do what I do now in reliability.

Finally, D.R. Cox and his colleagues published a whole series of monographs that were very small in terms of size but very valuable in terms of content. Those books probably had the highest ratio of useful information to weight! My favourite was the book by David Cox and David Oakes called Survival Analysis, and I would take it with me on vacations because it was so small that you could easily pack it and pull it out to read while travelling!

12. Your lecture here at JSM 2015 is the ASA Deming Lecture. Could you please tell us about your theme for the lecture and what you wanted your audience to take away most from the lecture?

I kind of cheated when I wrote my talk. I did two things. The first was to say something about Deming, because I think eventually we are going to run out of statisticians who had personal contact with him. I had one contact with him myself, and I talked about that and the related influence that Deming had on me, particularly when writing the first chapter of Statistical Intervals. There, Gerry Hahn and I used the framework that Deming had set out, which he called enumerative and analytic studies, to outline how to view the assumptions of statistical inference. I do not think that Deming agreed with the way that we used that framework, but it’s the way we set it up so that we could tell practitioners what they need to worry about when applying statistical intervals.

Then I turned to the relationship between quality, which was one of Deming’s primary areas, and reliability. Reliability can be viewed as quality over time. After making that connection, I talked about Big Data and reliability, and about the use of the dynamic covariate information that we are now getting because of the sensors in systems that I mentioned earlier. We will be able to use such data to develop predictive models that will allow us to reduce the maintenance costs of a fleet of systems, such as aircraft, and at the same time improve safety.

13. Are there people or events that have been influential in your career?

There are two people who have had the biggest influence on me. One is Gerry Hahn, my co-author on Statistical Intervals. Our relationship goes back to my first internship at GE and we’re still collaborating today; for example, we’re working on the second edition of Statistical Intervals, which we hope to finish in a few months. It has been fun to collaborate with Gerry over the years. He taught me much about statistics, but also how to write. People say that I can write well, but I owe almost all of that to Gerry. He insisted that I write the first drafts of papers and chapters; I would then pass a draft to him, he would make all kinds of corrections and send it back to me, we’d go back and forth, and I learned how to write through that process. I am still learning from Gerry as we complete this second edition.

Luis Escobar has also been highly influential in my career. Luis was my first PhD student, and we were working together as colleagues even when he was a graduate student. We have worked together on many research projects and went on to write the Statistical Methods for Reliability Data book in 1998. Luis has talents that complement mine: he is extremely smart and is able to pay close attention to details. I have learned much from him over the years, and we have had an extremely successful collaborative relationship.

14. If you had not got involved in the field of statistics, what do you think you would have done? (Is there another field that you could have seen yourself making an impact on?)

At one point, I wanted to be an engineer. My grandfather had been an engineer and a radio pioneer; he designed and manufactured high-end radios back in the 1920s that only very wealthy people could afford to buy. Today there are very few of them in the antique radio market because not many were made. But he lost everything in the Great Depression, unfortunately. When he found out that I wanted to be an engineer, as I used to love building electronic devices, he said to me, “That’s fine, but you should have something on the side to make a living, like chicken farming!” So industrial management was my undergraduate major, and I became attracted to computers during the first semester. Every semester I would choose my courses so that I would have access to the only computer on campus. I just loved programming, and the summers meant withdrawal for me because access to computers back in the late 1960s was limited! So I thought that I really wanted to become a computer programmer. But then I realised that programming by itself is pretty dry and what I really liked was solving problems with the computer. This led me to operations research and then to statistics. I don’t think I would have survived as a programmer, and I am so glad I got into statistics and have the opportunity to use computers while working on engineering problems!