“What young statisticians should resist is the hypocrisy of stealth advocacy:” An interview with Andrea Saltelli

Andrea Saltelli has worked on physical chemistry, environmental sciences, applied statistics, impact assessment and science for policy. His main disciplinary focus is sensitivity analysis of model output, a discipline in which statistical tools are used to interpret the output of mathematical or computational models, and sensitivity auditing, an extension of sensitivity analysis to the entire evidence-generating process in a policy context. A second focus is the construction of composite indicators or indices. Until February 2015 he led the Econometric and Applied Statistics Unit of the European Commission at the Joint Research Centre in Ispra, Italy, developing econometric and statistical applications, mostly in support of the services of the European Commission, in fields such as lifelong learning, inequality, employment, competitiveness and innovation. He participated in the training of European Commission staff on impact assessment.

At present he is at the European Centre for Governance in Complexity, a joint undertaking of the Centre for the Study of the Sciences and the Humanities (SVT) – University of Bergen (UIB) – and the Institut de Ciència i Tecnologia Ambientals (ICTA) – Universitat Autonoma de Barcelona (UAB). The ECGC is located on the UAB campus in Barcelona. He is also the co-author of many Wiley books, including the best-selling Global Sensitivity Analysis: The Primer.

Statistics Views talks to Dr Saltelli about his career in statistics.

1. You studied inorganic chemistry and physics at Rome University, but when and how did you first become aware of statistics as a discipline and what led you to change direction to become an applied statistician?

After a spell in chemical kinetics, I was given a job in modelling. It was soon clear that we had more models than ways to test their relevance. This is how uncertainty and sensitivity analysis came into play, and I was quickly taken with it. An economist said just a few days ago in the Financial Times, “When using models for policy advice, one must at least verify that unrealistic assumptions do not lead one astray”. In retrospect, I can say that this happened quite frequently in our line of business – we were modelling how to make nuclear waste disposal safe. Although I no longer believe that modelling is the solution to the problem of nuclear waste disposal, the methods we developed for sensitivity analysis were disarmingly candid. They had the potential to tell model developers what to expect, how to exert restraint, and when to stop in terms of model complexity. Whether these tools were in the end taken up by the modellers’ community at large is another story. When I started this work in the 1980s, sensitivity analysis practitioners did not even know one another – separated as they were by disciplinary divides. In time we developed into one community, and this is something I enjoyed contributing to – in the end I felt more part of that than of chemistry or physics circles.
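The variance-based methods Saltelli refers to can be illustrated with a toy example. The sketch below is a hypothetical illustration, not code from his work: it estimates first-order Sobol sensitivity indices for a simple additive model with known analytic answers, using the standard “pick-freeze” sampling scheme (the `model` and `first_order_sobol` names are made up for this example).

```python
import random

def model(x1, x2):
    # Toy additive model: Y = X1 + 2*X2 with X1, X2 ~ Uniform(0, 1).
    # Analytically Var(Y) = 1/12 + 4/12, so S1 = 0.2 and S2 = 0.8.
    return x1 + 2.0 * x2

def first_order_sobol(n=100_000, seed=7):
    """Estimate first-order Sobol indices via the 'pick-freeze' scheme:
    draw two independent input samples A and B; for index i, re-run the
    model with input i taken from A and the other input from B."""
    rng = random.Random(seed)
    a = [(rng.random(), rng.random()) for _ in range(n)]
    b = [(rng.random(), rng.random()) for _ in range(n)]

    y_a = [model(x1, x2) for x1, x2 in a]
    mean = sum(y_a) / n
    var = sum((y - mean) ** 2 for y in y_a) / n

    indices = []
    for i in range(2):
        # Keep ("freeze") input i at its A value, resample the other from B.
        y_mix = [model(a[k][0] if i == 0 else b[k][0],
                       a[k][1] if i == 1 else b[k][1]) for k in range(n)]
        # Cov(f(A), f(mix)) / Var(f) estimates the first-order index S_i.
        cov = sum(ya * ym for ya, ym in zip(y_a, y_mix)) / n - mean ** 2
        indices.append(cov / var)
    return indices

s1, s2 = first_order_sobol()
```

A first-order index near 1 means that input alone drives most of the output variance; indices summing well below 1 would signal interactions. Production work would use dedicated estimators and quasi-random sampling rather than this brute-force Monte Carlo sketch.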

2. From 1991-2005, you played a key role in the Joint Research Centre’s (JRC) of the European Commission. What are your memories when you look back on this time and what were your main achievements?

The European Commission’s Joint Research Centre has been an ideal experience – in a sense a safe haven, a place where one could work on interesting problems – of direct relevance for policy – and still have an academic life. So I continued my sensitivity analysis research while working on many different things. I remember doing cloud micro-physics and chemistry related to the role of the sulphur cycle in climate, and the interest in climate and its models has stayed with me to this day.

3. From 2005 until early 2015, you were Head of the Unit of Econometrics and Applied Statistics. How did the Unit develop over this time period and what kind of changes did you oversee?

These were ten embattled years. Perhaps I was too impatient in starting new activities which I thought the JRC had to be involved in, and this then led to frictions with my own institution. In the end though – to the credit of the JRC’s capacity to adapt and transform – all these activities have blossomed, and are considered today central to the mission of the JRC: financial econometrics, educational research based on indicators and benchmarks, and, in general, quantitative analysis in the context of impact assessment and policy support. Just before I left, this also included a dynamic group working on counterfactual evaluation in the field of social cohesion policies. All this was possible thanks to a group of very talented and tenacious collaborators and the positive reception of these activities in the broader house, i.e. in the various branches and services of the European Commission.

As a team, we also made a major effort to systematize the statistics underpinning sensitivity analysis, as well as those used in the construction of composite indicators. The latter has since become a flagship competence centre of the JRC. Last but not least, a small team of scholars of science and technology studies – more or less hidden among the econometricians and applied statisticians – managed to emerge and is today organizing some serious debates in the organization.

Working on impact assessment made me critical of the way quantification is used and abused in policy studies. With the guidance of ‘wise’ friends, such as Jerome R. Ravetz and Silvio Funtowicz, I read what philosophers and historians of science had to say on the matter. One can lie with numbers, with statistics – as the famous phrase goes. One can even lie with sensitivity analysis, I discovered; no tool is foolproof, no technique is neutral – every quantification belongs to a normative framework.

Broadening the analysis of how mathematical and statistical modelling can be used in impact assessment has led to ‘sensitivity auditing’, where the framings, the power relations and the motivations of analytic studies have become an explicit subject of investigation.

Many old ideas had to be revisited in this process, including the faith in the neutrality, objectivity and impartiality of science, its independence from power and policy, and its capacity to solve all practical problems. One of the last discussions I participated in within my organization was to encourage departure from comforting ideas about the salvific role of innovation in the context of the present recession, and also to instil doubt about the view of modern Economics as a master discipline that can adjudicate disputes and inform human actions and decisions. This was helped by my personal acquaintance with historian Erik R. Reinert and my readings of the works of Philip Mirowski.

4. What are your current research interests? Are you working on anything at the moment?

As proof of my irresoluteness, I keep a foot in two shoes. I cannot help still being attracted to algorithms for proper sensitivity analysis, while living intensely the present turmoil over science’s governance. I contributed to a book, appearing these days, where this subject is diagnosed.
5. You have received awards and honours, from the E. Clementel Prize to delivering the commencement speech at Berkeley. Is there a particular award or honour that you are most proud of?

The Clementel grant allowed me to spend a rewarding year at Argonne National Laboratory, near Chicago. This was a unique opportunity to learn from wiser colleagues, some old enough to have participated in the Manhattan Project. The commencement speech at Berkeley, for which I am indebted to statistician and friend Philip Stark, was a real joy. The challenge was to talk to young statisticians without boring their families sitting in the audience. I hope a passing quote from the TV series ‘Battlestar Galactica’ helped.

6. You have authored many publications including four books for Wiley on sensitivity analysis. What are the articles or books that you are most proud of?

The papers written with the Russian mathematician Ilya M. Sobol are surely among those which gave me the most pleasure. The staff at Wiley were good companions to work with, and we have published several books together, though I think book prices in general – not specifically Wiley’s – are still forbiddingly high. The articles I am most proud of are probably those I haven’t written yet.

7. What advice would you offer to today’s students who hope to become statisticians?

I would tell them that they shall live in adventurous times, as statistics is badly needed at the heart of the scientific enterprise and is at the same time caught in the storm that is battering science for policy. We live in times rife with controversies centred on the value of evidence. Here there is plenty of scope for statisticians. There is a lot of statistical work needed to tackle the reproducibility problems just flagged – think of the practice known as p-hacking. Uncertainty is now routinely fabricated or concealed by opposing factions. I said that I do not believe in an algid, detached science, so sides have to be taken. What young statisticians should resist is the hypocrisy of stealth advocacy – the illusion of speaking truth to power.

Instead, statisticians should strive to ask the right questions. The framing of any analysis is the result of societal negotiation and inquiry, rather than of pure knowledge discovery as in traditional science.

One should also resist the hubris that comes from education and the mastery of powerful tools. Risk is different from uncertainty, which is different from indeterminacy. A statistician can work wonders with the craft of Bayesian calculus, but should be careful about presenting this as a solution to the problem of genuine uncertainty. Humility is part of the solution.

Policy-based evidence is the flip side of evidence-based policy. Statisticians should learn to engage in multi-disciplinary and multi-stakeholder settings where a multiplicity of legitimate views is presented in an adversarial context. As taught by physicist Richard Feynman, all scientists – including statisticians – should bend over backwards to ensure that their work can be replicated and possibly proven wrong. There are plenty of courageous initiatives out there where the work of cleansing the system of its internal corruption has just started. Investigators such as John Ioannidis and Philip Stark offer an example to follow.

8. What has been the most exciting development that you have worked on in statistics during your career?

Still keeping a foot in two shoes, I think I have helped to develop some reliable tools in global sensitivity analysis, helping to disseminate good practices. The fact that both sensitivity analysis and sensitivity auditing have been retained by the European Commission in its toolbox for impact assessment is encouraging. The work on composite indicators also gave me great satisfaction. As an amateur epistemologist, I am now enjoying my reading of science and technology studies (STS) and my collaborations with the Centre for the Study of the Sciences and the Humanities (SVT) at the University of Bergen, and with the Institut de Ciència i Tecnologia Ambientals (ICTA) at the Universitat Autonoma de Barcelona, where I am presently based on the lovely campus of Bellaterra, near Barcelona.

The book on the crisis of science, titled Science on the Verge, which has just come out, has been a major effort that I intend to continue in the coming years.

9. What led to the writing process of Science on the Verge?

For many years, my colleagues and I have been reading concerned editorials and detailed analyses of major problems in the control of the quality of scientific work in both social and natural sciences.

Science qua science seems today to have lost control of its own quality control mechanisms. Not a day passes without the academic press raising one more alarm about failed reproducibility, increasing retractions, or serious issues with the peer review system and the metrics used to appraise research. The manipulation of statistics is often part of the picture.

These dysfunctions have not been without consequences for science for policy, impacting important aspects of our daily life, from nutrition to medical research, from the adoption of new technologies to economic policy.

What is unique to this period is that the media themselves jump into the fray and bring these perils to the attention of the general public. Trust in science is likely to be affected, and this generates a completely new situation given the central role of science in ensuring legitimacy of our governments.

Our impression was that a proper appraisal of the crisis and of its societal impact was still missing.

10. Who should read Science on the Verge and why?

Scientists, policy-makers and media professionals should read it. The greatest hope is that the book will also be useful to ordinary readers who wish to make sense of what is happening in the house of science.

11. Were there any elements to writing Science on the Verge that you found more challenging and if so why?

Possibly the greatest challenge was putting together a group of thinkers with different backgrounds and sensibilities, and producing together a coherent account of an unfolding and poorly understood process. The first chapter was for me the most difficult, as there we needed to make a convincing case for the reality of the problem, and for its genesis. Thinkers such as Derek de Solla Price and Jerome R. Ravetz – the latter one of the authors of this work – both put forward what in retrospect we may call a prophetic vision of how the problems would arise. How this will play out in the years to come makes for a fascinating investigation, which the book tries to begin.

12. What do you see as the greatest challenges facing the profession of statisticians in the coming years?

As I mentioned earlier, there is a considerable role for statistics in the present crisis of science’s own quality assurance system – I have already mentioned a few people bravely taking up these challenges. According to Elijah Millgram’s recent work ‘The Great Endarkenment’, all scientific disciplines run the risk of becoming logical aliens to one another, populated by serial hyperspecializers who are incapable of communicating and divided by different standards of argumentation and proof. Millgram, in a sense, echoes de Solla Price’s warning about the possible senility of science, a victim of its own success, growth and specialization. This is one of the great dangers of education: producing hyperspecializers of sorts, scientists bent on forcing “nature into the conceptual boxes supplied by professional education”, as famously noted by Thomas Kuhn. This is the real challenge for the statistical community. Given statistics’ role as the great hinge between the social and natural sciences, between mathematics and policy, we all hope statisticians will be up to the challenge of working across disciplines.


Copyright: Image appears courtesy of Dr Saltelli