Randomized Controlled Trials in Public Policy: An interview with Professor David Torgerson


  • Author: Joanna Carpenter
  • Date: 01 Sep 2013
  • Copyright: First image appears courtesy of iStock Photo. Second image appears courtesy of Professor Torgerson and University of York

Professor David Torgerson, Director of the York Trials Unit, joined the Centre for Health Economics at the University of York in 1995 and became Director of the York Trials Unit in 2002. Originally a health economist, he is now a trial methodologist and has published widely on the design and conduct of randomized controlled trials (RCTs), including the book Designing Randomised Controlled Trials in Health, Education and the Social Sciences.

As well as designing healthcare trials, David works across the social sciences developing policy trials, collaborating with the Behavioural Insights Team at the Cabinet Office, and supporting educational trials and trials in criminal justice.

In June 2012 he co-wrote the Cabinet Office White Paper Test, Learn, Adapt with Laura Haynes, Head of Policy Research at the Cabinet Office's Behavioural Insights Team; Owain Service, the team's Deputy Director; and Dr Ben Goldacre, a Fellow at the London School of Hygiene and Tropical Medicine and author of Bad Science. The White Paper urges policymakers across government to use RCTs to evaluate and improve the effectiveness of public policy.


1. What have you been doing to encourage the use of RCTs in social policy?

At the York Trials Unit we do mainly health-related trials, but for the last ten years we've also been trying to do trials in the social sciences, mainly education but also in criminal justice. We run an annual conference in September on trials in the social sciences, which is helping to build a network of researchers interested in doing such RCTs. I met some political scientists who were doing trials on topics such as different methods of getting people out to vote in elections and different ways of encouraging recycling, which led to me advising the Behavioural Insights Team (BIT) of the Cabinet Office on the design and analysis of their trials.

One of BIT’s remits is to press the case for rigorous evaluation, usually meaning trials, across government. That’s how Test, Learn, Adapt came about, which aimed to encourage policymakers in different government departments to undertake trials of novel interventions rather than just following the usual way of having a policy idea and implementing it with no rigorous evaluation.

2. Are there any signs of progress?

Yes, I think so. For example, the Government has given £100m to the Education Endowment Foundation to fund educational interventions and evaluate them, usually with RCTs, to show whether they are effective or not.

There are also Evidence Centres, loosely based on the NICE model, which summarise evidence on the effectiveness of different social policies to improve decision making.


3. I saw that you’ve co-written a paper on the differences between RCTs in health and education. What are the principal differences?

Easier

A few positive things. Educational trials tend to be easier, despite what you might hear from a lot of educational researchers, who think education is more complex than health. It's not, once you move away from drug trials in health, which are 'relatively' straightforward. But even a 'simple' drug trial is incredibly complicated and difficult to do rigorously. And non-drug trials, such as those in surgery, share many of the same issues as educational trials, such as the lack of double blinding and variation in the delivery of the intervention.

Smaller sample sizes

There are dissimilarities too. One positive thing in educational research is the correlation between pre-test and post-test scores. If you're doing a maths intervention, for example, and you give the children a pre-test in maths, intervene and then give them a post-intervention test, the correlation between those two sets of scores tends to be really high, typically 0.8 or greater. That's rarely the case in health. It means that in education you can often do trials with smaller sample sizes than in health.
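To see why that correlation matters, here is a minimal sketch of the standard two-arm sample-size arithmetic; the effect size, power and significance level are illustrative assumptions of mine, not figures from the interview. Under a baseline-adjusted (ANCOVA-style) analysis, a pre-test/post-test correlation of rho deflates the required sample size by a factor of (1 - rho^2), so rho = 0.8 cuts the required sample to roughly a third.

# Minimal sketch: sample size per arm for a two-arm trial, with and
# without a baseline (pre-test) covariate. All numbers are
# illustrative assumptions.

Z_ALPHA = 1.96   # two-sided 5% significance
Z_BETA = 0.84    # 80% power
EFFECT = 0.3     # assumed standardised effect size (Cohen's d)

def n_per_arm(effect, rho=0.0):
    """Approximate n per arm; a baseline covariate correlated rho with
    the outcome deflates the residual variance by (1 - rho**2) under an
    ANCOVA-style analysis."""
    n_unadjusted = 2 * (Z_ALPHA + Z_BETA) ** 2 / effect ** 2
    return n_unadjusted * (1 - rho ** 2)

print(round(n_per_arm(EFFECT)))           # ~174 per arm with no pre-test
print(round(n_per_arm(EFFECT, rho=0.8)))  # ~63 per arm with rho = 0.8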

Unit of allocation

In education the natural unit of allocation is often the class or the school rather than the child or the person. You also know the membership of the trial before you randomize. Also, teachers are good at testing children. It's what they do every day, so getting them to follow up the outcomes of the children is relatively straightforward, whereas in health, a clinician who runs a surgical trial rarely does quality-of-life follow-up as routine.
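One arithmetic consequence of allocating whole classes or schools, not raised explicitly above, is that pupils within the same class tend to resemble one another, which inflates the sample size needed. A minimal sketch of the standard design-effect adjustment follows; the class size and intra-cluster correlation (ICC) are illustrative assumptions.

# Minimal sketch: the standard design effect for a cluster-randomised
# trial. Cluster size and ICC below are illustrative assumptions.

def design_effect(cluster_size, icc):
    """Inflation factor 1 + (m - 1) * ICC for average cluster size m
    and intra-cluster correlation ICC."""
    return 1 + (cluster_size - 1) * icc

n_individual = 126   # pupils needed under individual randomisation
m, icc = 25, 0.15    # an assumed class of 25 pupils and an ICC of 0.15
print(round(n_individual * design_effect(m, icc)))  # ~580 pupils needed

This is why cluster-randomised trials typically recruit many small clusters rather than a few large ones: the inflation grows with cluster size.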

So there are a lot of similarities and also some differences.


4. What do you think the future holds for RCTs in government? Are you optimistic that they’ll be rolled out more widely?

I hope so. The big problem with policy-level RCTs is… that the government can’t say “OK, we did a trial of x and we found that x didn’t actually work even though we thought it did” because then you get the opposition saying “we told you so, this is a U-turn”, and it’s considered a bad thing to make a U-turn, rather than the government saying “No actually, this is a good thing we’ve done, because the facts on the ground have changed, so we’re going to change our policy to match the facts.”

It’s very difficult for a government. The temptation is to carry on with a mistake just to avoid political embarrassment.

We were involved with an RCT of text messaging for the Behavioural Insights Team to increase fine repayment. That's going to save – assuming they roll it out – millions of pounds over the years, and it's virtually a free intervention. How much does an automated text message cost, a few pence? The chances of that being implemented, I would think, are quite high. So despite some backsliding over badgers, the outlook for evidence-based policy underpinned by RCTs is really positive at the moment.

I am more optimistic than I was ten years ago... I think austerity's helping. There's less money to waste than there was five years ago. Ironically, some of the best social policy trials are done in developing countries, because they have to make sure what they're doing isn't going to waste money. But the same should apply to us.

5. Do you have any examples?

Yes, the poverty reduction strategy called Progresa in Mexico. Some years ago, the Mexican government was persuaded to implement this policy as an RCT. Their advisers said, “Look, if you do it as a trial and you show it works, then it will be a lot more difficult for the next government to repeal your legislation. And of course, if it doesn’t work, why would you be upset if it’s stopped, because it will save money.”

So they randomized villages across rural Mexico. Basically, the results showed it worked very well and that sort of program has essentially been rolled out across Latin America.

The Progresa evaluation has been hugely influential in evaluation literature.

But if you look at the poverty reduction program in the UK, Sure Start, the last government said an RCT would be unethical because some poor communities would miss out. Yet when they came to do the evaluation, they found that the communities actually selected by civil servants to get Sure Start weren’t as poor as the controls that didn’t get it, which totally undermines any ethical objection.

Sure Start cost a fortune, but we don’t know whether it works or not. The Mexicans can do it, so why can’t we?

6. What reasons are most commonly cited for not doing RCTs? You’ve mentioned one view, that it’s unethical to ‘deprive’ some trial participants of the intervention.

There’s quite a lot of hostility to RCTs in the research community in the social sciences, because most social scientists are not very quantitative.

They’ve not been taught anything about randomization. Their view of randomization is that it’s a drug trial. They don’t think that a more complex intervention, like teaching English, can be evaluated in a trial.

So the government is misadvised by quite a few senior academics who don’t understand randomized trials. So that’s a big problem.

They perceive them as being expensive and long-winded. But they’re a lot cheaper than healthcare trials! I was given half a million pounds to see whether we could get rid of verrucas in children, which is a relatively trivial thing. That’s neither here nor there, but the ongoing changes to the national curriculum, which are going to affect all children going to state schools, are not subject to an RCT. One is much more important than the other, yet the less important one gets a randomized trial because it happens to be in health, while the more important one doesn’t.

There are lots of other examples. Take speed cameras, which many lobby against. If we could show with an RCT that speed cameras categorically work, that would shoot down the arguments of the motoring lobby and save lots of lives, but we’ve not done a trial. It’s just ridiculous. You could do loads of trials and get lots of really useful information for relatively low cost.
