Weird science: the role of natural experiments in the public sector

28 Jan 22

A revolution in economic research methodology has been rewarded by Nobel Prize judges. How could work on ‘natural experiments’ help the public sector?  

 

Everyone strives to make good decisions, not least in the public sector, where financial health often relies on actions working as intended. In order to make the right choices, we need to understand the consequences of decisions made in the past – how did an action or policy change people’s lives? But determining the relationship between cause and effect can be near impossible without a crystal ball through which to see what would have happened had the action not been taken or the policy not introduced.

Academics have traditionally attempted to address such questions with theory, while researchers have often turned to randomised experiments, in which people are randomly assigned to two or more groups, including a control group, so that differences in outcomes between them can be observed. However, another method that has become more prevalent in recent decades is the so-called ‘natural experiment’ – a study of measurable variations between groups of people whose experiences were not set by the researcher, but are already recorded in pre-existing data.

In October, three of the pioneers of natural experiments in the field of economics, David Card, Joshua Angrist and Guido Imbens, received the Nobel Prize in economic sciences. Their work, as well as that of the late Alan Krueger, showed that these experiments – long used in the study of health – can also tell us a lot about causation in economics. But what are the lessons for public sector policymakers?

Causal effects

Natural experiments can enable the study of many economic and societal phenomena and policies where randomised controlled trials are often impossible, says Arthur Turrell, deputy director for research and capability at the UK’s Office for National Statistics Data Science Campus. Randomised trials can be “impractical, unethical or expensive”, he says. “If we can find a suitable natural experiment, methods such as synthetic control, difference-in-differences and regression discontinuity provide a way to construct counterfactuals – to ask what would have happened in the absence of a policy change. This allows us to estimate the all-important causal effect of a change.”
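
For readers who want to see the mechanics, the short Python sketch below simulates one of the methods Turrell mentions – a regression discontinuity design – using entirely made-up data. The eligibility score, the cutoff and the size of the effect are invented for illustration and do not come from any real analysis.

```python
# Illustrative regression discontinuity sketch with made-up data.
# Units whose 'score' is at or above an arbitrary cutoff receive a
# hypothetical intervention; the jump in outcomes at the cutoff is the
# estimated local causal effect. Nothing here comes from a real study.
import numpy as np

rng = np.random.default_rng(0)

n = 2_000
cutoff = 50.0
score = rng.uniform(0, 100, n)              # running variable, e.g. an eligibility score
treated = (score >= cutoff).astype(float)   # intervention assigned by the cutoff rule

# Simulated outcome: a smooth trend in the score, a true jump of 5.0 at
# the cutoff, and some noise. In a real study the jump is unknown.
outcome = 20.0 + 0.3 * score + 5.0 * treated + rng.normal(0.0, 2.0, n)

# Fit a separate straight line on each side of the cutoff, within a bandwidth.
bandwidth = 10.0
left = (score >= cutoff - bandwidth) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + bandwidth)
left_fit = np.polyfit(score[left], outcome[left], 1)
right_fit = np.polyfit(score[right], outcome[right], 1)

# The gap between the two fitted lines at the cutoff estimates the effect.
effect = np.polyval(right_fit, cutoff) - np.polyval(left_fit, cutoff)
print(f"Estimated effect at the cutoff: {effect:.2f} (true value: 5.00)")
```

The idea is that people just either side of the cutoff are similar in every respect except the intervention, so the jump in outcomes at the threshold can be read as a causal effect.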

From an ethical point of view, natural experiments are better suited than randomised trials to many policy areas because of the nature of interventions being studied, according to Vicki Sellick, chief partnership officer at UK policy and research organisation Nesta. While randomised trials might be useful for determining the effects of new medication, for instance, it might be unfair to deliberately withhold from a control group something that is likely to confer life advantages.

“There is an ethical challenge in research: when is it right to run experiments and when is it not?” she says. “And natural experiments are really helpful. It might be unethical to offer a set of schoolchildren more help or more schooling, because you have to have a control group that misses out.” For example, it could be argued that while it would not be right to force people in a study to drop out of school or university, people who left education of their own volition can ethically be studied as a group in a natural experiment.

The work of Card, Imbens and Angrist became influential in the early 1990s, and natural experiments grew in prevalence and esteem. Economics’ subsequent ‘credibility revolution’, as it is often termed (Angrist co-authored a 2010 journal article crediting the transformation with “taking the con out of econometrics”), has made economic study more empirical, says Turrell.

“It is called a revolution for a reason,” he adds. “It has completely changed how economists think about key areas of economics.” He gives two examples – economists believed that immigration lowers wages and that minimum wages increase unemployment, until studies by Card in the 1990s challenged both assumptions.

Two-step process

Imbens’ and Angrist’s conclusions then explained exactly what cause-and-effect implications can be drawn from natural experiments. Put simply, a natural experiment can be thought of as randomly dividing people into the ‘treatment’ and ‘control’ groups. A two-step process can estimate the effect of the programme or policy under investigation: first, work out how the probability of participation is affected by the experiment; then take this into account when evaluating the effect of the intervention.
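
As a rough illustration of that two-step logic, the Python simulation below uses a toy setup: a hypothetical ‘nudge’ that is as good as random shifts participation in a programme whose true effect is set to 3.0. None of the names or numbers are drawn from the laureates’ work.

```python
# Toy simulation of the two-step logic described above. A binary 'nudge'
# that is as good as random (the natural experiment) shifts the chance of
# taking part in a programme whose true effect on the outcome is 3.0.
# Every name and number here is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

nudged = rng.integers(0, 2, n)          # exposed to the natural experiment or not
ability = rng.normal(0.0, 1.0, n)       # unobserved factor driving both choices and outcomes

# Participation depends on the nudge AND on the unobserved factor, so a
# naive comparison of participants with non-participants is biased.
take_up_prob = 0.3 + 0.4 * nudged + 0.1 * np.clip(ability, -2, 2)
participated = (rng.uniform(0, 1, n) < take_up_prob).astype(float)

outcome = 10.0 + 3.0 * participated + 2.0 * ability + rng.normal(0.0, 1.0, n)

# Step 1: how much does the experiment change the probability of participation?
first_stage = participated[nudged == 1].mean() - participated[nudged == 0].mean()

# Step 2: the experiment's effect on the outcome, rescaled by step 1.
reduced_form = outcome[nudged == 1].mean() - outcome[nudged == 0].mean()
two_step_estimate = reduced_form / first_stage

naive = outcome[participated == 1].mean() - outcome[participated == 0].mean()
print(f"Naive comparison: {naive:.2f}")
print(f"Two-step estimate: {two_step_estimate:.2f} (true effect: 3.00)")
```

Because the unobserved factor pushes people both towards participating and towards better outcomes, the naive comparison overstates the effect, while the two-step estimate lands close to the true value.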

The researchers developed the ‘local average treatment effect’, which estimates the effect of an intervention on the people whose behaviour was actually changed by the experiment. Some people might not have had their behaviour changed – they might have stayed in school, attended a service or chosen to eat in a certain way, for example, regardless of the policy under investigation. Researchers will know the numbers of people who did or did not act in a certain way, but they will not know their reasons for doing so. Therefore they cannot know which individuals’ behaviour was changed by the experiment, and thus affected by the intervention. The local average treatment effect estimates the average effect for that group, allowing researchers to draw conclusions.
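
To make that concrete, here is a purely hypothetical back-of-the-envelope calculation: the raw gap in outcomes between the exposed and unexposed groups is divided by the change in take-up, so the resulting estimate applies only to the people whose behaviour actually changed. All figures are invented.

```python
# Back-of-the-envelope local average treatment effect. All figures are
# invented for illustration; none come from a real study.
exposed_take_up   = 0.65   # share participating among those exposed to the natural experiment
unexposed_take_up = 0.40   # share participating among those not exposed
exposed_outcome   = 54.0   # average outcome (say, a test score) in the exposed group
unexposed_outcome = 52.0   # average outcome in the unexposed group

compliers_share = exposed_take_up - unexposed_take_up   # ~0.25: the people whose behaviour changed
overall_gap = exposed_outcome - unexposed_outcome       # 2.0: diluted by everyone whose behaviour did not change

late = overall_gap / compliers_share
print(f"Estimated effect for the ~{compliers_share:.0%} whose behaviour changed: {late:.1f}")
# prints: Estimated effect for the ~25% whose behaviour changed: 8.0
```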

“Imbens’, Angrist’s and Card’s contribution is great and I have been waiting for them to win the Nobel Prize every year,” Turrell says. “This is very well deserved. Their work allows us to bring data to understanding some of the really really big questions in economics. We do not have to guess with a theorem, which is still important to map out ideas, but we can then go to empirical analysis and get the real answer – at least for that particular time in those particular circumstances.”

Public spending

“Real-world observations can be extremely helpful in determining the impact of public spending,” says Richard Lloyd-Bithell, senior policy and technical manager at CIPFA. “At a macro level, natural experiments have been used to look at differences in economic or fiscal policy between jurisdictions – differential income tax rates between Scotland and England is a prime example from the UK. However, they can also be helpful in determining changes in demand for – or use of – public services, and the most effective use of the public pound.”

Lloyd-Bithell says public health policy interventions, such as food labelling, advertising, smoking bans and targeted taxes, are often evaluated this way. “Such natural experiments or ‘nudge’ approaches can be helpful in terms of providing evidence for more ‘downstream’ interventions, which can affect public spending through demand for public services and improving value for money, as prevention is often more cost-effective than treatment,” he explains.




Natural experiments are also becoming more prevalent in other areas of policy study, according to Heikki Hiilamo, professor of social policy at the University of Helsinki in Finland. He says that a “new kind of thinking” in policymaking – putting more emphasis on evidence – is emerging, albeit slowly. “I see some positive developments and some more educated policymakers,” he says. “Recent studies have shown that pre-existing assumptions do not always hold. People do not behave the way they are expected to, and this Nobel Prize has promoted that kind of thinking.”

However, the political cycle restrains research, according to Hiilamo. “When we have governments in office for terms of four or five years, they want to get results before their mandate is over,” he says. “These studies are often helpful not for the present government but the next one, and the current government does not know who that is going to be, or whether they want to help them.”

Politicians and researchers need to work together better to ensure that our improved understanding of causality translates into real benefits for society, says Diane Coyle, Bennett professor of public policy at the University of Cambridge. “Policymakers do not always take academic research findings on board,” she says. “The research might run against their beliefs or what they promised in a manifesto.”

“There is also a danger of academics over-generalising or placing more weight on the research than it can bear in real life, when the context might be different or there are other complications,” Coyle says. “Academics need to have a sophisticated understanding of political context and politicians, while officials need to treat research more seriously.”

More access to data

To accelerate the wider use of natural experiments and evidence-based policymaking, public servants need more access to the data that exists around them, according to Sellick. “Potential natural experiments are everywhere, because the point is that the dataset often already exists,” she says. “Making use of it is partly about knowing it is there in the first place, partly about data analytics skills in the public sector, and partly about ensuring they have access to data not just in their organisation but from other services, civil society and the private sector as well.”

“The question to ask is ‘What next?’” says Turrell. “I think that is going to come at the intersection between data science and economics. The emerging field of causal machine learning is exciting, because machines are very good at predicting and can process a lot of data, so I think our understanding of causal relationships can only grow.”

Whatever comes next, the contribution of Card, Imbens and Angrist to our understanding of causality is undeniable, and the Nobel Prize win cements that legacy. Striving for better decision-making should be at the heart of a forward-looking public sector, and while we must acknowledge the real-life difficulties and pressures that research cannot account for, basing government decisions on ever-more-solid evidence is surely a good thing for the people that policies aim to help.


Research findings

Minimum wage

Alan Krueger and David Card’s work on the minimum wage challenged the economic orthodoxy that increasing wages for the lowest-paid workers reduces employment. The accepted thinking at the time was that employers would cut jobs when forced to pay their staff more. The pair looked at the impact of New Jersey raising its minimum wage from $4.25 per hour to $5.05 in April 1992, surveying 410 fast-food restaurants both there and in Pennsylvania, where the minimum wage was unchanged. They also looked at changing employment levels in New Jersey restaurants that were already paying wages above $5 and compared them with those in restaurants that were directly affected by the increase. They found no indication that the rise in the minimum wage reduced employment in either case. In fact, they even saw a small increase in employment in New Jersey’s fast-food restaurants, but concluded it was not statistically significant.
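
The comparison is, in essence, a difference-in-differences calculation: the change in the comparison state stands in for what would have happened in the state that raised its wage. The Python sketch below shows the arithmetic with invented employment figures – deliberately not the numbers reported by Card and Krueger.

```python
# Difference-in-differences arithmetic for a two-state, two-period design.
# The employment figures below are invented for illustration and are NOT
# the numbers reported by Card and Krueger.
nj_before, nj_after = 20.0, 20.5   # average employees per restaurant in the state that raised its wage
pa_before, pa_after = 23.0, 22.0   # average employees per restaurant in the comparison state

nj_change = nj_after - nj_before   # raw change where the minimum wage rose: +0.5
pa_change = pa_after - pa_before   # change where it did not, used as the counterfactual trend: -1.0

did_estimate = nj_change - pa_change   # the change attributed to the minimum-wage rise
print(f"Difference-in-differences estimate: {did_estimate:+.1f} employees per restaurant")
# prints: Difference-in-differences estimate: +1.5 employees per restaurant
```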




Immigration

Card also studied the effect of immigration on the labour market – which is difficult, because immigrants are likely to choose areas with employment opportunities, so simply comparing regions with high and low immigration cannot provide evidence of a causal relationship. But a unique event provided the opportunity for a natural experiment, when in 1980 Fidel Castro allowed Cubans who wished to emigrate to do so, leading to 125,000 people leaving in five months, with many moving to Miami, Florida. Card looked at wage and employment trends at the time in Miami and four other cities, finding no negative effect on Miami residents with low levels of education – the demographic believed to be at risk from high immigration.
