What a Good Solution Feels Like
Intervention ideas are often scored for ‘impact’ and ‘feasibility’. This can be helpful for choosing between existing ideas, but it’s not much help when designing new ones. ‘Just make it more impactful and more feasible’ is not useful advice.
Design is about working with the tension between conflicting requirements. Designing an aeroplane is (crudely) about making it fly fast and be safe, but also carry a lot of stuff, and be able to fly a long way, and be easy to handle, and not be too expensive to build or operate, and so on. Becoming a good designer means learning how these trade-offs interact and how to make the best of them.
A similar dynamic is at play when it comes to designing interventions to change behaviour. But it is a bit less obvious what the trade-offs and tensions are that need to be managed. And when it comes to words like 'impact' and 'feasibility', it's easy to get lost in abstractions.
We find that there are a few specific tensions in intervention design that it's worth meditating on. As you're looking at your problem, you can ask where along each spectrum you might want to sit.
1. Feasibility vs. Safety
Feasibility is a question of 'can we do this?'. It's best assessed by trying to write down in exacting detail every step involved in delivering an intervention, down to who will click send on the email or seal shut the envelopes for the letter you're sending. Once you've done that, you can start putting $ costs against the different parts. But often just working out the mechanics is the hard part. Reality has a surprising amount of detail.
By contrast, safety is a question of 'should we do this?' Here's a quote from the great decision theorist Baruch Fischhoff, talking to a U.S. Army audience about his years working on defence-related challenges:
"Through these experiences, I've learned enough not to be dangerous. In the sense of really just how complicated your problems are, everybody ought to know how fateful they are, and to realise you can't give any advice without an extended dialogue."
It's easy as an outsider to do more harm than good. A lot of what we're doing on the 'feasibility' side of the equation - getting funding, getting executives’ buy-in, dealing with stakeholders on the ground, working through technical and legal constraints - is switching off the various alarm systems and failsafe mechanisms that protect systems and organisations from bad ideas. We had better know what we’re doing!
So balancing feasibility and safety is our first trade-off. We want to be persistent and ingenious about finding practical ways forward on the one hand, while on the other being conscious of the complexity of the ground we're treading on, and highly motivated to learn enough 'not to be dangerous'.
2. Impact vs. Simplicity
‘Impact’ is ironically often spoken about in a passive way, as something inherent to an intervention. But impact is about crashing into things and making stuff happen. We should embrace the kinetic metaphor. This is the gunpowder in our firework (intervention) which makes it go bang.
We can get too enamoured with the 'behavioural scientist as judo master' ideal, prizing small changes that make big differences. This is really the exception, not the rule. In general, the bigger the changes and the more of them - the bigger the bang - the larger the difference you can expect. Unless very cunningly designed, an intervention that doesn't in some way feel confronting is unlikely to shift behaviour. A sign that you have done this right is that you feel slightly nervous about how it's going to be received.
But piling on more and more bang - 'throwing the kitchen sink at it' - doesn't always make for a better answer. This is because simplicity is also a cardinal virtue of solutions design. There are at least three advantages of simplicity in a behavioural context:
- From a user perspective, making things easy and navigable is often best served by simplicity.
- From a systems and scaling perspective, simple systems work better. Remember Gall’s law: “A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.”
- From a project management point of view, simplicity minimises the number of failure points. Simple solutions are easier to explain to stakeholders and more likely to survive the bumpy road to implementation.
So we want an impactful solution with a lot of bang, but we want to be careful about adding a million elements to our intervention that make it complicated and unwieldy. When we look at top-tier behavioural interventions like changing the pensions default in the UK, it's often this trade-off that they nail: in that case, a hugely impactful change that knocked millions of people onto a different and better life course (saving properly for retirement), but simple to explain and simple for the user (if not for those working in the background to make it happen).
A final point to note: a big, impactful change can be about taking something away, rather than adding something new. As Leidy Klotz has shown, these 'subtractive solutions' get systematically ignored. Often, better than changing a behaviour is reducing the need for it in the first place, requiring your audience to do less or read less. For example, many businesses' marketing relies on emails, and they want as many people as possible to open and click on them. It's tempting to intervene by adding stuff: more testimonials, more explanations of product features, more fancy styling, more glossy pictures. But the opposite strategy, aggressively cutting the email back to a simple core message, will often work better.
3. Specificity vs. Durability
If you were looking for a slogan that sums up a lot of social psychology research, 'context matters' might be a good choice. An intervention can work in one place but not in another. Our solutions need to be tailored to our situation.
But at the same time, great behavioural solutions have a universality about them. Richard Thaler's classic 'Save More Tomorrow' nudge, where people commit now to saving more when they have pay rises in the future, works because it responds to a general human bias for the present at the expense of the future. That core insight is durable: even in quite different contexts, in other countries with different pensions systems, it will still be relevant.
So we need to balance specificity and durability: bespoke, carefully crafted solutions that leverage enduring principles of psychology.
4. Technical vs. Empathetic
Our final tension is between behavioural science as a technical discipline, rooted in statistics and experiment, and behavioural science as systematic common sense.
We first want our solutions to be scientifically well-grounded. If we're adapting an idea from the academic literature, do we understand the theory behind it and the conditions under which it was originally implemented? Was the methodology robust? Have there been replication attempts for the same effect since? How did they fare?
But unlike physics experiments, where we are making claims about the behaviour of quarks, in behavioural science we are making claims about the behaviour of humans. And we happen to be humans. So if an intervention really works, it should be possible to tell a story about why it works that makes intuitive sense to us as humans. We should be able to say, "Ah yes, that might change my behaviour in that situation too". We should be able to empathise with what it's like to be this person and how they construe their situation (even if we have to rotate our perspective a little to do so).
Take Katy Milkman's 'megastudy' looking at what gets people exercising. This is one of the more technically rigorous studies ever run, carefully field-testing dozens of interventions in parallel with a sample of 61,293 people. But if we look at the best-performing intervention - offering participants small incentives to get back to the gym after missing a workout - it's something we can make sense of as humans as well as scientists. Which of us hasn't experienced that temptation to give up on a difficult new habit after the first small setback?
These two perspectives, the scientific and the empathetic, are a great check on each other. A result that we can't empathise with ("Hmm, I wouldn't have done that") is often a sign that something's gone wrong with our analysis: maybe our randomisation quietly failed and our 'effect' isn't real. Similarly, we can tell plausible stories about interventions that don't actually work. Control groups, randomisation, and replication are there to make sure we're not fooling ourselves.
Our trade-offs in summary
Feasibility: You can do this | vs. | Safety: You should do this
Impact: Throwing as much ‘bang’ as you can at the problem | vs. | Simplicity: Not too many moving parts
Specificity: Solving this exact problem in all its detail | vs. | Durability: Finding behavioural insights with truth and relevance that go beyond the current project
Technical: An answer rooted in the science | vs. | Empathetic: An answer directly accessible through common sense and empathy
A good solution makes the most of both sides of each of these trade-offs without leaving much slack. It's a thing under tension: we should be able to hear the fibres creaking a bit.