
What we know about designing an effective improvement intervention (but too often fail to put into practice)
Martin Marshall,1 Debra de Silva,2 Lesley Cruickshank,3 Jenny Shand,2 Li Wei,4 James Anderson2

  1. Department of Primary Care and Population Health, University College London, London, UK
  2. Evidence Centre, London, UK
  3. Essex County Council, Chelmsford, UK
  4. Research Department of Practice and Policy, UCL School of Pharmacy, London, UK

Correspondence to Professor Martin Marshall, Department of Primary Care and Population Health, University College London, London E20 1AS, UK; martin.marshall@ucl.ac.uk


Intervening to change health system performance for the better

It is temptingly easy to treat improvement interventions as if they are drugs—technical, stable and uninfluenced by the environment in which they work. Doing so makes life so much easier for everyone. It allows improvement practitioners to plan their work with a high degree of certainty, funders to be confident that they know what they are buying and evaluators to focus on what really matters—whether or not ‘it’ works.

But of course most people know that life is not as simple as that. Experienced improvers have long recognised that interventions—the specific tools and activities introduced into a healthcare system with the aim of changing its performance for the better1—flex and morph. Clever improvers watch and describe how this happens. Even more clever improvers plan and actively manage the process in a way that optimises the impact of the improvement initiative.

The challenge is that while most improvers (the authors included) appreciate the importance of carefully designing an improvement intervention, they (we) rarely do so in a sufficiently clever way. In this article, we describe our attempts as an experienced team of practitioners, improvers, commissioners and evaluators to design an effective intervention to improve the safety of people living in care homes in England. We highlight how the design of the intervention, as described in the original grant proposal, changed significantly throughout the initiative. We outline how the changes that were made resulted in a more effective intervention but how our failure to design a better intervention from the start reduced the overall impact of the project. Drawing on the rapidly expanding literature in the field and our own experience, we reflect on what we would do differently if we could have our time again.

A practical case study—an initiative to improve the safety of people living in care homes

A growing number of vulnerable older people are living in care homes and are at increased risk of preventable harm. We carried out a safety improvement programme with a linked participatory multimethod evaluation2 in care homes in the south east of England. Ninety homes were recruited in four separate cohorts over a 2-year period. Our aim was to reduce the prevalence of three of the most common safety events in the sector—falls, pressure ulcers and urinary tract infections—and thereby to reduce unnecessary attendances at emergency departments and admissions to hospital.

In the original proposal submitted to the funding body, we described a multifaceted intervention comprising three main elements:

  1. The measurement and benchmarking of (i) the prevalence of the target safety incidents using a nationally designed tool called the NHS Safety Thermometer3 and (ii) rates of emergency department attendances and hospital admissions using routinely collected data.

  2. Training in quality improvement methods provided initially by a team of NHS improvement advisors and then, using a ‘train the trainer’ model, by practitioners working with or in the care homes.

  3. The use of a specially adapted version of the Manchester Patient Safety Framework4 (Marshall M, de Silva D, Cruickshank L, et al. Understanding the safety culture of care homes: insights from the adaptation of a health service safety culture assessment tool for use in the care home sector. Submitted to BMJ Qual Saf, August 2016), a formative assessment tool which provides frontline teams with insights into safety culture.

The intervention was underpinned by a strong emphasis on support and shared learning using communities of practice and online resources facilitated by the improvement team.

The programme theory hypothesised that the three main elements of the intervention (benchmarking, learning improvement skills and cultural awareness) would reduce the prevalence of safety events, that this would lead to a reduction in emergency department attendances and hospital admissions and that both outcomes would reduce system costs as well as improving the quality of care for residents. The intervention was co-designed by improvement researchers in the evaluation team, the improvement team in the local government body responsible for commissioning care home services and a senior manager of one of the participating care homes. The design was influenced by a combination of theory, published empirical evidence and the personal knowledge and experience of the commissioners and care home manager.

We built in a 6-month preparatory period at the start of the programme, prior to implementing the intervention with the first cohort of care homes. This period was used to recruit staff, establish the project infrastructure and build relationships between the care homes and the improvement and evaluation teams. Only when the programme formally started did we begin to expose some of the deficiencies in the planned intervention. Table 1 describes the different components of the intervention, whether each was part of the original plan or introduced at a later stage, and, based on our participatory evaluation, how each was implemented and the extent to which it was used.

Table 1

The original intervention and how it evolved

The evaluation found that four of the nine original components of the intervention were not implemented as planned and two were only partially implemented as planned. Only three of the nine were implemented in line with the original proposal. Five of the six new intervention components, designed and implemented while the initiative was taking place, were fully implemented. Qualitative evaluative data, collected using interviews, surveys and observations, demonstrated changes in the attitudes of frontline staff to safety and changes in their working practices. However, quantitative data suggested only small and variable changes of questionable statistical significance in the prevalence of safety incidents, and no impact on the background rising rates of emergency department attendances and hospital admissions.

Success or failure?

Perhaps we should not be too hard on ourselves. On the surface at least, our intervention was more sophisticated than that seen for most improvement projects.5 The multifaceted intervention had complementary measurement, educational and culture-change elements and was co-designed by a wide group of stakeholders, including a practitioner and experienced improvement science academics. We based the design on a reasonable programme theory and an explicit logic model. We recognised the need to adapt off-the-shelf tools to the local context and to build in a preparatory period prior to formally evaluating the intervention. And we purposefully chose a participatory and formative evaluation model to support a feedback cycle as the initiative progressed.

As a project team, we thought that we had designed the original intervention thoughtfully and carefully, but the findings of our evaluation suggested that we could have done a lot better. Reflecting towards the end of the programme, we considered a number of possible explanations: we did not put enough time and effort into designing the intervention; we designed a sound intervention which was not implemented sufficiently well, or was implemented without an adequate understanding of the context; and we were naïve to expect that an intervention at such an early stage of development would have a significant impact. We then revisited the literature to examine these hypotheses.

What the literature suggests we should have done

There is no shortage of increasingly sophisticated theory, empirical evidence and learned commentary that could have guided our design decisions. Much of the thinking about interventions is relatively new; a state-of-the-art review of improvement published in the Lancet more than 15 years ago made no specific reference to the ways in which interventions morph when applied in practice.6 In contrast, more recent international guidance on designing, conducting and writing up improvement projects highlights the importance of describing how improvement interventions change.7 In brief, a number of themes relating to the design of effective interventions are emerging in the literature.

First, the importance of using theory (‘a chain of reasoning’) to optimise the design and effectiveness of interventions is highlighted.8 A commonsense rather than an overly academic approach to theory is advocated as a way of reducing the risk of ‘magical thinking’, which encourages improvers to use interventions that look superficially attractive but for which the mechanisms of action are unclear.8,9 Alongside the use of theory, there is growing interest in the application of ‘design thinking’, both as a strategy for ensuring that the problem has been clearly identified and as a way of addressing complex problems in rapidly changing environments.10 Second, the literature describes the importance of having an explicit method, such as the Institute for Healthcare Improvement's Model for Improvement using Plan-Do-Study-Act cycles, and of understanding how to use such methods to their full potential.11 Third, there is a growing emphasis on the extent to which improvement interventions are social as well as technical in nature, and on how their effectiveness is a consequence of a complex interaction between people, organisational structures and processes.12,13 Fourth, the literature describes how what people do (the intervention), how they do it (implementation) and the wider environment (context) are interdependent, and some suggest that the traditional differentiation between the elements of this classic triad is no longer helpful.14

Fifth, there is a growing consensus that improvement efforts are being evaluated too early in their development and, as a consequence, are being judged unfairly as ineffective.15,16 Instead, there are calls for interventions to be categorised according to the ‘degree of belief’ that they will work16 and how this belief becomes stronger as a project progresses. Interventions in the early ‘innovation’ phase should be evaluated using different methods from those in the later ‘testing’ or ‘spread’ phases. They may also have a different intent: for example, changes in behaviour may be seen as ‘success’ before measurable changes in outcome are achieved. Sixth, drawing on the expanding field of knowledge mobilisation,17,18 experts are calling for a more active process of co-design of improvement initiatives involving service users, practitioners and improvers, as well as academics, with all of these stakeholders contributing to participatory models of evaluation.19

What would we do differently?

Having reviewed the literature, we concluded that each of the post hoc hypotheses was a reasonable explanation for results that are not uncommon in the field of improvement but were nevertheless disappointing. In future, we will put more effort into designing the intervention from the very start. We will think through the design issues in sufficient detail not only to persuade the funder of the project but also to persuade ourselves that it will work in practice. We will describe a programme theory in greater detail, based on a better understanding of the contextual factors which could affect the feasibility and effectiveness of the initiative, and we will use design thinking to frame the problem rigorously from the start.

We will work through, in more detail and more systematically, how current thinking about intervention design applies to our project. We will build in a similar or even longer preparatory period and will use that period to test and refine the intervention. We will not rely on a single senior care home manager to provide a practitioner view for the original proposal; instead, we will seek a wide range of views from frontline staff and from care home residents in an inclusive and iterative way. We will not assume that the intervention can be implemented as described in the proposal, and we will be more sensitive to the resource constraints under which the improvement team and the care homes are operating.

If we do all of this, the outcome will almost certainly be better.

Final reflections

Improvement initiatives are sometimes planned on the hard high ground, but they are put into effect in the swampy lowlands.20 As we are more than aware, frontline practice is messy. And as we have described in this paper, it is never possible to do things perfectly and good improvers are always learning. But as the improvement movement matures, we are getting to the stage where we could and should be doing better. Improvement needs to be seen as a professional rather than an amateur sport. The importance of understanding that improvement interventions are not like drugs or medical devices, and that flexibility needs to be built into their design and delivery, is incontestable. But it is no longer acceptable to use the need for flexibility as an excuse for a lack of thought and planning. As improvement becomes more rigorous, perhaps improvement practitioners will be able to plan their work with a higher degree of certainty, funders will be more confident that they know what they are buying and evaluators will be able to focus on whether and how ‘it’ works.

References

Footnotes

  • Twitter Follow Martin Marshall at @MarshallProf

  • Contributors MM led the development of the ideas presented in this paper. All authors contributed to the development and writing of the paper.

  • Funding Health Foundation.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
