Dr Kate Lyle is a Research Fellow in Health Sciences at the University of Southampton and Professor Catherine Pope is Deputy Director of NIHR CLAHRC Wessex
As academics, practitioners, and users of healthcare services, we are all used to hearing about examples of successful interventions that have improved health services and care. Indeed, one of the core aims of the NIHR CLAHRC programme is to improve patient outcomes locally and across the wider NHS. Here in Wessex we have been working hard over the past five years to do just that, spreading best practice and evidence-based research throughout the NHS.
But what about the things that don’t work? Attempts at service improvement or innovation that went nowhere? Often these are the things we don’t hear about. Yet, arguably, there is as much to learn from our failures as there is from successful innovations.
Learning from failure is a well-established practice in other industries, notably aviation, a sector which constantly strives to use learning from mistakes and critical incidents to improve safety. There is also a growing movement to publicise and learn from clinical trials, so that our evidence base includes all healthcare research, even trials where a drug or intervention is shown to have no effect.
Researchers can do more to learn from failures in healthcare research. This means sharing our unsuccessful attempts to change practice as well as the positive outcomes.
Here we examine our own experiences of trying to improve patient care and safety in a hospital setting using a patient-administered checklist. While we had high hopes for this intervention, as you will see, it did not change practice in the way we expected. Nonetheless, we learnt a lot about these kinds of interventions, which will better equip us to approach quality improvement in the future.
The Patient-administered checklist project
A small group of clinicians and sociologists embarked on a project, funded by the Health Foundation, to develop and implement a patient-administered checklist. The project was grounded in practice and previous research. We were inspired by the work of Atul Gawande and his famous book The Checklist Manifesto, which encourages clinicians to use checklists to improve the safety of their work. We were also aware that the evidence base suggested checklists could have positive effects, but that these varied across different healthcare settings. Our checklist was conceived as a tool both to address clinicians’ concerns about safety and errors in practice and to improve patients’ experience of hospital care. It contained a series of questions about patient care and the things that should happen during a hospital stay. We co-designed the checklist questions and format with patients and staff to try to ensure that it was relevant and accessible. We hoped that it would facilitate better communication between patients and staff, provide a medium for patients to raise any concerns they might have, and act as a prompt for staff to address these before the patient was discharged.
We tested the checklist in two different hospital areas (children’s inpatients and the emergency department) over six weeks, and collected a wealth of information in our process evaluation about how the checklist was used and what staff and patients thought about the idea. When we analysed our results we realised that the checklist just didn’t work in practice. So let’s look at what went wrong.
What went wrong?
Over 2,000 eligible patients were seen in the emergency department, and a smaller number, 245 eligible inpatients, were admitted to the children’s wards, but we only managed to ensure that between 15% and 40% of the checklists were given out. The problem with implementing the checklist was not resistance to improvement or change. Staff in both departments were convinced that the care and services provided could and should be improved. But, as we discovered, the checklist was not seen as useful in their settings. There were three main reasons for this:
They didn’t ‘get’ the checklist
By talking to staff about their experiences of working with the checklist we learned that its purpose was not well understood. For example, many nurses saw the checklist as an auditing tool, thinking that the data collected would be analysed by somebody else at a later date. They did not value or use the checklist as a prompt for ‘real-time feedback’ and action. Linked to this, some nurses perceived the tool as a “way of checking up on us” and felt that it had been imposed on them by management and senior doctors, who perhaps did not understand how nurses work.
Impact of other interventions
At any one time there are a huge number of changes being made to health services and how care is delivered. These can be linked to government policy, new advances in care and science, or local strategic decisions. This means that research and improvement projects often take place in crowded change environments. Our checklist had to compete with a number of other interventions and practice changes. There was a constant flow of new interventions being tested, and lots of competing priorities for staff and patients’ time. This had a significant impact on the way staff viewed the intervention: some staff said that the checklist was ‘just another piece of paper that we have to give out’.
Relevance to patients
Staff struggled to see how the checklist met the needs of their patients. They offered various explanations as to why ‘their patients’ should be exempt from using the tool. Some argued that patients were not willing to complete the forms, that it was too much work for people who were ill, or that they did not need prompts to feed back concerns about care. This was corroborated in our interviews with some patients, who also said that they did not want to complete the checklist because it felt like they were ‘checking up’ on the nurses.
In practice the checklist was not acceptable to patients or staff, and after the six-week trial we concluded that it should not be used in its current form. Nevertheless, there are some valuable lessons from our experience.
What can we learn from this?
There are two insights that may be valuable for others involved in service or quality improvement:
- Process evaluations are vital to understand how an intervention works in practice
Without the process evaluation of the six-week trial we would have a very different understanding of our checklist project. Had we only counted the number of completed forms, we could have thought that the checklist was quite successful: there were few negative comments, and patients had ticked boxes to indicate that all their concerns were addressed. By observing care practices, and the ways the forms were completed, we could see that they were often hastily filled in after the consultation, and not used to prompt feedback during patient–staff interactions. Often the form was placed in a bag or out of reach during consultations, or given to the patient just before they left. When we interviewed patients they mentioned that they felt inhibited in raising concerns with staff. Similarly, when we spoke to staff we were able to build up a better picture of the relationships between different professions, and in particular the need for nurses to feel ownership of changes. From this we concluded that although the logic behind the checklist was sound, and the idea was positively received by patients and staff during the co-design phase, in practice neither group used it as planned.
- Awareness of the impact of competing interventions
We had not fully appreciated the sheer number of other interventions being implemented at the same time as our checklist. These included national and local data collection as well as new practices. At one point there was a CQC inspection in one of our settings and all the checklist forms were ‘tidied away’, so none were handed out during the inspection. Other interventions competed for the attention and commitment of staff and led to ‘intervention fatigue’. Before attempting to implement an intervention in similar contexts, it is worth finding out about other changes and planned projects to assess where goals, resources, and data collection overlap or compete. This information could be used to adjust the roll-out and/or timing of a new intervention.
While the checklist itself was not a success, there was important learning in our chosen settings. Our observations on the children’s wards led to additional work by trainee doctors to improve the admissions process and paperwork. We were also able to explain nurses’ concerns about top-down improvements and practice changes imposed by doctors or managers, and to use this to encourage closer collaborative working. As researchers we learnt a valuable lesson: great ideas still need to be tested in the chosen setting, and we encourage others to do the same. We hope that researchers and quality improvers might share more details about projects that don’t work. After all, in the words of Henry Ford, “the only real mistake is the one from which we learn nothing”.
Project team: Dr Mike Clancy, Dr Kate Pryde, Professor Catherine Pope, Dr Kate Lyle, Dr Ursula Rolfe, Dr Sarah Robinson, Cheryl Davis, Marion Lynch, Professor Steve Goodacre, Professor Rob Crouch.
The Checklist project was funded by a Health Foundation Innovating for Improvement grant. We are grateful to all the patients and staff at the participating hospitals.