The intended output of PDSA is learning and informed action. A successful PDSA process:
- may enable users to achieve their QI goals more efficiently, or to reach QI goals they would otherwise not have achieved;
- may save wasted effort by revealing QI goals that cannot be achieved under realistic constraints, or may identify new problems to tackle instead of the originally identified issue;
- does NOT promise that users will achieve their desired outcome (quality improvement).
In healthcare, PDSA training often overemphasises the conceptual simplicity of the framework. This frequently leads to:
- teams leaping into PDSA with insufficient prior investigation and framing of the problem;
- management of the process being delegated to frontline staff who have little influence over the broader systemic issues that need to be addressed;
- these staff being given little support to overcome the obstacles and barriers they face.
The resources, skills and expertise required to apply PDSA in the real world are often significantly underestimated, leading to projects that are destined to fail.
| PDSA stage | Key failure modes | Potential consequences |
|---|---|---|
| **Investigation and problem framing**: define the problem; determine its causes/contributing factors; identify stakeholders; set the criteria for success | Poor definition of the problem and its causes/contributing factors | Time, money and goodwill may be wasted trying to solve the wrong problem, or to solve it in the wrong way |
| | Failure to clearly define the criteria for success and how performance will be measured | A poor match between the design of the intervention and its intended impact; inability to assess success during the Study phase |
| | Failure to identify key stakeholders | Important knowledge may be left out of the planning process |
| **Plan**: design an intervention and data collection plan; specify how the intervention will be implemented (Do), evaluated (Study) and sustained (if successful) | No theory of change/programme theory connecting the intervention to its intended outcomes | Poorly targeted interventions that may be inefficient or may fail altogether; poor buy-in due to a perceived lack of legitimacy |
| | Planned intervention, implementation plan and study protocol that are not in proportion to one another or to the problem to be solved | Underinvestment, leading to projects that do not achieve their goals or cannot be shown to have achieved them; overinvestment, leading to wasted resources |
| | Designing a data collection and analysis plan that cannot provide the required answers | Impossible to know whether the intervention was effective; excessive PDSA cycles required; aggravation among frontline staff that the administrative burden of data collection was wasted |
| | Not consulting key stakeholders during the planning stage | Proceeding with an intervention that is predictably doomed to fail; disengagement among frontline staff |
| | Not planning for the "who, what, where, when and how" of implementation (the Do phase) | Poor understanding of resource requirements and cost-effectiveness; poor execution of the Do and Study phases |
| | Adopting weak interventions (eg, administrative controls such as training and policies) without considering more robust options | Interventions that do not achieve their goals or do not sustain them |
| | Not assessing cultural and structural barriers/facilitators related to the intervention | "Fish out of water" interventions put in place without attention to the broader changes required to make them successful; systemic issues left untackled, with only superficial changes attempted |
| | Failure to plan for how the intervention will be sustained in practice, if successful [1] | Performance reverts to previous standards; staff frustrated with the unsuccessful change effort disengage from future attempts |
| | Failure to consider the intervention's failure modes and potential side effects (positive and negative) | Interventions that are designed to fail or that create more problems than they solve; failure to select the most cost-effective solutions |
| **Do**: implement the plan (including both the QI intervention and the data collection plan) | Failure to implement the QI intervention as intended | Impossible to learn whether the planned QI intervention works as expected; wasted effort; disillusionment among staff involved in designing the intervention |
| | Failure to collect the data as intended | Undercuts the Study phase; may be difficult or impossible to tell whether the intervention worked as expected; difficult or impossible to learn about the effectiveness of the original data collection plan |
| | Failure to capture unanticipated learning | Missed learning opportunities (especially qualitative learning about how and why the intervention did or did not work); project failure; unnecessary PDSA cycles |
| | Failure to abandon the Do phase despite manifest failure or severe negative side effects | Wasted effort; excessive disruption; adverse outcomes from side effects |
| **Study**: analyse the data and compare results against the definition of success; distil and communicate what has been learned from the formal data analysis and from unanticipated learning | Failure to conduct a study, or inappropriate failure to follow the study plan | No or limited opportunity to learn whether the intervention works as intended; potential for biased and misleading results |
| | Failure to communicate what has been learned | Loss of stakeholder engagement; reinventing the same broken wheel in other QI projects; loss of institutional knowledge if there is turnover among project leaders |
| **Act**: based on what has been learned, either adapt the intervention and begin another cycle, adopt it (implement and sustain it), or abandon it | Failure to engage in "double-loop learning" that questions the goals of the project in light of what has been learned | Wasted effort continuing to work on the wrong problem, or on one that cannot realistically be solved; excessive PDSA cycles spent pursuing a goal set too high, when a more realistic goal would deliver real improvement |
| | Moving too quickly from small-scale tests of change to full-scale implementation and sustainment | Failure to uncover barriers to broader use before implementation; project failure; disruption associated with deimplementation; wasted resources and goodwill |
References
- Reed JE, Card AJ. The problem with Plan-Do-Study-Act cycles. BMJ Qual Saf 2016;25:147-152.
- Leis JA, Shojania KG. A primer on PDSA: executing plan-do-study-act cycles in practice, not just in name. BMJ Qual Saf 2017;26:572-577.