Step-wise progress:
- First milestone: two consecutive CONVENIENCE samples above minimum acceptable fidelity.
- Second milestone: two consecutive PURPOSIVE samples above minimum acceptable fidelity.
- Third milestone: a STABLE RUN CHART with a median fidelity above the minimum acceptable value.
- Fourth milestone: an EVALUATIVE study with STABLE CONTROL CHART (project-level measures).
A measure of fidelity of implementation is needed to interpret the result of an evaluative study. Suppose an evaluative study has a negative result. One possibility is that an ineffective change was correctly implemented. Another possibility is that a potentially effective change was not adequately implemented. A measure of fidelity will help determine which possibility is most likely.
Milestones in fidelity monitoring program.
Cycle | Sample | Result | Feedback | Action |
---|---|---|---|---|
1 | Convenience | 2/6=33% | Hard to understand; hard to find | Make form more clear; make form easier to find |
2 | Convenience | 4/8=50% | Hard to understand | Make form more clear |
3 | Convenience | 7/10=70% | Hard to find | Make form easier to find |
4 | Convenience | 8/10=80% | Time-consuming | Make form shorter. First milestone achieved; change to purposive sampling |
5 | Purposive | 0/4=0% | Unaware of the form | Improve awareness |
6 | Purposive | 8/10=80% | Hard to understand; time-consuming | Make form more clear; make form shorter |
7 | Purposive | 9/10=90% | Unaware of the form | Improve awareness. Second milestone achieved; prepare for run charts |
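The first two milestones — two consecutive samples at or above the minimum acceptable fidelity — can be checked mechanically. A minimal sketch in Python, using the cycle data from the table and the suggested 70% minimum:

```python
# Detect the cycle at which two consecutive samples first meet the minimum
# acceptable fidelity. Cycle data are (successes, sample size) pairs taken
# from the table above; the 70% threshold is the text's suggested minimum.

MIN_FIDELITY = 0.70

def milestone_reached(results, threshold=MIN_FIDELITY):
    """Return the 1-based cycle at which a second consecutive sample meets
    the threshold, or None if the milestone has not yet been reached."""
    for i in range(1, len(results)):
        (ok1, n1), (ok2, n2) = results[i - 1], results[i]
        if ok1 / n1 >= threshold and ok2 / n2 >= threshold:
            return i + 1
    return None

convenience = [(2, 6), (4, 8), (7, 10), (8, 10)]  # cycles 1-4
purposive = [(0, 4), (8, 10), (9, 10)]            # cycles 5-7

print(milestone_reached(convenience))  # 4 (cycles 3 and 4: 70%, 80%)
print(milestone_reached(purposive))    # 3 (cycles 6 and 7: 80%, 90%)
```

With the table's data, the first milestone arrives at cycle 4 and the second at the third purposive sample (cycle 7 overall), matching the table's notes.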
Measuring fidelity of implementation
- Choose fidelity measures based on change theory.
- PDSA-level fidelity measures
- Project-level fidelity measures
- Establish a minimum acceptable fidelity for each measure. We suggest a minimum acceptable fidelity of 70%, although this threshold is admittedly arbitrary. If fidelity is less than 70%, then the effect of any change will be attenuated, broader dissemination will be difficult to achieve, and the required sample size for evaluation may become prohibitively large (table 1). In that case, a reasonable next step is to identify and ameliorate barriers to implementation, with ongoing monitoring of fidelity.
- Establish a sampling strategy.
- Obtain just enough data to guide next steps.
- Make full use of local subject matter expertise in selecting the most appropriate samples.
Data quality is important in small samples.
The five important steps to data quality are to (1) define the eligible sample, (2) establish exclusion criteria, (3) state the study period for each cycle/sample, (4) keep a reject log and (5) ensure complete data collection. Aim to enrol consecutive eligible patients. Random sampling is ideal but usually not practical.
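As an illustration, the five data-quality steps can be made concrete in code. Everything below — the field names, the study period, the exclusion rule — is hypothetical:

```python
# Hypothetical sketch of the five data-quality steps for one cycle: a defined
# eligible sample, an explicit exclusion criterion, a stated study period, a
# reject log, and complete data collection on everyone who remains.

from datetime import date

STUDY_PERIOD = (date(2024, 1, 1), date(2024, 1, 14))  # assumed cycle dates

def screen(patients):
    """Screen consecutive patients; return (sample, reject_log)."""
    sample, reject_log = [], []
    for p in patients:
        if not STUDY_PERIOD[0] <= p["admitted"] <= STUDY_PERIOD[1]:
            reject_log.append((p["id"], "outside study period"))
        elif not p["eligible"]:                 # defined eligibility
            reject_log.append((p["id"], "not eligible"))
        elif p["transferred"]:                  # an example exclusion criterion
            reject_log.append((p["id"], "excluded: transferred in"))
        else:
            sample.append(p)                    # collect complete data on these
    return sample, reject_log
```

The reject log makes the denominator auditable: every consecutive patient is accounted for either in the sample or with a stated reason for rejection.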
- Choose a practical sample size.
Assume that resources for measurement are constrained. Suppose that reality dictates that a maximum of 10 patients per cycle can be sampled. The minimum acceptable fidelity is 70%, so there must be at least 7/10 successes in every cycle. If there are four failures in a cycle, then the cycle cannot achieve 7/10 successes. The cycle can be stopped, the failures can be studied qualitatively, and necessary adjustments can be made.
- Create run charts.
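The stopping rule just described — a capped cycle of 10 patients, a 7/10 minimum, stop at the fourth failure — can be sketched in Python; the outcome sequence below is hypothetical:

```python
# Early-stopping check for a capped cycle: with at most 10 patients and a
# minimum of 7 successes, a fourth failure makes 7/10 unreachable, so the
# cycle can stop and the failures can be studied qualitatively.

def review_cycle(outcomes, max_n=10, min_successes=7):
    """Process outcomes (True = success) in order; return
    (successes, failures, stopped_early)."""
    max_failures = max_n - min_successes  # up to 3 failures are tolerable
    successes = failures = 0
    for ok in outcomes[:max_n]:
        if ok:
            successes += 1
        else:
            failures += 1
            if failures > max_failures:   # fourth failure: stop the cycle
                return successes, failures, True
    return successes, failures, False

print(review_cycle([True, False, True, False, False, False]))  # (2, 4, True)
```

Stopping at the fourth failure conserves measurement resources while still yielding qualitative material (the failures themselves) to guide the next adjustment.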
A run chart is an efficient method for further enhancing a team's confidence in fidelity of implementation. Provost and Murray [3] recommend that a run chart have 10 data points, with at least 10 observations per data point, and a consistent sampling approach. Do not use the data from step 4 for the run chart, because the change, the implementation method and the sampling approach are all in flux during step 4. The third milestone is a STABLE RUN CHART with a median fidelity above the minimum acceptable value.
Run charts cannot quantify the level of variation in fidelity of implementation. Control charts, using larger samples and longer sampling periods, are needed to quantify that variation. If ample data and resources are available, then control charts could be used earlier in a project. Otherwise, we suggest that a stable run chart with a median value above the minimum acceptable fidelity is an important milestone before undertaking additional dissemination or evaluation.
- Enhance confidence that the change theory is correct.
If implementation is acceptable, then there should be a trend to improvement in the other project-level measures, with no expectation of demonstrating a statistically significant difference.
- If there is an unexpected worsening, then the change theory, the implementation plan and the measurement methods should be re-examined.
References
1. Etchells E, Ho M, Shojania KG. Value of small sample sizes in rapid-cycle quality improvement projects. BMJ Qual Saf 2016;25(3):202-206.
2. Etchells E, Woodcock T. Value of small sample sizes in rapid-cycle quality improvement projects 2: assessing fidelity of implementation for improvement interventions. BMJ Qual Saf 2018;27(1):61-65.
3. Provost LP, Murray SK. The Health Care Data Guide: Learning from Data for Improvement. San Francisco, CA: John Wiley and Sons, 2011: 48-9.
4. Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Qual Saf 2011;20:46-51.
Project-level Measures
Fidelity of implementation (%) | Required sample size for an evaluative study |
---|---|
100 | 100 |
95 | 110 |
90 | 123 |
85 | 139 |
80 | 156 |
75 | 179 |
70 | 204 |
60 | 278 |
50 | 400 |
40 | 625 |
30 | 1,111 |
Plan the sample size of the fourth milestone: an EVALUATIVE study. A sample size estimate requires an estimate of fidelity and estimates of the baseline values for the other project-level measures. For many projects, the change is targeted at providers, but its impact is measured on patients. For example, in the medication reconciliation example, the target of the change is physicians filling out the form, but the downstream impacts (medication errors and preventable adverse drug events) are measured on patients. In such cases, the correlation between the targets of change and the downstream measures is also needed for a sample size estimate (table 2).
It is important to optimise fidelity of implementation before undertaking evaluative studies. Low fidelity of implementation attenuates effect sizes, so a larger sample is needed to detect the effects of the change (table 2). Suppose that the estimated sample size to detect an effect is 100 patients, assuming 100% fidelity of implementation. If the fidelity of implementation is only 70%, then the required sample size to detect the same effect roughly doubles, to 204. If the fidelity of implementation is 70% and only 100 patients are enrolled, then the study is likely to be underpowered and yield a negative result. The corollary is that small improvements in fidelity can reduce the required sample size and increase the chance of demonstrating an effect: improving fidelity from 70% to 80% reduces the required sample size from 204 to 156.
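The fidelity/sample-size relationship can be made explicit: if fidelity f attenuates the effect size by a factor of f, the required sample grows roughly as 1/f². The sketch below assumes this simple attenuation model; it approximately reproduces the table values (rounding may differ by a patient or two at some fidelities):

```python
# Approximate sample-size inflation under imperfect fidelity: a study sized
# at base_n for 100% fidelity needs roughly base_n / fidelity**2 patients
# when the effect is attenuated proportionally to fidelity. This is a
# simplified model, not the exact calculation behind the table.

def required_sample(base_n=100, fidelity=1.0):
    """Sample size needed when a base_n study assumed 100% fidelity."""
    return round(base_n / fidelity ** 2)

for f in (1.0, 0.8, 0.7, 0.5):
    print(f"{f:.0%}: {required_sample(fidelity=f)}")
# 100%: 100, 80%: 156, 70%: 204, 50%: 400
```

The quadratic growth explains why modest gains in fidelity pay off so well: moving from 70% to 80% fidelity cuts the required sample by roughly a quarter.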