Hotspotting Lessons—Part 2

Part 1 of this post described the context and conclusions of a randomized trial of the Camden Coalition’s hotspotting intervention. I paid particular attention to regression to the mean as an important feature of the study (“Health Care Hotspotting — A Randomized, Controlled Trial,” New England Journal of Medicine, 382(2), January 9, 2020).

Here’s the paper’s primary conclusion again: 

“In this randomized, controlled trial involving patients with very high use of health care services, readmission rates were not lower among patients randomly assigned to the Coalition’s program than among those who received usual care.” (p. 152)

Trials in Practice:  Defining Treatment and Control Interventions

The hotspotting trial illustrates issues in designing, conducting and interpreting a randomized trial where the treatment involves a social program. 

Randomization averages over all possible confounding factors that might cause outcomes to differ between treatment and control groups, enabling a causal interpretation of observed differences between those groups.
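
As a toy illustration of this point (my own sketch, not from the paper), the simulation below assigns patients to groups at random while an unmeasured “frailty” variable drives readmission risk. Because assignment ignores frailty, the groups are balanced on it, and the observed difference in readmission rates recovers the assumed treatment effect on average. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                       # hypothetical number of patients

# Unmeasured confounder: baseline frailty raises readmission risk.
frailty = rng.normal(size=n)

# Random assignment ignores frailty, so groups are balanced on it.
treat = rng.integers(0, 2, size=n)

true_effect = -0.05              # assumed reduction in readmission probability
p = np.clip(0.4 + 0.1 * frailty + true_effect * treat, 0, 1)
readmit = rng.binomial(1, p)

print("mean frailty, treated vs control:",
      frailty[treat == 1].mean(), frailty[treat == 0].mean())
print("estimated effect:",
      readmit[treat == 1].mean() - readmit[treat == 0].mean())
```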

Randomization does not address one feature of the hotspotting trial: the difficulty the authors describe in defining the treatment and control interventions. Our mental models of experimentation in physical or agricultural systems obscure the challenge of defining interventions in social settings.

We can’t easily apply a study’s results to other settings unless the treatment and control conditions are clearly identified; this extensibility challenge holds for both randomized and observational studies.

In the hotspotting trial, it seems impossible to characterize the treatment received by all patients in the treatment group as a specific intervention.  The authors summarize treatment program metrics for three dimensions of care:  encounters, length of intervention, and timing of service.  There is substantial variation in the program metrics across all three dimensions.  (See Table 2, p. 157).

For the control group, the situation is more extreme.  The authors could not characterize the control condition at all:

“The control group received usual postdischarge care, which may have included home health care services or other forms of outreach.  We were unable to measure the postdischarge services received by the control group.” (p.154)

Vague Characterization of Interventions and Interpretation of the Primary Conclusion

The imprecise characterization of the treatment intervention reflects real-world variation rather than a laboratory definition that could never be seen in practice. Does that imprecision, over the 2014–2016 study period, undermine the conclusion that the treatment has no benefit in terms of 180-day hospitalizations for the patients studied?

The authors’ inability to characterize the control condition is more problematic than the observed variation in the treatment condition. Suppose care has just two dimensions, frequency of interactions and intensity of support, and consider two cases.

[Figure: Hotspotting2.png]

In Case 1, the control group receives interventions that are, on average, lower on both dimensions. No difference in 180-day hospitalization rates means that the additional effort to deliver the treatment group’s level of care yields no benefit on this outcome. When I first read the paper, I had Case 1 in mind.

In Case 2, there is substantial overlap in the treatment and control interventions as experienced by patients, which may represent actual conditions in Camden.  There’s still extra effort to care for treatment group patients; I reach the same conclusion as in Case 1.

In the limiting case of complete overlap between treatment and control interventions, I would expect no difference in group 180-day hospitalization rates or in any other outcome. Patients experience essentially the same care in both groups, and the experiment is uninformative about the effects of the intervention.

You may agree with me that Case 1 or Case 2 seems a more reasonable description of patient care in the study than the limiting case. Nevertheless, note that the authors’ summary of the trial does not exclude the limiting case; any additional assumptions about the difference between treatment and control interventions are external to the trial.
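
A small simulation can make the overlap argument concrete. In this sketch of my own (hypothetical numbers, with a single “care” dimension standing in for frequency and intensity), readmission risk depends only on the care a patient actually receives; as the control group’s delivered care approaches the treatment group’s, the observed difference in readmission rates shrinks toward zero regardless of whether the care itself helps.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000                        # hypothetical patients per arm

def observed_difference(control_mean_care, treat_mean_care=1.0, care_effect=-0.10):
    """Simulate readmission rates when outcomes depend only on care received.

    'care' is a one-dimensional stand-in for frequency/intensity of support;
    all numbers are hypothetical.
    """
    care_c = rng.normal(control_mean_care, 0.3, size=n)
    care_t = rng.normal(treat_mean_care, 0.3, size=n)
    p_c = np.clip(0.5 + care_effect * care_c, 0, 1)
    p_t = np.clip(0.5 + care_effect * care_t, 0, 1)
    return rng.binomial(1, p_t).mean() - rng.binomial(1, p_c).mean()

# Case 1: control receives much less care; Case 2: heavy overlap;
# limiting case: identical care in both arms.
for label, c in [("Case 1", 0.2), ("Case 2", 0.8), ("limiting case", 1.0)]:
    print(label, round(observed_difference(c), 3))
```

Under these assumptions, Case 1 shows a clear difference, Case 2 a smaller one, and the limiting case essentially none. That is the sense in which the trial alone cannot distinguish “the program does not help” from “both groups received similar care.”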

More to Learn

After making a compelling case for the presence of regression to the mean in 180-day hospitalizations, the authors discuss the limitations of their study:

“[The study] was powered to detect whether this care-transition program could achieve reductions in readmissions as compared with similar programs focused on patients with less complex health care needs. However, the trial was not powered to detect smaller reductions that could be clinically meaningful, nor was it powered to analyze effects within specific subgroups, where there could be differential effects. The data did not permit evaluation of potential nontangible benefits such as improved relationships with providers. Nor did the data allow comparison of outpatient care for the treatment and control groups. Usual care in Camden was evolving during the trial period, multiple other care-management programs were starting, and the Coalition was leading a citywide campaign to connect patients with primary care within 7 days after hospital discharge.” (p. 160).
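
To make the phrase “powered to detect” concrete, here is a rough sketch using the standard normal approximation for comparing two proportions. The baseline readmission rate and group sizes below are hypothetical placeholders for illustration, not the trial’s actual figures.

```python
from math import sqrt
from scipy.stats import norm

def min_detectable_difference(p0, n_per_group, alpha=0.05, power=0.80):
    """Approximate minimum detectable absolute difference in readmission
    rates for a two-sided, two-proportion test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    se = sqrt(2 * p0 * (1 - p0) / n_per_group)
    return (z_alpha + z_beta) * se

# Hypothetical inputs for illustration only: 60% baseline readmission rate,
# 400 patients per arm.
print(round(min_detectable_difference(0.60, 400), 3))
```

With these placeholder inputs, only a reduction of roughly ten percentage points would be reliably detectable; smaller reductions that might still be clinically meaningful would likely be missed, which is the limitation the authors acknowledge.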

Randomized controlled trials in social settings demand substantial investment and are difficult to do; hence, they are relatively rare. Could observational studies complement the hotspotting trial?

In Part 3 of this post, I will sketch causal diagrams, use Pearl’s ‘do-calculus’ to highlight the regression-to-the-mean issue, and indicate data that could help us understand the benefits of hotspotting.
