Level 4: Retrospective Comparison
We help you figure out how to learn from your data: what is working and what is not.
The easiest way to learn from data is to leverage a randomized, double-blind, placebo-controlled trial. These trials reduce the risk of misattributing cause (e.g., to some patients having been sicker), focus attention on the critical few things we want to know, and build the trust required for action. The statistical techniques involved are covered in introductory courses.
The problem is that such trials are typically unavailable or inapplicable to your specific circumstances. For example, we have no such trials for determining the minimum number of surgical cases needed to maintain competency and good outcomes. And your results may differ from those of a trial that (a) applied careful selection criteria, (b) was conducted at an academic medical center, (c) was located in a different region from your organization, or (d) included follow-up that differs from standard practice.
Our Healthcare.AI Expert Services bring together years of experience, rigorous methods, and advanced augmented intelligence (AI) to ensure that you are drawing the best conclusions from your data given your goals, preferences, circumstances, and assets. Doing so saves time and reduces organizational friction to change.
1. Statistical Modeling
Healthcare organizations are large and complex. System change is hard and expends financial and human capital. Rigorous, reproducible, and transparent analyses that are easy to understand ensure that this expense drives value.
Strategic decisions for organizations are more complex than those for individual patients, yet we often make strategic organizational decisions without the rigorous methods routinely applied to individual patient decisions.
We embed a set of statistical and machine learning techniques into our retrospective analyses, select the simplest approach that yields reasonable results, and present action-oriented conclusions as simply as possible.
Organizations maximize the utility of their retrospective data assets to drive focus, resource allocation, and accountable change.
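To make "select the simplest approach that yields reasonable results" concrete, here is a minimal, hypothetical sketch. It compares an intercept-only model against a simple trend line and keeps the simpler model unless the trend meaningfully reduces error. The data, the 20% improvement threshold, and the two candidate models are illustrative assumptions, not a description of our actual methodology.

```python
# Hypothetical sketch: prefer the simplest model that fits "well enough".
# Data, threshold, and model choices here are illustrative, not prescriptive.

def fit_mean(y):
    """Intercept-only model: predict the overall mean for every point."""
    m = sum(y) / len(y)
    return [m] * len(y)

def fit_line(x, y):
    """Ordinary least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return [intercept + slope * xi for xi in x]

def sse(y, yhat):
    """Sum of squared errors between observed and predicted values."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat))

def simplest_adequate_model(x, y, min_improvement=0.20):
    """Keep the mean model unless the trend model cuts error by >20%."""
    err_mean = sse(y, fit_mean(y))
    err_line = sse(y, fit_line(x, y))
    if err_mean == 0 or (err_mean - err_line) / err_mean < min_improvement:
        return "mean"
    return "trend"

# Monthly readmission rates with a clear downward trend (made-up numbers):
months = [1, 2, 3, 4, 5, 6]
rates = [0.18, 0.17, 0.15, 0.14, 0.12, 0.11]
print(simplest_adequate_model(months, rates))  # the trend model wins here
```

When the data are flat, the same function returns the mean-only model, so the simpler story is told whenever it is adequate.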
2. Group Differences
Healthcare is a comparative and competitive activity, whether we are trying to compare providers to each other or analyze data over time. Drawing correct conclusions is a prerequisite for achieving appropriate focus and change. Correct conclusions and subsequent change depend on comparability (perceiving a level playing field).
Analytics of unadjusted group differences can yield quick reports and motivate those who like the results. Those who do not like the results will immediately point to the possible lack of comparability and resist change.
We routinely and proactively address the issue of fair comparisons in retrospective analyses. We rapidly adjust comparisons with expert input to ensure that participants perceive a level playing field.
Organizations more effectively drive focus and conclusions about improvement by leveraging existing internal and external data.
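One common way to create a level playing field is indirect standardization: compare each group's observed outcomes with the outcomes expected given its case mix, rather than comparing raw rates head to head. The sketch below is a simplified, hypothetical illustration of that idea; the risk strata, baseline rates, and provider data are made-up numbers, not a depiction of our adjustment methods.

```python
# Hypothetical sketch of a fair comparison via indirect standardization.
# All strata, rates, and cases below are illustrative, made-up numbers.

# Baseline (all-group) complication rate per risk stratum:
baseline_rate = {"low": 0.02, "medium": 0.08, "high": 0.20}

def observed_to_expected(cases):
    """cases: list of (risk_stratum, had_complication) tuples.

    Returns the O/E ratio: observed complications divided by the count
    expected from the baseline rates for this group's case mix.
    """
    observed = sum(1 for _, bad in cases if bad)
    expected = sum(baseline_rate[stratum] for stratum, _ in cases)
    return observed / expected

# Provider A sees mostly high-risk patients; Provider B mostly low-risk.
provider_a = [("high", True), ("high", False), ("high", False),
              ("medium", False), ("medium", True)]
provider_b = [("low", False), ("low", False), ("low", True),
              ("low", False), ("medium", False)]

# Raw rates: A = 2/5 = 0.40, B = 1/5 = 0.20 -- A looks worse unadjusted.
print(round(observed_to_expected(provider_a), 2))
print(round(observed_to_expected(provider_b), 2))
```

In this toy example the unadjusted rates favor Provider B, but relative to case mix Provider A's O/E ratio is the lower of the two: the conclusion flips once the comparison is made fair.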
3. Causation/Confounding
Drawing the wrong conclusions about causation yields waste or mistrust in the best case and does harm in the worst. Dealing with confounding increases the likelihood that the conclusions are correct.
Retrospective analyses frequently omit techniques that detect and address confounding. That omission yields incorrect conclusions and misdirected subsequent action.
We routinely and efficiently incorporate confounder detection and a set of methods to deal with confounding in our retrospective analytics. The techniques we use incorporate both automated search and expert-driven insight.
Organizations more effectively leverage existing data to draw correct conclusions in exploring opportunities or effecting change.
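A classic illustration of why confounding matters is Simpson's paradox: a crude comparison can reverse once a confounder is stratified out. The sketch below uses made-up counts in which treated patients look better overall only because treatment was given preferentially to mild cases; within each severity stratum, the treated group actually fares worse. This is an illustrative mechanism, not a representation of our detection methods.

```python
# Hypothetical sketch: a crude comparison reverses once a confounder
# (here, patient severity) is stratified out -- Simpson's paradox.
# Counts are made up purely to illustrate the mechanism.

# (treated_events, treated_n, control_events, control_n) per stratum
strata = {
    "mild":   (8, 100, 3, 50),    # treatment given mostly to mild cases
    "severe": (30, 50, 55, 100),
}

def crude_rates(strata):
    """Pool all strata and compare raw event rates."""
    te = sum(s[0] for s in strata.values())
    tn = sum(s[1] for s in strata.values())
    ce = sum(s[2] for s in strata.values())
    cn = sum(s[3] for s in strata.values())
    return te / tn, ce / cn

def stratified_rates(strata):
    """Standardize both arms to the combined stratum sizes."""
    total = sum(s[1] + s[3] for s in strata.values())
    t = c = 0.0
    for te, tn, ce, cn in strata.values():
        weight = (tn + cn) / total
        t += weight * (te / tn)
        c += weight * (ce / cn)
    return t, c

crude_t, crude_c = crude_rates(strata)
adj_t, adj_c = stratified_rates(strata)
# Crude rates favor the treated group; severity-adjusted rates reverse that.
```

Within each stratum the treated event rate is higher, yet the pooled rate is lower: only the stratified (adjusted) comparison reveals the true direction of the effect.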
4. Sensitivity Analysis
Healthcare organizations frequently struggle to select the best way to measure differences and improvement. Sensitivity analysis not only resolves this struggle but also yields better improvement, faster.
Typical analytics efforts force organizations to choose an illusory “best” way to measure results.
We use sensitivity analyses that leverage multiple measures and methods to achieve convergence of evidence where it exists and foster specific additional learning where required.
Organizations initiate improvement, draw conclusions, and identify next steps more quickly and accurately.
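As a simplified, hypothetical illustration of a sensitivity analysis, the sketch below re-estimates the same before/after difference under several reasonable summary measures and checks whether the verdicts converge. The length-of-stay numbers and the choice of measures are illustrative assumptions; the point is the pattern, not the specifics.

```python
# Hypothetical sketch of a sensitivity analysis: ask several reasonable
# measures the same question and see whether their answers converge.
# Data and measures are illustrative, made-up choices.
from statistics import mean, median

def trimmed_mean(xs, trim=0.1):
    """Mean after dropping the top and bottom 10% of values."""
    xs = sorted(xs)
    k = int(len(xs) * trim)
    return mean(xs[k:len(xs) - k])

def improvement_verdicts(before, after):
    """Each measure's verdict on whether 'after' improved (i.e., is lower)."""
    measures = {"mean": mean, "median": median, "trimmed_mean": trimmed_mean}
    return {name: f(after) < f(before) for name, f in measures.items()}

# Length-of-stay (days) before and after a hypothetical process change;
# a single outlier in 'after' dominates a mean-only analysis.
before = [5, 6, 7, 5, 6, 8, 7, 6, 5, 7]
after = [4, 5, 5, 4, 6, 5, 4, 5, 4, 30]

print(improvement_verdicts(before, after))
```

Here the median and trimmed mean agree that stays improved while the mean disagrees, so instead of arguing over the "best" measure, the divergence itself points to a specific next step: investigate the outlier case.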