Level 3: Optimizing Predictive Models
We help you validate predictive models and optimize their impact within your organization.
While Level 2, Choosing/Building Predictive Models Services, helps you choose or build from hundreds of pre-defined predictive models, developing the model is only the first part of a successful predictive analytics project. Many predictive analytics efforts fall short by neglecting the second phase: the work of optimizing and fine-tuning the model so it delivers results for your organization.
Level 3, Optimizing Predictive Models, helps you validate predictive models and optimize their impact within your organization. Our approach ensures that you get the best predictive model with the greatest likelihood of demonstrating results.
We offer three Predictive Model Understanding solutions, designed for eventual self-service, to help you optimize your results. Our experts train your teams until they master these tools and processes, then hand off the three products below for ongoing use.
1. Functional Model Understanding
The people who understand how predictive models should work typically don’t understand how these models actually work. In the best case, this results in champions being unable to explain why stakeholders should trust the model. In the worst case, the model is unhelpful and should not be trusted. A common example of an unhelpful model is one that relies on information available when the model is trained but not when it is deployed (e.g., using hospital discharge diagnosis to predict length of stay).
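The discharge-diagnosis pitfall above is a form of temporal leakage, and it can be screened for mechanically. The sketch below, with illustrative feature names and availability times (not from any real system), filters candidate features down to those actually known at prediction time:

```python
# Hypothetical sketch: screening candidate features for temporal leakage.
# Feature names and availability times are illustrative assumptions.

# When each candidate feature becomes available in the patient record.
FEATURE_AVAILABLE_AT = {
    "age": "admission",
    "triage_blood_pressure": "admission",
    "admitting_diagnosis": "admission",
    "discharge_diagnosis": "discharge",  # only known after the stay ends
}

def usable_features(candidates, prediction_time="admission"):
    """Keep only features known by the moment the model must predict."""
    allowed = {"admission"} if prediction_time == "admission" else {"admission", "discharge"}
    return [f for f in candidates if FEATURE_AVAILABLE_AT[f] in allowed]

features = usable_features(list(FEATURE_AVAILABLE_AT))
# "discharge_diagnosis" is excluded: it cannot inform a length-of-stay
# prediction made at admission, even though it correlates strongly in
# retrospective training data.
```

The point is not the code but the discipline: every input must be time-stamped against the moment of prediction, not the moment of training-data extraction.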
Much “explainable AI” has focused on data scientists describing predictive models to one another. This is necessary but does not build the trust and understanding of clinical experts and operational leaders.
Our tools provide visual calculators that automatically ingest predictive model metadata and allow clinical and operational leaders to interact directly with the predictive model in a safe space (e.g., how would predictions change if the patient’s blood pressure drops after fluid administration or the prior account balance is lower?).
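The mechanics behind such a what-if calculator can be sketched simply: hold a case’s inputs fixed, change one, and re-score. The model and coefficients below are invented for illustration only:

```python
import math

# Minimal what-if sketch over a hand-specified logistic risk model.
# Coefficient values and feature names are illustrative assumptions,
# not a real clinical model.
COEFS = {"intercept": -4.0, "systolic_bp": -0.02, "lactate": 0.9}

def risk(features):
    """Predicted probability from the logistic model."""
    z = COEFS["intercept"] + sum(COEFS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

def what_if(features, **changes):
    """Re-score the same case with selected inputs changed."""
    return risk({**features, **changes})

baseline = {"systolic_bp": 110, "lactate": 2.5}
p0 = risk(baseline)
p1 = what_if(baseline, systolic_bp=85)  # blood pressure drops after fluids
# With this toy model's negative coefficient on systolic_bp,
# the predicted risk rises as blood pressure falls.
```

A leader who can twist these dials in a safe sandbox, rather than read a coefficient table, builds an intuition for when the model’s behavior matches clinical reality.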
Clinical and operational leaders become effective agents of organizational change because they understand how the predictive models function and can convey that understanding with confidence to those they lead.
2. Operational Model Understanding
Most healthcare predictive models require humans to take action for the models to have an impact. Careful consideration is required to set thresholds for action that not only align with objectives and aims but also consider available resources.
Aggregated performance statistics for predictive model accuracy (e.g., AUROC) can be useful for choosing between predictive models, but operational trade-off analysis is required to make and optimize deployment decisions for action.
Our tools provide visual trade-off analyses to help leaders choose thresholds for action in a matter of minutes. The trade-offs are framed in terms of your organization’s own value judgments and resources, which differ from one organization to the next and cannot be generalized across them.
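The shape of such a trade-off analysis can be illustrated with a toy example. The scores and outcomes below are synthetic (in practice they would come from a held-out evaluation set), and the capacity figure is an assumed staffing constraint:

```python
# Hedged sketch: choosing an action threshold under a capacity constraint.
# Scored cases are synthetic: (predicted risk, actual outcome).
scored = [
    (0.95, 1), (0.90, 1), (0.80, 0), (0.70, 1), (0.60, 0),
    (0.50, 1), (0.40, 0), (0.30, 0), (0.20, 0), (0.10, 0),
]

def tradeoff(threshold):
    """Workload, precision, and recall if we act at this threshold."""
    flagged = [(s, y) for s, y in scored if s >= threshold]
    hits = sum(y for _, y in flagged)
    precision = hits / len(flagged) if flagged else 0.0
    recall = hits / sum(y for _, y in scored)
    return len(flagged), precision, recall

capacity = 4  # assumed: the outreach team can work 4 cases per day
feasible = [t for t in (0.3, 0.5, 0.7, 0.9) if tradeoff(t)[0] <= capacity]
```

A single AUROC number cannot answer “which threshold?”; the table of workload, precision, and recall at each candidate threshold, read against real staffing, can.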
Business leaders make better decisions about resource allocation and can communicate clear, measurable targets for change, especially for process changes and leading indicators.
3. Contextual Model Understanding
Predictive models fit within the context of a decision-making process that includes elements such as patient mix, staff workflow, and available data. The context for a single septic patient is very different in emergency department triage than the day before hospital discharge. Leaders of change incorporating predictive models need to evaluate the models in this context.
Most predictive models fail to spread effectively across organizations and time not because the model is “bad,” but because changes in operational context are not considered. The problem grows as predictive models become more precise, since a model tuned tightly to one context transfers less readily to another.
Our tools provide a way to evaluate the inputs and outputs of a predictive model in the broader context surrounding the model itself. Specifically, we work with clinical and operational leaders to ensure that information relevant to the use and utility of the model are represented along with the model. By analogy, we provide a broader weather radar map or instant replay capability.
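One concrete piece of that broader view is checking whether a model’s inputs still look the way they did where the model was built. The sketch below uses invented lactate values for two care settings and an assumed tolerance; it is a minimal illustration of a context-shift check, not a production monitor:

```python
import statistics

# Hypothetical sketch: flagging context shift in one model input.
# Values are invented; a large shift in an input's distribution between
# the training context and the deployment context signals that the model
# may not transfer as-is.
train_lactate = [1.8, 2.0, 2.2, 2.1, 1.9, 2.3, 2.0, 2.2]   # ED triage
deploy_lactate = [1.1, 1.0, 1.2, 0.9, 1.3, 1.0, 1.1, 1.2]  # pre-discharge

def shifted(train, deploy, tolerance=2.0):
    """Flag if the deployment mean moves more than `tolerance`
    training standard deviations from the training mean."""
    mu, sd = statistics.mean(train), statistics.stdev(train)
    return abs(statistics.mean(deploy) - mu) > tolerance * sd

alert = shifted(train_lactate, deploy_lactate)
```

This is the “weather radar” idea in miniature: the model’s score is one pixel, and the surrounding input distributions tell leaders whether that pixel can be trusted in the current context.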
Business leaders achieve better results with predictive models because they make better decisions about how the model they are deploying will operate, and effect change, in its broader environmental context.