ML #12 - Deep Dive into Heart Failure Readmissions with Joe Smith

Hosted by Levi Thatcher and Mike Mastanduno

May 11, 2017 - 20min


Thank you for all of the positive feedback on our real world example on length of stay (episode #6)! This week we'll feature Joe Smith, Health Catalyst Data Architect and model builder. Joe will walk us through the use case, creation, and deployment of his model designed to predict heart failure readmissions. Join Levi, Mike, and Joe for another exciting week of healthcare.ai!

Full Broadcast Transcript



Levi: Hi, everybody. Thanks for joining us. Welcome to the hands-on machine learning broadcast. We're excited to have some guest speakers today. But first, I'd like to welcome Mike Mastanduno.

Mike: Hey, guys. Thanks for watching. Levi, great to be here. We’ve got a great guest here, Joe Smith.

Levi: Joe, say hello.

Joe: Hello. Hello everybody.

Levi: And then we'll have Kristine Lundeen joining us today. Kristine, do you want to say hello?

Mike: Kristine, can you say hello?

Kristine: Hello. Sorry, I—

Mike: Where are you based, Kristine?

Kristine: I'm in Washington State, in the Tacoma area.

Mike: Great. Well, we'll be talking about a Washington State-based hospital today and heart failure readmissions.

But first, we've got a short little intro, and then we'll go into the mailbag and the news. So we've got a quick bit of both.

If you're watching, log into YouTube so you can chat. We want to keep this interactive, so keep your questions flowing. Levi will help out and make sure that we answer them.

And then, go to the website healthcare.ai, and in the upper right you'll see you can subscribe to the broadcast, subscribe to the blog, get updates, join the community, and join the Slack Channel. We can keep the discussion going all week.

So, thanks for the great participation so far.

Levi, what’s in the mailbag?

Levi: Yeah. Thanks for joining guys.

And as we go through this like Mike said, please keep the interaction coming. Hop in the chat and let us know your thoughts. We’re excited to talk about something super practical today and want your feedback.

In the mailbag today, we have a couple of great questions that came from the community this week. First off, someone asks, "We have knowledge of the EMR data. We get that. We understand that these are databases and that we can access the tables, but we have no experience with AI or R, so how do we get started?" Fantastic question. A lot of folks at Health Catalyst and in the community ask that all the time.

Mike: Yeah. It feels like that’s kind of the point healthcare is at, you know?

Levi: Yeah. We get SQL. We can do some of that. How do we start with machine learning? How do we start on R?

And, as a simple plug, we would encourage you to go to healthcare.ai and click on the packages link. So at a high level of this, if it’s the first time you’re joining us, what these packages do is they help you get started with machine learning on your data. And Joe—

Joe: Yeah.

Levi: –will be talking to us a little bit about how he did the same thing on his data and built an awesome heart failure readmissions model.

So the idea is, check out that packages link. It will point you to R or Python. R is a little bit easier for newbies, so click on that one and you'll see some fantastic documentation that will tell you exactly how to get started working with your data, which is what this is all about – being able to do stuff with the data you work with day to day.

Joe: Yeah.

Levi: So fantastic question.

Mike: So that's really good. And I've been hammering "use case, use case, use case" in every interaction we've had: you have to make sure the question you're going to be answering is a relevant one and that it's going to solve a problem.

Levi: Yeah.

Mike: So before you dive into the ML, make sure that there’s a reason you’d want to do it.

Joe: One thing I found when I started this journey, when I was introduced to healthcare.ai, was that I had zero experience in R. I'd never opened R. I think I went to a class maybe, and did a little in school, but nothing really practical. And so I was a little nervous going in – "Hey, how complicated will this be?" But I logged onto the site and it literally is step by step. They have example code you can copy-paste. I'm like, "Okay." So I copy-pasted. And it's just amazing how much plug and play it really is. That's something that was really nice for me.
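
The "copy, paste, and point it at your table" workflow Joe describes can be sketched in a few lines. healthcare.ai ships both R and Python packages; this sketch uses plain pandas and scikit-learn with made-up column names rather than the package's actual API, so treat everything here as illustrative.

```python
# Illustrative sketch of the copy-paste workflow: load tabular EMR-style
# data, train a classifier, and score it. Column names are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# In practice this would come from your EMR via SQL, e.g. pd.read_sql(...)
df = pd.DataFrame({
    "age": [72, 65, 80, 58, 77, 69, 83, 61],
    "prior_admissions": [3, 0, 5, 1, 2, 0, 4, 1],
    "readmitted_30d": [1, 0, 1, 0, 1, 0, 1, 0],
})

X = df.drop(columns="readmitted_30d")
y = df["readmitted_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]  # probability of readmission
print(roc_auc_score(y_test, probs))
```

The package Joe used wraps these steps behind a step-by-step template; the point of the sketch is just that the whole loop is a handful of lines once the data is in one table.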

Mike: It’s amazing. And I swear we didn’t pay him for that.

Levi: It was ad hoc.

That’s what we like to do, copy-paste. Like, practical examples. So here’s something you can put on your machine today and do something cool with. So that’s the idea.

Joe: Mm-hmm.

Levi: Thanks Joe.

Thanks, Mike, for that.

So great questions. Keep them coming – from the Slack Channel, the chat, and via e-mail. We love them.

Second question is from Jonas, “Is it possible to generate a prediction with a confidence attached to it?” Great question.

For example, I want to know if I should flag a positive case for manual review or not. So let's say we're generating predictions with healthcare.ai – which ones do we need to manually review? And can you even tell that? Mike, you had some thoughts on this?

Mike: Yeah, I did. So for your predictions, we use this "generate AUC" function in healthcare.ai, and you can get a nice visual representation of what your predictions look like. Using that function, you can choose a cutoff that says, "Oh, 95% of predictions above this cutoff are correct." So maybe you could treat that group as needing no manual review. And then going down, maybe choose a lower cutoff and say, "Oh man, only 60% of predictions in this group actually came true." So maybe that's the group that does need the manual review. And then lower than that, it's not going to be worth our time.

But you can’t really get a confidence interval attached to a probability.

Levi: Yeah, that’s tricky.

Mike: It’s tricky but–

Levi: But not impossible.

Mike: Yeah, but the AUC thing, that’s a way you can be more confident in whether or not a certain data point needs a manual review.
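
The cutoff idea Mike describes – pick probability thresholds by the observed precision of predictions above them – can be sketched in plain Python. The toy numbers and the 0.85/0.50 cutoffs below are illustrative, not values from the actual model.

```python
# Sketch of choosing review cutoffs by precision: the fraction of flagged
# cases at or above a threshold that truly readmitted.

def precision_above(preds, cutoff):
    """Precision among predictions at or above `cutoff`.

    `preds` is a list of (predicted_probability, actual_outcome) pairs,
    where actual_outcome is 1 for a readmission and 0 otherwise.
    Returns None if nothing clears the cutoff.
    """
    flagged = [y for p, y in preds if p >= cutoff]
    return sum(flagged) / len(flagged) if flagged else None

# Toy (probability, actual outcome) pairs -- illustrative only.
preds = [(0.95, 1), (0.90, 1), (0.85, 1), (0.80, 0),
         (0.60, 1), (0.55, 0), (0.40, 0), (0.10, 0)]

high = precision_above(preds, 0.85)  # confident group: no review needed
mid = precision_above(preds, 0.50)   # weaker group: route to manual review
print(high, mid)  # prints 1.0 and about 0.667
```

With real data you would sweep many cutoffs and pick the ones matching the precision targets (95%, 60%) Mike mentions.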

Levi: Yeah. That’s a great point.

So, for example, you could make your high-risk group a little less exclusive, I guess. You're throwing more people in there to make sure you're catching the folks who are highest risk, and then maybe those a little bit lower risk than that.

Mike: Yeah.

Levi: That’s an awesome solution. Great solution.

Okay, so two mailbag questions. That was from Jonas.

Keep them coming. We love the interaction. And we’ll keep doing this each week.

And then, if you're signed into the chat and have any comments, we'll bring them up in the broadcast so you can chat with Joe and Kristine.

Joe: Yes.

Levi: So we had some news articles we wanted to talk about?

Mike: Yeah. There's a couple of really quick ones. I don't want to spend too much time on them, but I thought it would be interesting to mention that Facebook AI Research just released a new machine translation neural network architecture that's based on a convolutional neural net. Traditionally, those are based on recurrent neural nets, which are just a different architecture of neural network, and–

Well, it’s very technical and nerdy. But—

Joe: Yeah. The one that just came out, I don’t think I understood how—

Mike: It’s about ten times faster.

Joe: I don’t think I understood half of the words you just said so.

Mike: Ten times faster, just remember that. And I’m sure it’s not quite as good just yet but it’s a nice picture of things to come. We’ll have a link go out in the chat, in the notes. You can read up on that.

And then the other article we had was on fMRI, which is functional magnetic resonance imaging of the brain. And—

Kristine: Okay.

Mike: People are using neural nets to reconstruct what patients were actually seeing based on their brain activity.

Levi: Whoa!

Joe: That’s cool.

Mike: People in an MRI were being shown letters, and researchers looking at the activation in their brains using the MRI were able to tell which letters they were viewing.

Joe: Are you serious?

Mike: Yup, based on the brain activity.

Joe: It’s a little—

Mike: So, I don’t want to go into too much detail but that’s really cool, too.

Joe: That’s super cool.

Levi: That’s amazing. So based on like where it was and the signals—

Mike: Yeah. Based on the electrical signals traveling around the visual cortex—

Joe: Crazy.

Mike: –they were able to see—

Joe: Cool. Super cool.

Mike: To see what letter—is it a B or is it a D?

Joe: Oh my gosh.

Mike: Which is pretty cool.

Levi: So they’re basically reading their minds?

Mike: Yeah. Yeah. exactly.

Levi: Wow.

Joe: Mind reading– 

Mike: Mind reading technology.

Joe: Yeah.

Mike: This is the beginning guys.

Joe: That doesn’t have any weird [inaudible 00: 08: 00]

Levi: All right. Well I’m not worried.

Joe: Not one bit. Just [inaudible 00: 08: 08] in the news, this week.

Levi: Yeah. That’s fantastic.

Joe: That’s cool.

Levi: Awesome.

Mike: So let’s move on to a main event.

Levi: Yeah. Let’s get into it. I’m excited. Yeah.

Mike: So we've got Joe Smith with us and Kristine Lundeen on the phone. We're going to be talking about his heart failure readmissions model.

Joe: Yes.

Mike: And Kristine, maybe you could start us off by chatting about the scope of heart failure readmissions – or of heart failure – and what's the clinical problem we're trying to solve?

Kristine: Thanks.

So the clinical problem we're trying to solve is the issue of heart failure readmission, as Mike said. And we want to know, as soon as the patient comes into the hospital, what is their risk for readmission 30 days [inaudible 00: 08: 58] after they leave? So we really wanted to find as many variables as we could to help predict that event as soon in the [inaudible 00: 09: 13] so that providers would, first of all, know who their heart failure patients are while they're in the hospital. That's often the biggest challenge: knowing who they are. If you do, then you can obviously intervene with tested interventions.

So the second problem is knowing who's going to come back. And so Joe and I worked closely together to identify those variables and to answer the scope of that question, really. Who's heart failure? Who's coming back?

Mike: Yup.

Okay. So it’s kind of a two-part question. You said who is heart failure?

Joe: Yeah.

Mike: So that in itself is a problem.

Joe: Yeah.

Mike: You have to define the cohort really well.

Joe: Right. Yeah.

Who's in the hospital. And the best case scenario with this: Day 1, a patient enters the hospital. Day 2, we have a prediction saying, "Hey, you have X percentage" – or "Hey, you're highly likely to be readmitted within 30 days." And so—

Mike: So, in the case of a kind of a high-use or a high probability of being readmitted patient, how does the clinical staff interact with that or, in an ideal world, how would the clinical staff adjust their methods?

Joe: Yeah.

Mike: Maybe that’s a better question for Kristine, I don’t know.

Joe: Sure.

Kristine, do you want to take that?

Kristine: Yeah. I think our first step, since this is new to our hospital, is just to expose that prediction score to the providers, to make them aware that, "Hey, we're able to identify the patient and even provide you with a prediction so that you can focus your clinical interventions on those patients." The clinical interventions are obviously going to be patient-specific. But we know, in terms of follow-up post-discharge, there are a number of interventions recommended in current literature that we can apply to the patient. So, again, the first step is just getting it in front of the providers.

Joe: Absolutely.

Mike: Absolutely, great. Great.

Joe: Yeah. I had just a quick preliminary meeting, like a quick 15-minute meeting, with the head of Heart Failure at the Washington-based hospital. And he said exactly what Kristine said: "We want to get it in front of these providers. We want to get it in front of them as soon as possible." Possibly even like a closed-loop thing – limiting that delay as much as possible so we can get that predictive score out to providers and they can actually take action on it. I mean, we're literally going to try to hand them, "Hey, this is what's going to happen in the future." And every provider is worried about their patient – "Hey, are they going to be readmitted or not?" – and they would want to prevent that. It's all about improving healthcare.

Kristine: Exactly.

Mike: So how many patients does this kind of model see, or affect or, have the potential to impact?

Joe: Gosh. So when we were developing it, we were looking at the last few years' worth of data – I think three years' worth. And there were about 69,000 patients—

Mike: Okay.

Joe: That fell within our criteria. There are a lot of criteria that CMS uses to deem, yes, this patient is "eligible" to be counted as a readmission. Yeah, they fall within those criteria. So 69,000 patients over the last three years is what we really looked at to see, "Hey, let's train our model on these patients, and going forward, at a given time we'll have X number of active patients and we can get a score on those guys."

Mike: So that’s kind of a nice segue into the data, actually. So how many tables did this come from, roughly?

Joe: Yeah.

Mike: You had to do all the data wrangling to get it ready for machine learning.

Joe: Yeah.

Mike: Do you think it was like five tables, ten tables, 100 tables?

Joe: [inaudible 00: 13: 46] table. So think of stepping into a hospital and just imagining the data available.

Mike: Yeah.

Joe: I mean, they have everything from like signing in, or admit to—

Mike: Yeah.

Joe: I mean, everything, right? I don't know if we have, you know, getting a drink at a drinking fountain – maybe not that. But you can imagine the immense amount of data.

And then to think about what things could possibly tell us, "Yes, this patient is going to be readmitted within 30 days" – you have to think across a really wide margin of things.

Mike: Yeah.

Joe: I mean—

Mike: That’s a lot of things.

Joe: So gosh, just the tables that we're dealing with, I would say it's definitely more than 100 tables—

Mike: Wow, okay.

Joe: –that we're looking at. The ones that we brought in and actually had a structured model around – that was around 88 tables.

Mike: 88 tables? That’s a really specific answer so I’m really glad. I think you prepared for that.

Joe: 88.0 tables. Is that more accurate?

Mike: Awesome.

And so, is the data all— is it all there in the table? Or did you have to use columns from one table and columns from another table—

Joe: No.

Mike: –to kind of like synthesize new information?

Joe: Yeah, definitely. Kristine and I spent a lot of time working to fit this data together. I mean, you're thinking – I don't know, Kristine, what do you think? More than 100 tables? I don't know how many tables exactly. But there are so many tables that we had to take this from here, and here, and here, and put it into one summary table, basically, to then feed into our model. And that's the thing: you want to take something really complex and put it into something really, really simple so that the model can then take it.

Mike: So maybe you could talk about the days in acute care variable that you made?

Joe: Oh yeah. So we had quite a few. We ended up looking at – sorry, I'm staring at my screen; you probably can't see this – but we looked at about 88 specific variables. We probably had about 300 or 400 variables that we could have looked at. But then you have to think, "Patient enters Day 1 – what info do you have on this patient," right?

Mike: Mm-hmm.

Joe: I mean, you can have retrospective stuff all over the place but that’s not going to help Day 1—

Levi: Yeah.

Joe: Because what do we have?

Mike: So like discharge information, it’s not going to help you.

Joe: Right, yeah. So—

Mike: You don’t have it.

Joe: So you don't have discharge info. So you're like, "Okay, well, what do we have?" We have BUN. We have different levels of things – tests, et cetera, ejection fraction values. But then I think it was Kristine who brought up, "We also have the history of a patient."

Kristine: Right.

Joe: Hey, what about the history of a patient? What about history of readmissions? How about history of admissions? How about history of just days coming back? And so then—

Kristine: Right. What—

Joe: Yeah, go ahead, Kristine.

Kristine: What we did, as Joe said, was use the history of the patient, including their lab data and their encounter data. Did they keep appointments in the past?

We were specifically looking at the no-show and cancellation rate of each patient, because we thought their past behavior in keeping appointments would matter. Thinking ahead for this patient, studies show that a follow-up appointment within seven days with a cardiology provider really helps to reduce readmissions. So we wanted to know, "What's the propensity of this patient to keep their scheduled appointments?" We felt that might be a predictor. We also looked at their—

Joe: Yeah. Can I talk to that point there?

Kristine: Sure. Go ahead, Joe.

Joe: I’m sorry. Sorry to interrupt you.

Yeah, so the hospital system we work with, they are big on these follow-up appointments. They say, "Hey, we feel like these patients should be going to some clinics, et cetera, for X amount of days so they can have some follow-up care." And we have found that it matters whether patients complete those appointments and keep them timely – not only appointments but timely appointments. Some patients have to cancel or move them outside of the magic window. I don't know if there's a magic window, but that timely window. And there are a lot of patients that, for whatever reason, needed to move it outside of the window.

And so Kristine had a great idea – since it's such a great indicator of someone being readmitted or not: "Hey, let's look at their history of canceling, of just not showing up, different things like that." And, amazingly, we saw it was a massive predictor of whether someone was going to be readmitted or not.
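
The appointment-history feature Kristine and Joe describe could be computed along these lines. The status values and field names are illustrative assumptions, not the hospital's actual schema.

```python
# Sketch of a per-patient no-show/cancellation rate feature: the fraction
# of past scheduled appointments that were missed or cancelled.

def no_show_rate(appointments):
    """appointments: list of dicts with 'patient_id' and 'status'
    ('kept', 'no_show', or 'cancelled'). Returns {patient_id: rate}."""
    totals, missed = {}, {}
    for appt in appointments:
        pid = appt["patient_id"]
        totals[pid] = totals.get(pid, 0) + 1
        if appt["status"] in ("no_show", "cancelled"):
            missed[pid] = missed.get(pid, 0) + 1
    return {pid: missed.get(pid, 0) / n for pid, n in totals.items()}

history = [
    {"patient_id": "A", "status": "kept"},
    {"patient_id": "A", "status": "no_show"},
    {"patient_id": "A", "status": "cancelled"},
    {"patient_id": "A", "status": "kept"},
    {"patient_id": "B", "status": "kept"},
    {"patient_id": "B", "status": "kept"},
]
print(no_show_rate(history))  # A missed 2 of 4, B missed 0 of 2
```

A timeliness variant, as Joe hints, could also count appointments rescheduled outside the follow-up window as misses.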

Mike: That’s awesome.

Joe: "Hey, you have a history of not showing up to appointments." The goal is to give that to providers and say, "Hey, this patient has a history of not actually showing up to appointments," so they can start developing a process around that: "Well, what can we do to help get them to those appointments?"

Levi: Yeah. So many fantastic features. So how many were tried in total?

Joe: How many—

Levi: Roughly, variables did you bring into the model?

Joe: Oh, 88 variables we used. So we tried 88 different things – anywhere from "Hey, is the patient homeless?" to the month. I don't think we tried the full moon or anything like that. Maybe we'll try that [inaudible 00: 20: 00]

Levi: The next generation [inaudible 00: 20: 02]. Yeah.

Joe: Maybe we should consider that next time. But we actually settled on about 24 variables that really made it strong.

Levi: And the top three, just to reiterate?

Joe: Oh, yeah.

Levi: The number one was no-show count?

Joe: No, no, no. Number one actually is excess days in acute care. And excess days in acute care is an interesting one – it's focused on readmissions, just a slightly different look at it.

So basically, let's say your discharge day is January 1. For the next 30 days, it looks to see if you have any readmissions within those days and then totals up those days in acute care. Looking at that number was our number one predictor of whether or not someone was going to be readmitted. It was interesting because we weren't including it at first. Once we included it, our score jumped at least 10 points.
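
A simplified sketch of the windowed day count Joe describes: starting from a discharge date, total the days spent back in acute care during the following 30 days. The real CMS excess-days measure has more rules (observation and ED visits, risk adjustment), so this captures only the core calculation.

```python
# Sketch: count days of later acute-care stays that fall inside the
# 30-day window after a discharge date.
from datetime import date

def days_in_acute_care(discharge, later_stays, window_days=30):
    """later_stays: list of (admit_date, discharge_date) tuples.
    Counts the days of each later stay that fall within `window_days`
    after `discharge`."""
    window_end = discharge.toordinal() + window_days
    total = 0
    for admit, disch in later_stays:
        start = max(admit.toordinal(), discharge.toordinal())
        end = min(disch.toordinal(), window_end)
        total += max(0, end - start)
    return total

stays = [(date(2017, 1, 10), date(2017, 1, 15)),  # 5 days, inside window
         (date(2017, 2, 20), date(2017, 2, 25))]  # entirely outside 30 days
print(days_in_acute_care(date(2017, 1, 1), stays))  # prints 5
```

The resulting number then goes into the model as one column alongside the other 87 variables.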

Levi: Amazing. AUC.

Joe: Yeah.

Levi: Yeah, amazing.

Joe: Oh yeah. Yup.

Levi: So, it seems like—

Kristine: And I think that's because excess days in acute care, or EDAC, is looking–

Levi: Those DURs are so inscrutable at times.

Joe: I think– say that again, Kristine.

Kristine: I think the reason that was maybe a little bit more predictive and increased the performance of the model is that readmissions only look at acute care returns to inpatient status, while EDAC also looks at healthcare resource utilization in the observation area and the emergency department. So it gives a broader perspective on that patient's history of healthcare utilization.

Mike: That’s a great point.

Joe: That’s a great point. Yeah.

Mike: So kind of moving into model building then, you used the random forest model with healthcare.ai?

Joe: Yes.

Mike: What was the experience like?

Joe: Yeah. Like I said in the beginning, my experience with random forests was very minimal.

Mike: You’re familiar with forests? Yeah?

Joe: Yeah. I’m familiar with forest and the like.

Mike: Yeah. [inaudible 00: 22: 19]

Levi: Predictable forest–

Joe: Predictable forest?

Mike: [inaudible 00: 22: 20] random forest.

Joe: Random.

Okay. But anyway, the cool thing was there are a couple of different models you can choose from. The thing I was nervous about – I'm like, what if I choose the wrong one? What if I choose the wrong model?

Levi: Good point.

Joe: And the cool thing is, when you run the copy-paste from the website, it's literally like, Step 1, do this. Step 2, copy this and put it in here. You literally just change out your SQL so it's pointing at your table.

Anyway, once you do that, it'll actually compare the models against each other. And then you go, "Oh, this one looks better, so I'm going to choose that one." So we ended up going with random forest because it was the stronger model. And yeah, it's doing really, really well. We're coming out of testing pretty soon, but—
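
The "compare models and keep the stronger one" step Joe describes boils down to scoring each candidate on held-out data, typically by AUC, and keeping the winner. This toy sketch hand-rolls the AUC and uses made-up score lists in place of real trained classifiers.

```python
# Sketch of model comparison by AUC: score each candidate's predictions
# on the same labels and keep the highest-AUC model.

def auc(scores, labels):
    """Probability a random positive outranks a random negative
    (ties count half) -- the ROC AUC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
candidates = {
    "model_a": [0.9, 0.8, 0.4, 0.6, 0.3, 0.1],  # one ranking mistake
    "model_b": [0.9, 0.8, 0.7, 0.4, 0.3, 0.1],  # perfect ranking
}
best = max(candidates, key=lambda name: auc(candidates[name], labels))
print(best, auc(candidates[best], labels))  # prints model_b 1.0
```

In practice the candidates would be, say, a linear model and a random forest trained on the same features, exactly the comparison that led Joe to random forest.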

Mike: Yeah, so it’s been in testing on production servers for what, about a month now?

Joe: Almost.

Mike: Yeah, almost a month. So for a 30-day readmission model, we have to wait a little more than 30 days before we get enough data to see how it really performs in the wild, as we say.

Joe: In the wild – forest.

Mike: How’s it looking so far? I don’t want to hold you to any– we won’t quote you on this but is it looking good or— ? You can just give a general picture.

Joe: Yeah.

Are you recording me at all?

Levi: Yeah. Yeah.

Joe: So I’ll try to be—

So, so far – what I tried to do was categorize. So what happens is you run the model and it gives you a result. The results kind of look like this: it has the patient encounter – anyway, and then it gives you a predicted probability number, like 0.84 or 0.09—

Mike: So like between 0 and 1?

Joe: Yeah, between 0 and 1.

Mike: Okay. 

Joe: Yeah. You’re right.

So like an 84% chance, or a 9% chance, or whatever. Anyway, I tried to categorize those. Just on my side, I tried to categorize: "Hey, if they are above 0.6, let's say" – I'm still working on the cutoff numbers – "we'll call it a severe case," like "they have a severe chance of being readmitted." Severe. Then maybe high chance, medium chance, low, very low.
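
Joe's severity banding is just a mapping from predicted probability to a label. The cutoffs below (0.6 for "severe", and so on) are placeholders, since he says he is still tuning them.

```python
# Sketch of mapping a predicted readmission probability to a severity band.
# Cutoff values are illustrative placeholders.

def risk_category(prob):
    if prob >= 0.6:
        return "severe"
    elif prob >= 0.4:
        return "high"
    elif prob >= 0.2:
        return "medium"
    elif prob >= 0.1:
        return "low"
    return "very low"

print([risk_category(p) for p in (0.84, 0.45, 0.25, 0.12, 0.03)])
# prints ['severe', 'high', 'medium', 'low', 'very low']
```

Staging like this makes the scores easier to act on clinically, which is the point of the categories Joe describes next.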

Mike: So you can kind of stage the patients differently based on their scores?

Joe: Yeah.

Mike: Or at least give recommendations on how to stage them based on those scores?

Joe: Yeah, exactly.

And it's kind of interesting, because the results that we have coming in are– I mean, those patients that have been categorized in the severe category– I'm repeating myself.

Anyway, those patients are currently being readmitted at an 84% or 85% rate. I mean, we're getting a high correlation: the ones we're saying "Yes, you're going to be readmitted" about are actually being readmitted.

Levi: That’s amazing.

Joe: And the window is still open. Still, a little bit so–

Mike: Yeah, so we could even do better.

Joe: We could do better. Yes. [inaudible 00: 25: 30]?

But not only that, the opposite end – the very low group – is at about a 0.09 readmission rate.

Mike: Okay.

Levi: Less than 10?

Joe: Less than 10, yeah. Less than 10? Yeah.

Levi: [inaudible 00: 25: 50]?

Joe: Yes. So it’s just amazingly low. So it looks like both ends are working.

Mike: Yeah.

Joe: And then the middle categories seem to have what feel like the right readmission rates, and—

Levi: Great.

Joe: So, anyway.

Levi: That’s fantastic. We’re so excited for this, Joe. We appreciate you helping out. And Kristine as well.

Mike: You saw it here first. I think this is the best heart failure readmissions model in existence.

Levi: Yeah. 

Mike: I needed to say it. I had to say it.

Levi: Preliminary results but we’re very excited.

Mike: Yeah, very promising.

Joe: Okay, so just [inaudible 00: 26: 18] because I feel like we've had a lot of assistance in this. Kristine is almost magical. I mean, she is so amazing, with her clinical background and just being able to think through what is really going to cause someone to be readmitted. Having someone you can partner with who has that clinical background, can really think through things, and can have that discussion – "What can really cause this person to be readmitted?" –

Levi: Yeah.

Joe: Was super enlightening. And then—

Okay, I don't want to plug too much, but I'm telling you guys, healthcare.ai was so easy. It was so easy. And it was really cool, because at the beginning I was quite nervous and hesitant – eager but hesitant – just because—

Levi: Petrified?

Joe: Petrified, stupefied [inaudible 00: 27: 14]. Is that [inaudible 00: 27: 15]? But it just made it easy. Like, just step by step and—

Levi: You’re too nice, Joe. You’re too nice.

Mike: Yup.

Levi: Right. We've got some chat coming through here. Some people have asked, "Okay, well, you're getting the probabilities coming back for a person's readmission risk – do you notice the top variables?"

Joe: Yes, yup.

Levi: So just for some context, we try to provide some prescriptive context about that person and say not only why they are high risk but what the variables are that are driving that high risk. So have you noticed that? Those three columns at all?

Joe: Yeah, absolutely. So the excess days in acute care comes up a lot, history of readmissions comes up a lot, and I think the other one was—

Mike: You know, that’s funny. In the no-show model, we see a history of no-shows being really important.

Levi: Huh.

Mike: It’s almost like if you do it wrong once, you’re likely to do it wrong again.

Levi: Yeah, very connected.

Joe: Yeah.

But anyways, those are a couple of the tops. It was just kind of interesting.

Levi: Yeah, yeah.

And we're working on new features for healthcare.ai such that, in the future, this guidance will be based on what a clinician can actually modify. So right now, we just show the top three most important factors. But in the future, it will be things they can actually change about the person.

Mike: You can’t make a person younger, I guess.

Levi: Sadly not.

Mike: Medicine hasn’t figured that one out just yet.

Levi: That’s right.

And then another question from the chat was, “How long after admit is this guidance available?”

Joe: After admit? Oh, okay. So the coolest part is – so it's on a delay; our data warehouse is on a lag. The idea is, the patient's admitted Day 1; Day 2, we have results.

Levi: That’s amazing.

Joe: Yeah.

Levi: Wow. So it seems pretty much—

Joe: Oh, but – so let's say their stay is five days, six days, ten days, whatever. We are going to update that prediction score every single day. So let's say a certain level comes in, like, "Hey, glucose level" or whatever it is – then hopefully a provider can go in and do something right there.

Mike: So like new lab results–

Joe: Yeah.

Mike: –will get used in the new predictions?

Joe: Correct. New lab results will be updated every single day. And I've actually followed the prediction scores for a patient every single day, and they do change. It's really interesting.

Levi: That’s fascinating.

Mike: Trending.

Joe: Yeah.

Mike: [inaudible 00: 29: 50]

Levi: Yeah. There you go.

Joe: Yup.

Levi: Beautiful. Fantastic stuff today, guys. Great work.

Kristine, do you have any other comments? We appreciate you joining us. We appreciate your expertise.

Mike: Oh, Kristine can’t hear you.

Levi: That’s right. How about that?

Joe: Kristine, do you have any other comments before we sign off here? It’s been great having you.

Kristine: I don’t. But this has been a great discussion. And I only hope that you guys can glean something from what we shared today. Thanks.

Mike: Certainly. I think we will. And thanks so much for participating.

Joe, thanks a lot. It's been great having you. You've done some awesome work and I'm glad we get to share it with the world [inaudible 00: 30: 33]

Levi: Thanks, Joe.

Joe: Thanks, world. No, I’m just kidding.

Thanks guys. I appreciate it.

Mike: And be sure to subscribe, like the channel. We’ll interact with you on Slack all week. And hope to see you next time.

So thanks everybody.

Levi: Thank you for joining us today. Remember to like, share and subscribe. Comment below and click the links below to join our Slack Channel and our healthcare.ai community.

What topic or projects should we feature?

Let us know what you think would make it great.