Healthy Data Podcast: Michael Kirchhoff (Cooper Health) & Jordan Cooper (InterSystems) - 20240909_140336 - Meeting Recording
September 9, 2024, 6:03PM
23m 47s
Jordan Cooper 0:03
We're here today with Dr. Michael Kirchhoff of Cooper Health.
He is the Chief Innovation and Patient Safety Officer.
Michael, thank you for joining us today.
Kirchhoff, Michael 0:12
Thank you for having me.
Jordan Cooper 0:14
So for those who don't know, Cooper University Healthcare is an academic health system based in Camden, NJ, with 900 physicians and 663 beds.
Today we have two topics that I think will be of interest to many of your peers across the United States, and those topics are metadata and technical debt.
Many organizations have a laundry list of things that they'd like to accomplish, and Dr. Kirchhoff, you've been working to ameliorate your technical debt.
Tell our listeners: what is the breadth of your technical debt? How do you prioritize? How are you tackling this problem in an innovative way over at Cooper?
Kirchhoff, Michael 0:58
Sure.
So, you know, in general, I think a lot of organizations are grappling with technical debt, and really the point of view that I bring, from an innovation and patient safety side, is the safety implications of technical debt.
So we have a great team on the IT side and on the budgetary side of really addressing technical debt and really focusing on making sure that our life cycle plans make sense and are keeping up with the latest technology.
I have no complaints about our team there.
They really do a fantastic job, and our co-CEOs, especially Dr. Mazzarelli, are really focused on making sure that we're life-cycling not only our applications but also our hardware, that we're really focusing on that as an organization.
But what we talked about earlier is that there's this other side of technical debt that I see, looking at patient safety, and that I hear about from other organizations when we talk not only about technology but also about patient safety.
And that's specifically: there is another impact of technical debt beyond the fact that I just have an application that's not being supported anymore, right?
That’s what everybody knows about, or I have a piece of hardware that is now out of warranty and I can’t get it serviced and now I suddenly have a large capital outlay.
That’s the technical debt most people are familiar with, and I think a lot of healthcare organizations are really understanding that they need to focus on that and understand that they need to address that.
But there are other implications, and that's really what I wanted to talk to your listeners and viewers about today. There are other impacts besides the bottom line, besides the money we need to spend to keep our systems up to date, our applications running properly, and our hardware safe from cyberattack and still within warranty.
And that is: when I postpone doing some work,
what are the implications for workflows and patient flows and patient information?
And I think that’s the thing that a lot of people lose focus of.
Jordan Cooper 3:05
Right.
So the main mission of a hospital or healthcare delivery system is to provide care; increasingly, with the transition to value-based care, it's also to keep patient populations well and away from duplicative or potentially avoidable care. So could you walk our listeners through a concrete use case or anecdote from the last year or two, when you've had conversations with leadership and with clinicians on your team about the patient safety implications of technical debt, and how that played out and affected the prioritization of reducing that technical debt?
Kirchhoff, Michael 3:45
Sure.
So a lot of times what happens is: if you're going to incur technical debt, there are the things we just talked about, but there are also latent errors built into it. So for example, you're expanding as an organization, and your electronic health record has a facility structure, and there's a lot of overhead associated with that facility structure. We need to build it out and test it, but then there are also all the other dependencies on that facility structure: all the downstream systems, right? I don't think anybody has a single best-of-breed system; we all have legacy systems that our EMRs are interfacing with, with any changes in facility structure, and then there's the metadata associated with what's going on with patient flow or place of care, et cetera.
All of those require quite a bit of overhead as far as testing and validation, and sometimes when organizations grow, they want to get those new spaces up and running, and they may not.
They either may not do the testing, which is a big mistake, or, more often than not, they say, well, let's use existing structures, because I can't afford to do the testing.
And I realize that testing is important. So now you've introduced latent errors into the system: the technical debt I've taken on is not building out the representation of my organization appropriately, and that may have implications later on.
So for example, many organizations have disaster plans, and in those disaster plans they've built disaster beds into their facility structure. Now say we have a sudden need for new space.
So an organization may consider using some of those beds designated for surge plans as patient care areas. But the point is, there are downstream dependencies related to that. A classic patient safety example would be:
I've used existing facility structure that calls the bed, you know, surge bed XYZ,
and now there's a code blue, a medical emergency, in that bed. That goes to our downstream legacy systems, to our operators, who call an overhead page, and now they're paging overhead to surge space XYZ as opposed to a physical location that clinicians know about.
So by kicking the can down the road on building out facility structures and doing the downstream testing, et cetera, you introduce latent errors into the system that may have implications later on.
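[Editor's note: a minimal Python sketch of the latent error just described. All bed IDs, table layouts, and field names are hypothetical, invented for illustration; the point is only that a reused surge bed with no physical-location metadata makes the overhead page resolve to a label no responding clinician recognizes.]

```python
# Hypothetical illustration of the overhead-paging example above. The bed IDs
# and fields are invented for this sketch, not taken from any real EHR.

FACILITY_STRUCTURE = {
    # A surge bed pressed into regular service without a physical-location build-out:
    "SURGE-XYZ": {"unit": "Surge Plan B", "physical_location": None},
    "MED-412": {"unit": "4 West", "physical_location": "4 West, Room 412"},
}

def overhead_page_target(bed_id: str) -> str:
    """Resolve a bed ID to the location the operators would announce."""
    bed = FACILITY_STRUCTURE.get(bed_id)
    if bed is None or not bed["physical_location"]:
        # The latent error: the page falls back to an internal label that
        # tells responding clinicians nothing about where to go.
        return f"UNKNOWN LOCATION ({bed_id})"
    return bed["physical_location"]

def audit_paging(beds: dict) -> list:
    """Pre-go-live check: every bed that can hold a patient must resolve."""
    return [b for b in beds if overhead_page_target(b).startswith("UNKNOWN")]

print(audit_paging(FACILITY_STRUCTURE))  # ['SURGE-XYZ'] -> fix before admitting patients
```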
Kirchhoff, Michael 6:19
And in our own organization, as we identify those, we go to leadership and say, hey, you missed this, and we go back and fix it. But when you're initially thinking about the total cost of ownership of expanding an organization and expanding the facility, you really need to think about the other costs associated with that, not just the PCs or the monitors I'm putting in that space.
It's also really looking at the facility structure and how it interfaces with all our downstream systems, et cetera. And that's something we on the safety side have been drawing a lot of attention to, so that we don't fall into that risk and introduce those latent errors into our system, and so we can protect our patients in the future
when that information then flows to the other downstream systems.
Jordan Cooper 7:08
Yeah. And you also mentioned not only the integration of physical and digital, but also issues purely within the digital. For example, when organizations are going through an application rationalization and mapping old metadata to new metadata, the lookup tables don't work anymore, in some cases.
Kirchhoff, Michael 7:26
Yeah, that's a classic example.
I know we were gonna talk both about technical debt and metadata.
This is where the two really meet, right?
Because with all these transactions there is the metadata, the physical location and the virtual location, and those have to align.
And sometimes, for example, we build a new tower and now we've built out all this new facility structure. We still have the old facility structure, and someone has to go back, rationalize that metadata, and make sure it all aligns. Because if you don't fix those, you potentially introduce latent errors, and those latent errors might be as dangerous as sending somebody to the wrong location for a code.
Or they might be more insidious.
For example, the accommodation code associated with that facility structure may come across as med-surg when in reality it's a step-down or critical care space. Now you're missing out on that facility charge, because the metadata on the accommodation code is different and somebody forgot to fix it.
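[Editor's note: the "insidious" billing case above lends itself to a nightly reconciliation job. A sketch, again with invented codes and bed IDs: flag beds where the accommodation code carried over from a facility-structure migration disagrees with the level of care actually documented.]

```python
# Hypothetical reconciliation of accommodation codes after a facility-structure
# migration. All codes and bed IDs are invented for illustration.

OLD_ACCOMMODATION = {"TOWER-501": "MEDSURG", "TOWER-502": "MEDSURG"}
NEW_ACCOMMODATION = {"TOWER-501": "STEPDOWN", "TOWER-502": "MEDSURG"}
# Level of care actually documented by nursing for the day:
DOCUMENTED_CARE = {"TOWER-501": "STEPDOWN", "TOWER-502": "STEPDOWN"}

def reconcile():
    issues = []
    for bed, documented in DOCUMENTED_CARE.items():
        # Billing falls back to the old metadata if the new build-out missed the bed.
        billed = NEW_ACCOMMODATION.get(bed, OLD_ACCOMMODATION.get(bed))
        if billed != documented:
            # e.g. step-down care billed at the med-surg rate: a lost facility charge
            issues.append((bed, billed, documented))
    return issues

for bed, billed, documented in reconcile():
    print(f"{bed}: billed as {billed}, documented as {documented} -- review metadata")
```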
Kirchhoff, Michael 8:30
So there's a lot of cost to rationalization and growing, and part of that is the safety component and the metadata associated with it.
But it's also understanding the total cost of ownership of these migration strategies, and making sure you have the time and resources allocated to go back, look at all that legacy stuff, and do the appropriate stress testing, so that information flows to where it needs to flow,
billing is proper, and those physical locations match up with the virtual locations in the system.
Jordan Cooper 9:03
Ah. So it sounds like, for any of our listeners persuaded by this awareness of application rationalization and the implications technical debt has on patient safety, a lot of governance and processes and testing are what they need to begin implementing in order to account for these risks.
Kirchhoff, Michael 9:27
Yeah.
What it comes down to is just best industry practice, right? Your information technology department really needs good stakeholder analysis, good project management, good communication skills. It's all stuff that everybody who's listening knows about. But what a lot of teams fail to realize is the extent of it, and how many resources you actually need to do a full stakeholder analysis and a full risk analysis of these migrations, to make sure that you've looked at all those downstream systems, and not just that the information flows,
but that when it pops up at the other end, the data makes sense to that end user. Do they know where to go? If you forget to look at your operators, now you're pushing them physical locations that no longer make sense; or you're pushing it to your coders and your billing department, and those accommodation codes don't make sense,
and you're going to miss out.
Jordan Cooper 10:22
So I did mention just a moment ago metadata, which is the second topic that we're going to cover today; I mentioned mappings between lookup tables breaking when you map from old metadata to new metadata. The topic of metadata itself is becoming of increasing interest to many healthcare delivery systems as gen AI rises in prominence. In particular, when healthcare delivery systems are looking to integrate gen AI, they often need to train models on their own data of different kinds, whether it be PHI or other sorts of metadata. So today we're going to discuss what data governance needs to happen to ensure that you have the right metadata to train your gen AI models, what the best practices were in the past, and how they're evolving now.
Kirchhoff, Michael 11:26
Yeah.
So this is a hard question to answer. Obviously there are the broader concepts, like bias in the data, et cetera, and healthcare inequalities when you're deploying these systems, and that's where that data really shines. But the other part is, the simple answer is: you don't know. Because when we look at these systems, and we're looking at the emergent characteristics of the hidden layers, a lot of times we don't really understand what features the system is keying on to make that generative decision. And for humans, a lot of times, the diagnosis makes sense, right? But it may turn out that location plays a role there.
So when we're thinking about the data and the metadata you're going to use in training systems, the more data the better, right?
We're seeing that across the industry with the bigger generative systems: the bigger the data set, the better; the richer the data, the better; and the richer the context of that data, specifically the metadata, the better.
So maybe it's not just patient demographics and diagnosis that get me where I need to go; maybe the system, at the hidden layer, is keying on location data, time-of-day data, et cetera.
So if you're not saving that data and putting it into your training datasets, you may be missing out on that. And what further confounds this is that you often don't know how the original base model was trained. Was the base model trained with that metadata or not? Thus, is introducing that metadata aligned with the previous training regimen or not?
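[Editor's note: a small sketch of the "save the metadata" point, assuming a hypothetical encounter export: keep the contextual columns (unit, time of day) alongside the clinically obvious features rather than dropping them at export time. Column names are invented.]

```python
import pandas as pd

# Hypothetical encounter export. The temptation is to keep only the columns
# with obvious clinical face validity and drop the contextual rest.
encounters = pd.DataFrame({
    "age": [67, 54],
    "diagnosis_code": ["A41.9", "J18.9"],
    "unit": ["ED", "4W"],
    "event_time": pd.to_datetime(["2024-01-05 02:10", "2024-01-05 14:40"]),
})

clinical_only = encounters[["age", "diagnosis_code"]]  # what often gets saved

# Keeping the metadata lets a model discover context effects (unit, time of
# day) that a human reviewer might never key on.
with_metadata = encounters.assign(hour_of_day=encounters["event_time"].dt.hour)
print(with_metadata)
```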
Kirchhoff, Michael 13:12
And then, when there's a mismatch between the data set of the initial training and the fine-tuning, how does that affect the emergent features in the latent space? How does that affect the decision making, the weights? That's hard to know.
So it's not only important to have this data because it helps the AI key on items that the human observer may not key on, and gets you more robust answers; it also has implications for off-the-shelf AI that you're now training. What was the training regimen initially? What was that data? What was the metadata, and do those things align? And then there's the bigger 50,000-foot view: what does this data have to do with bias and all the other things we're really focused on in healthcare, so much so that the Joint Commission is starting to focus on this, and the FDA, et cetera.
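[Editor's note: one way to make the alignment question concrete, assuming you can obtain the feature schema used in the vendor's initial training. Both schemas below are invented; the idea is simply to diff them against your fine-tuning columns before you start.]

```python
# Hypothetical comparison of the metadata present at initial training vs. the
# metadata you plan to fine-tune with. Neither schema comes from a real vendor.

BASE_MODEL_FEATURES = {"age", "sex", "diagnosis_code", "vital_signs"}
FINE_TUNE_FEATURES = {"age", "sex", "diagnosis_code", "vital_signs",
                      "unit", "hour_of_day"}  # new contextual metadata

only_in_fine_tune = FINE_TUNE_FEATURES - BASE_MODEL_FEATURES
only_in_base = BASE_MODEL_FEATURES - FINE_TUNE_FEATURES

# A non-empty difference doesn't forbid fine-tuning, but it is exactly the
# mismatch described above, which can shift the learned features in
# hard-to-predict ways.
print("Metadata added at fine-tuning:", only_in_fine_tune)
print("Metadata the base model expected but we lack:", only_in_base)
```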
Kirchhoff, Michael 14:06
You know, bias in these generative systems is a hot-button topic, and the metadata, understanding the context of the data and of the person who is part of that data set, is super, super important for understanding bias and outcomes.
Jordan Cooper 14:20
Yes.
You know, some of the examples you provide remind me of classic public health case studies. For example, I think in the 1970s there was not clear evidence that both systolic and diastolic blood pressure were worth measuring, that there was meaning to the lower number, and so we didn't use to collect that information. It was only later, I think Professor Michael Klag from Johns Hopkins found, that, hey, both numbers actually do have value and meaning, and gosh, we wished we'd been collecting both numbers for a long time so they could serve future discoveries. Another one that's kind of fun to talk about: before the clean indoor air acts of the late 20th century, coffee drinking was an indicator of lung cancer, because it was correlated with smoking. People used to have a cigarette with their coffee, and so it may have been helpful to predict lung cancer by capturing who's a coffee drinker. But of course, that's the sort of thing that just seems so irrelevant.
There's no causal relation; why would I care about coffee if I want to study lung cancer? So just take that as an analogy for today, 60 years later, with different kinds of contextual data and data elements: what might be relevant and predictive, either for a patient's health or for something that could help train a gen AI model.
Kirchhoff, Michael 15:50
Yeah.
And this is key in the emergent characteristics of the model. The human, again, may think a data point is irrelevant to the outcome from a face-validity perspective, but it turns out it actually is important to the model, and that makes interpretability research very difficult.
It makes explainability very difficult. So it's a fine balance: the more interpretable and explainable I want a model to be, the more I'm going to give it data that has face validity to a human observer, where it makes sense that these things are associated or correlated.
But it turns out more data density, other metadata, may actually help the system become more accurate. The converse is also true: sometimes that extra data density trains the system toward worse outcomes. So again, on the interpretability side and the face validity side,
it can cut both ways, and it really speaks to the need to do this type of research.
Jordan Cooper 16:53
So metadata can provide a kind of audit, if there have been problems interpreting past data.
Kirchhoff, Michael 17:03
Yep, sometimes. But it also may provide a more data-rich environment for the model to key on a characteristic of the data set that a human wouldn't necessarily pick up on, and potentially make it more accurate.
It’s hard to say.
Jordan Cooper 17:18
Before we wrap up, I'd love to contextualize this conversation about metadata in your work with sepsis. I think Cooper is a national leader in sepsis care, and I think you have an interesting story about metadata and sepsis. Is that right?
Kirchhoff, Michael 17:35
Yeah.
So early on, when we were doing a lot of this early work in understanding sepsis, we were collecting what had face validity: what's your heart rate this hour, what's your heart rate the next hour, what's your blood pressure this hour, what's your blood pressure the next hour. And it turns out that overall trend is really, really important. But when you want to start to create models that predict earlier and earlier, the amount of variation minute to minute, second to second, is actually a stronger predictor of patients who are going to deteriorate, ahead of the obvious deterioration of high heart rate and low blood pressure.
Once you have a high heart rate and low blood pressure, you're really behind the eight ball, so picking up that decrease in variation earlier on is very important.
So when we did a lot of our initial early studies, we were capturing heart rate and blood pressure at the bedside minute to minute, second to second, but in the data sets that we exported, we kept it every hour. We did some great research in that space, but then, when we started to learn that this data at smaller and smaller time intervals, that variability, could be predictive, we couldn't go back and redo those studies, because we had just jettisoned all that other data that would have been potentially helpful.
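[Editor's note: a sketch, on synthetic data, of exactly what an hourly export throws away. The hourly mean of second-level heart rate looks flat and reassuring, while the discarded second-to-second variability carries the shrinking-variation signal described above.]

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
seconds = pd.date_range("2024-01-01", periods=6 * 3600, freq="s")  # six hours

# Synthetic second-level heart rate: stable mean, but beat-to-beat variability
# shrinking over time -- the early-deterioration signal described above.
shrinking_noise = np.linspace(5.0, 0.5, len(seconds))
hr = pd.Series(90 + rng.normal(0, 1, len(seconds)) * shrinking_noise, index=seconds)

hourly_mean = hr.resample("1h").mean()  # what the old exports kept
hourly_std = hr.resample("1h").std()    # what they discarded

print(hourly_mean.round(1).tolist())  # nearly flat: looks reassuring
print(hourly_std.round(2).tolist())   # falling variability: the earlier warning sign
```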
Kirchhoff, Michael 18:53
Now, the issue there is that back in the day it was very expensive to store all this data; that's getting cheaper and easier to do.
I would always make the argument: when it's economically feasible, just collect as much data, with as much density, as you can, because you never know when it's going to be helpful.
I wish we had that second-to-second heart rate data, because we could have done a lot more research. We ultimately went on to do some of that work, but for the initial work, the data set didn't support some of the new findings people were getting out of that data, which was that second-to-second variation.
Jordan Cooper 19:24
And when it comes to sepsis, even getting notification 20 minutes or an hour earlier can make a lot of difference, right?
Kirchhoff, Michael 19:40
Yeah.
Jordan Cooper 19:41
So how would you advise health systems listening to this to balance the cost of just collecting a bunch of data? Because, by the way, if it's a lot of information that's never interpreted, it's just garbage; you can lose the meaning of the data if you're collecting everything all the time. How do you advise an organization to balance cost against future potential value? How would you do that?
Kirchhoff, Michael 20:15
Yeah.
No, that's a really good point, because the classic counterexample here is IoT, right? If I have my watch collecting a bunch of accelerometer data and heart rate data, but it doesn't know the difference between when I'm sleeping and when I'm awake, or when it's off and when it's on, et cetera, then there's a lot of garbage in that data, and despite the great data density, it might not be helpful.
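[Editor's note: a sketch of the wearable "garbage despite density" problem, using an assumed, unvalidated heuristic: treat near-zero accelerometer variance plus an implausible heart rate as non-wear and filter it out before analysis. Thresholds and data are invented.]

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
t = pd.date_range("2024-01-01", periods=240, freq="min")
df = pd.DataFrame({
    # Last hour simulates the watch sitting off the wrist: no motion, no pulse.
    "accel_std": np.r_[rng.uniform(0.2, 1.0, 180), np.full(60, 0.001)],
    "heart_rate": np.r_[rng.normal(72, 8, 180), np.zeros(60)],
}, index=t)

# Heuristic wear detection -- thresholds are illustrative, not validated.
worn = (df["accel_std"] > 0.01) & df["heart_rate"].between(25, 220)
clean = df[worn]

print(f"kept {len(clean)} of {len(df)} minutes; discarded likely non-wear garbage")
```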
Kirchhoff, Michael 20:36
So it's the same as the technical debt discussion.
Good stakeholder analysis. Getting the right subject matter experts in there to talk about it. Knowing what the latest literature shows in terms of how these data points may relate to the outcome you're interested in. And then balancing that against the total cost of curating the data. To your point: does the data make sense?
Is the data clean, or is there garbage? There's a cost related to that, either technical or human, to look at and validate that data, and then there's the ultimate cost of storing and manipulating that data. So it's understanding the need and the potential benefits of more metadata and data density, matching that against the total cost of doing that work, and then having really good project management, getting the right stakeholders at the table, and having that conversation, because in the past we just never had that conversation.
So my recommendation would be just have the conversation so at least you know what you risk by not saving all the data.
Jordan Cooper 21:34
I appreciate your time with us today, Michael. We covered a lot of information, from technical debt to metadata. You've kind of already addressed this, but if you could speak to the Michael Kirchhoff of four years ago, about considering patient safety with regard to technical debt and reviewing metadata, what would you tell yourself?
Kirchhoff, Michael 22:11
To really think about the implications of it. What I find myself doing a lot of now is looking at the implications of this data as it pertains to bias and equality and access in healthcare. Because ultimately we're charged with the safekeeping and the health of our communities, and knowing how whatever tool we're applying applies to our particular communities, their demographics, and their data is something I think every health system struggles with. And it's a big item for us to keep an eye on, because it's our mandate, right? We need to care for those in our communities, and those communities vary from place to place and time to time. Knowing how systems may or may not meet the needs of a particular community is something we're focusing on a lot now, because in the past many vendors did not focus on that, and we find we're playing a little catch-up right now. Knowing that back in the day, and really asking those questions, especially of the vendors who were creating the systems, to make sure they have inclusive data that is representative
of our patient population, and that that data is free of biases: those were not frequently asked questions three or five years ago.
Those are questions we ask every time now.
Jordan Cooper 23:37
For our listeners, this has been Dr. Michael Kirchhoff, the Chief Innovation and Patient Safety Officer at Cooper Health.
Michael, thank you so much for joining us today.