This session covers the groundbreaking potential of AI in patient care and healthcare operations and provides valuable insights into balancing innovation with integrity and patient safety in the evolving landscape of healthcare.
My name is Rachel Bardwell. I am an internal medicine physician here at Barnes Jewish Hospital and a member of the symposium planning committee. We are so pleased to be back in person, and thank you so much for joining us. I would like to thank Barnes Jewish Hospital, WashU Medicine, Saint Louis Children's Hospital, BJC Healthcare, and the Foundation for Barnes Jewish Hospital for their continuing support, which makes today's symposium possible. Before I bring Doctor Henderson up to kick off the day, I'm going to review a few reminders. To receive your continuing education credits, you are required to complete all the surveys at the end and to register by 8:30 this morning at one of the tables. In terms of disclosures, there are no relevant financial relationships identified for any member of the planning committee or any presenter or author today. Thank you to the BJH Center for Practice Excellence for providing today's CNE. You can download the meeting app on your smartphone; you should have been given a card or a sheet at sign-in with instructions. All participants can access the symposium surveys via the meeting app website, and remember, you have to fill out all the surveys to get that continuing education credit. We congratulate all 31 abstract authors, and we encourage you to stop by the 28 posters in Great Room B; later today we'll hear from three oral presentations. As a reminder, we will be giving away attendance prizes. Collect 10 stickers from the poster authors in Great Rooms A and B to be eligible, and you must be present to win, so stay all day with us. We have some really good attendance prizes this year; if you caught the pre-show slideshow, you saw there are some great things, including Cardinals tickets, a massage, and a night at a hotel.
I wanted to call out Patient Safety Awareness Week, March 9th through 15th. Patient safety takes the whole team, so thank you to all of our team members present today for being vigilant for patient safety. Here are our Quest winners for 2024, and as a reminder, submissions for 2025 open in June. Again, to get your CE credit, fill out all the surveys. To start off the day, I'm going to invite Doctor Henderson up. She's the Chief Medical Officer for Barnes Jewish Hospital. Welcome again. Good morning and welcome. We appreciate the opportunity to collaborate and share information and ideas with our colleagues across many healthcare environments. Since the inaugural 2011 symposium (so we've been doing this for a few years now), we've explored a variety of topics, including high reliability principles, safety culture, health across the continuum, the electronic medical record, and organizational transformation, just to name a few. Today we will delve into artificial intelligence, and although it's just the tip of the iceberg, it is perhaps the beginning of understanding how AI can support our ultimate goal of improving patient outcomes and quality of care. We look forward to learning from each speaker as they share their wisdom, experiences, and expertise to move us forward on our journey to high reliability care. We thank you for attending and participating and hope you enjoy the day. Before we get started, this is the Doctor Clay Dunagan Safety and Quality Symposium, and I want Clay to come up to be recognized; he's going to help us kick off the introductions. I see a lot of folks in this audience, myself included, who have benefited from Clay's wisdom and knowledge over the years. Clay served as chief clinical officer for BJC Healthcare for many years until his retirement, which I think was April of 2023. Is that right?
Seems like just yesterday. To honor Clay and all of the work he's done in the arena of safety, quality, and patient experience, and through a generous gift from the BJH Foundation, the symposium was renamed in his honor. So I would like to thank you for sharing your wisdom and experience over the years and invite you up to the podium. Thank you. Good morning. It's a pleasure to be here. When the planning committee first approached me about speaking, my first instinct was to say no. Then I started thinking about the kinds of questions you might be asked by your colleagues after they found out you came to this, and I thought one of them was probably, "Is Doctor Dunagan still alive?" So I figured I could solve that by just standing up here and saying a few words, although we'll see. Let me start by saying how incredibly honored I am to have this symposium bear my name. I think anyone who knows me knows how passionate I am about the safety and quality of healthcare, and this symposium, for more than a decade and a half, has been bringing forward innovative ideas and inspiration for all of us who labor to make care better. This year's theme, Unlocking AI to Improve Healthcare, is particularly timely and, I think, going to be a great addition to the sequence of talks we've had. I was talking to my wife about the idea of AI, telling her what the conference was going to be about, and I joked that it was probably not much of a revelation to her, because she's been saying for years that I'm a great example of artificial intelligence in healthcare. Undaunted, I put that into my favorite large language model, and it came back with the shortest reply I've ever seen: "Not funny."
Lame AI jokes aside, I do think that artificial intelligence holds tremendous promise for further ways we can improve healthcare and make it safer for those we take care of, and I'm very eager to get into today's discussions, as I hope you are. With that, I'm going to hand this over to my friend and colleague, Philip Payne. Philip is the associate dean for health information and data science for Washington University, the chief data scientist for the medical school, and the founding director of the Institute for Health Care Informatics. He's also a professor in the Department of Medicine's Division of General Medicine and Geriatrics. And I'll add that Philip was very important in setting the theme for today and helping us identify the speakers, so I want to add my thanks for his leadership and collaboration. Over to you, Phil. I do have to say, before I start with the introduction of our speaker, that while I love that this conference is named after Clay to recognize his contributions, I can actually take that to the next level. When Clay was looking for a place for an office after he retired, I decided to have him move in two doors down the hall from me. So instead of just having one day, I get all the days to walk down the hall, ask Clay questions, and benefit from his wisdom. So thank you, Clay. I have the pleasure today of introducing our first speaker, Doctor Genevieve Melton-Meaux, who is a great friend and colleague of mine, and I could spend a lot of time telling embarrassing stories, but I will not. Genevieve is a professor of surgery and health informatics, as well as senior associate dean for health informatics and data science, director of the Center for Learning Health System Sciences, and associate director of the clinical NLP program at the University of Minnesota, making her one of the very few people who I think has more titles than I do.
So I really like that for this introduction. She's also on the executive leadership team of the University of Minnesota-wide Data Science Initiative. Doctor Melton-Meaux is a practicing colorectal surgeon and serves as the Chief Health Informatics and AI Officer for Fairview Health Services, an integrated healthcare delivery system based in the Twin Cities. In that role, she leads health informatics functions that leverage the EHR, health information technology, and data-driven decision making, as well as Fairview's evidence-based care program. In leading that system-level AI program, she is advancing the use of AI and machine learning capabilities and solutions that will provide new opportunities, value, and care transformation in a safe and compliant manner. In addition to all of those roles, Doctor Melton-Meaux has an incredible research portfolio. She has generated over 300 peer-reviewed articles, book chapters, and editorials; has extramural funding through the NIH, AHRQ, PCORI, and the FDA; and her interests include surgical informatics, improving note usage in EHRs, evaluating technology solutions in practice, advancing learning health system capabilities, the generation of real-world evidence, and clinical natural language processing. I think it's also important to note that Doctor Melton-Meaux's profile is not only local and regional but national and global. She's an elected member of the National Academy of Medicine, a fellow of the American Surgical Association and the American College of Medical Informatics, the immediate past president of the American College of Medical Informatics, and the current president and chair of the board of the American Medical Informatics Association, which actually makes her one of my bosses in my own job. So, with that, I will turn the lectern over to Genevieve. All right, good morning, everyone.
This is an amazing crowd, and I was saying as I came in that we need to get something like this where we're at, at M Health Fairview and the University of Minnesota. It's a great testament to the culture of this organization. All right. I have no disclosures, and I just want to underline, Doctor Dunagan, what amazing leadership; thank you for establishing and propelling a culture of patient safety, quality care, and evidence-based care in a consistent fashion. This is an exciting topic, near and dear to my heart. I will say we are on a road right now, and the question is, what road are we on? There are two ways to think about it. I hope to convince you over the course of this talk that we're not on the road to hell. That was, I think, the theme of a previous year's symposium, and hopefully we can think about that foundation so that we can be more reliable in this work. I put this slide up here because I like to joke that I'm on my third or fourth avatar, and it's really to say, and I know there might be trainees in the audience and a diversity of folks here, maybe in quality improvement, that I've taken a circuitous route to a career. I think healthcare is still one of the most interesting places to do your work. It's incredibly meaningful, and there are lots of ways you can apply and bring your talents. I was very fortunate in many ways: I started in data science, went to medical school because I never wanted to sit behind a cubicle again, discovered the field of informatics, found different ways to bring that about, and had a period of time where I was very focused on operations.
I'll talk about the current avatar, learning health systems, toward the end of the talk, and then of course focus a lot on AI. I'm going to submit that we're in the middle of a transformation. A lot of people are going to say it's AI, but I think it's more broadly something called a digital transformation. People hate that term, but I think it's the broader transformation we're on, and it is going to disrupt, it is disrupting, healthcare, and it is already disrupting our lives. When we think about digital health disruption, think about our lives right now: Alexa, self-driving cars, the way we do retail today, completely transformed from a decade ago. Some people talk about the SMAC stack: social, mobile, analytics, cloud. We changed up the SMAC stack to AI and automation, just because that's more of the emphasis right now. And when we think about digital health disruption, it is these different things. We'll talk a bit about how we maybe didn't do it too well with electronic health records, and we're still recovering from that. When we think about what it's going to mean to bring in even more technology, and how quickly things are moving, AI brings us yet another opportunity, hopefully, to do it better and to continue to work on this. I'll talk a bit about how we've started to think about this, and notice I put AI and automation together, because these are chains of tools, not just one tool. Thinking about hammers and nails, it's not just the technology: digital doesn't work without people, process, and all the things we think about in quality and safety when we think about how to make healthcare better. So there is a convergence right now with AI, and it is measurably different, and a lot of things are bringing this about. Has everybody heard about Stargate?
Just to say, there's going to be more investment and more computation coming, though there are also suggestions that maybe you don't need as much computation for some of these things. The algorithms are increasingly getting better. You can see that as we go forward in time, the ability to do certain tasks is clearly heading toward, and past, human performance in many cases, and some of these probably should be way over human performance, because how much we can understand and process is limited, as much as we think we know. And then when you look at the different types of FDA approvals for software as a medical device, the ones toward categories 3 and 4 are the more regulated ones, and I'll talk a bit about some of these approval processes that try to make sure we're doing things safely. You can see this acceleration; unfortunately I don't have the 2024 number, but the number of approved devices truly is accelerating. I'm going to talk a bit about some of the shortcomings of what goes into FDA approval, but that is certainly something we're working on as a whole. The other big part, the elephant in the room: something has changed in what's going on with AI, and we all know it, right? ChatGPT. Generative AI is a game changer, and there are many reasons for that, but outside of the fact that it's costing a lot of money, with a lot of data centers being created out there, it really is democratizing the use of AI. Before ChatGPT and all these other generative AI tools, it was only the special data science geeks doing AI. Anybody can do it now.
And you can do really powerful things, which is both a strength and, if we're going to use it robustly, or autonomously versus with a human in the loop and checks and balances in between, something that can create issues. But it really is something different. When we think about what's going on in healthcare, there are two elements to it. AI clearly can improve things in many ways, and in fact, I'm going to submit as we go forward that this can really be an assistant to us, something that makes us a better us, if done well. The flip side, and you can see some examples here, is that healthcare right now is extremely resource constrained, and we're really being forced to confront that. On workforce alone, we're about to go off a demographic cliff: we're not going to have enough nurses, we're not going to have enough providers, and we need different solutions to do this well. So it is an interesting convergence directly in healthcare. OK, so these digital assistants. I'm going to underline this, and this happened to us over the past year and a half. We had an automation program because we realized these tools were continuing to advance, and then a little over a year ago we started our AI program, set up our AI governance in our health system, and created my role as chief AI officer. But we realized it wasn't enough to think about AI alone. We had to do AI and automation together, because these tools tend to overlap a bit, and we found we could talk and think about the solutions a lot better, especially within our IT department and within the business. When we talk about tools, hammers and nails, it really helped us to bring these together.
So these digital assistants: think about your car. Something as simple as cruise control is an automation of some sort, but these are things that help us react better and make decisions better. Right now we're talking about assistantship, but as you go forward, maybe it's acting a little more on its own, and the robot is on that slide on purpose; that might be a little more autonomous. We have examples where we haven't done it well, and this is a case in point. This is also going to change the way we practice. We depend on these solutions more and more. Has anybody ever been through a downtime for the EHR? And what happens, right? So there are a lot of things we have to think about as these digital solutions and AI-enabled solutions come further into our healthcare system. It also requires additional skills within our technology teams, within our patient safety and quality teams, and, I would submit, within our operational and medical leadership. The idea of having AI literacy and data literacy is really important. So one thing we have put into place over the last year, with our first cohort hopefully launching in April, is our digital ambassador program. We've had citizen developer programs, but our digital ambassador program is a combination of data literacy, AI literacy, and design thinking, to help folks engage with these solutions and think about how we can upskill people. The same goes for systems engineering, implementation science, and human factors, and I would say quality improvement consultants are really important in this. These systems, and healthcare itself, are incredibly complex.
So again, just think about these different pieces and parts and the role quality is going to need to play in this. The other culturally important part is the clinician sitting in the middle, thinking about how our daily interactions may change. A nurse working with a patient who has AI, or a digital solution, next to them: this gets right in the middle of that really important relationship. How do we talk to our patients about this? How do we maintain that trust, which is so vital? This has become a real point of contention. This was the first glimmer that nurses are making this a really important issue, and there's good reason for it. On the other hand, we're going to need nurses. I think the fear is that people are going to lose their jobs. Jobs are going to change, but we need nurses, and nurses are key and central. Just to underline it, this was actually put into a recent labor contract: the new contract includes protections so that nurses are involved in decisions about AI that's put into care. The other big part of this is the need for more roles to help with these things. One example is leadership, such as my role or Philip's role around AI, and I'll go into some detail about the things organizations may need to build. Skills, I think, continue to be really important. Although I did say you don't need as many pointy-headed machine learning folks, you do need some.
And you need folks who can guide the frontline people engaging with these solutions, especially if you want to deploy a solution not just for your own personal use, or a small group's use, but across your organization; it's really important to have these skills in place. For our overall AI program and roadmap, just to give you an idea, we've tried to line our strategy up at the very top with the overall organizational strategy, as well as our digital and IT strategy, and the different pieces we're trying to build, including foundational technology capabilities. We have a roadmap, which I'm not going to go through in any detail here, but we're really trying to think about the different components of what we need to build. Another way we're thinking about it is this idea of a learning mindset: being able to do proofs of concept, thinking about how you truly deploy something, picking your technology partners carefully, and engaging the front line. We do have several groups helping us at the front line around different problems we're trying to solve, and we're trying to orient the work we do around value; I'll show that some of it isn't glamorous or exciting, but it's stuff a healthcare system needs to do. And as we think about our governance and our work, we're trying not to learn on our own. We are part of several groups of experts, and we're learning from one another. I list a couple here: Valid AI is one we've joined, and we're also part of the IHI collaborative this year. So, governance. Just to underline this: it was one of the first things we set up, and we continue to evolve it, so that we're being responsible in the work we do.
One of the comments we often get is that governance is slow, so we're trying really hard to have a fast path for things, building up patterns so that the next time something similar comes along, we can work through it quicker. That's one of the things we're still working through quite a bit. Just to give you an idea of how our governance works, we have two bodies. One is steering, which is really about managing the portfolio, having the financial pieces in place, defining the right things to work on, and blessing the various initiatives. Then we have the oversight committee, which is where we actually started, and I'm very grateful we did. Think of it like an IRB or another body that's really trying to think about best practices. We have an ethicist on it, our chief quality officer, our head of diversity, equity, and inclusion, and legal and compliance, so it's a set of folks trying to be thoughtful in the work we're doing and to be somewhat independent in that. We also have leaders from shared clinical services and others who are making decisions at a relatively high level and helping us steer. The other concept we have is tiger teams: groups of folks who might be in operations, sometimes in quality and safety, and the technology groups, focusing on a problem and working through it. This really just underlines the work we're trying to do to derive value. I wish there wasn't always this hard-ROI question. Our health system is financially constrained, and that has been extremely important, so we have been putting that rigor in, whether we like it or not.
I think it's important as a whole; it's the right discipline. But it does come up quite a bit, because we sometimes invest on the hard ROI before the soft ROI. OK, so I'm going to talk a bit about some of the expert groups that are out there. CHAI, the Coalition for Health AI, is one you might want to think about. They have really been espousing and pushing forward this idea of model cards. That's the idea of something almost like the label you would see for a drug: here is what it's indicated for; here are the adverse events that might be associated with it; it interacts with these medications; here's how well you can expect it to work, that type of thing. They've also started to think a bit about the life cycle around this. CHAI, along with several experts, has also pushed for this idea of health AI assurance labs, which in theory would address the monitoring phase of AI. What they proposed was a public-private partnership: a network of organizations certified in various ways. There are some issues with this. It's a great conceptual idea, and certifying a technology up front and then never monitoring it is exactly the gap this was trying to fill, but there are several challenges. How does this work when you put it in the real world? How does it work for a small healthcare system? How does it work if you're a healthcare system certifying yourself? And how do you actually share the data? So logistically we have stuff to work out, but people are thinking about this and understand that it's important to do so.
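To make the model-card idea concrete, here is a minimal, hypothetical sketch in Python. The field names are illustrative only (they are not CHAI's actual schema), and the model, population, and performance numbers are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Drug-label-style summary for an AI model (illustrative fields only)."""
    name: str
    intended_use: str             # what the model is "indicated for"
    out_of_scope: list            # "contraindications": uses it was not validated for
    training_population: str      # who the model was developed on
    reported_auroc: float         # "here's how well you can expect it to work"
    known_limitations: list = field(default_factory=list)

    def summary(self) -> str:
        return f"{self.name}: indicated for {self.intended_use} (AUROC {self.reported_auroc:.2f})"

# A made-up example card for a hypothetical sepsis model
card = ModelCard(
    name="sepsis-risk-v2",
    intended_use="adult inpatient sepsis risk screening",
    out_of_scope=["pediatrics", "emergency department triage"],
    training_population="2018-2022 inpatient encounters, single health system",
    reported_auroc=0.81,
    known_limitations=["performance not validated on external sites"],
)
print(card.summary())
```

The point of the structure, like a drug label, is that indication, exclusions, and expected performance travel with the model rather than living in someone's head.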
Another idea that should resonate with this quality and safety group is implementation science centers and thinking about clinical effectiveness. You could have the best model in the world, but if you don't put it in right, if folks don't pay attention to it, if it doesn't help them... I often work with data scientists and ask, "So how are you going to use it?" and they have no idea. "Oh, but it tells me something." But who's going to use it, and how? If any of you work day to day in the electronic health record, you know something might be telling you to do something, but it may feel like a nuisance. Is it actually going to move the needle? Is it going to change patient care, and in a positive or a negative way? We've seen all of that. So again: workflow integration in particular, continuous monitoring and adaptation, and it has to work in the local context. For good or bad, even if we have figured out standards, say, here at Barnes Jewish, you might go to another hospital and it may not be the same, or go to another system, heaven forbid; does it actually translate? For good or bad, we are still somewhat of a cottage industry in how we do our practices. And the last point here, as far as where we're at: we don't have a lot of evidence. This is a slightly older paper, but there are very few randomized controlled trials in practice that actually show a positive or negative impact; there's very little evidence at this point.
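The "continuous monitoring and adaptation" piece can be pictured with a toy drift check: compare a deployed model's recent alert rate against its validation baseline and flag when it wanders outside a tolerance band. The rates and threshold below are made up purely for illustration:

```python
# Toy continuous-monitoring check: alert when a deployed model's weekly
# alert rate drifts beyond a relative tolerance band around its
# validation baseline. All numbers are hypothetical.

def drift_alert(weekly_rate: float, baseline_rate: float, tolerance: float = 0.25) -> bool:
    """Flag if the observed rate deviates more than `tolerance` (relative) from baseline."""
    if baseline_rate == 0:
        return weekly_rate > 0
    return abs(weekly_rate - baseline_rate) / baseline_rate > tolerance

baseline = 0.12                      # positive-alert rate seen at validation
observed = [0.11, 0.13, 0.12, 0.19]  # four weeks of production rates
flags = [drift_alert(r, baseline) for r in observed]
print(flags)  # the last week sits well above the tolerance band
```

A real monitoring program would of course track calibration, subgroup performance, and outcomes, not just one rate, but the principle is the same: a certified-once model still needs a running comparison against its expected behavior.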
This is a slide adapted from Gartner, showing some of the areas where we're seeing AI in healthcare. We went through a more detailed analysis in our healthcare system, and the highlighted ones are the areas we're focusing on a little more. We spent some time talking through things we like about what we do today and things we don't; this was an internal document we worked on. OK, use case number one. I thought I'd start with hard ROI. This is an imperative. If you don't think there's an arms race going on, you're kidding yourselves. Payers are using AI out the wazoo. They are also automating things, and they deny, deny, deny, or find ways to put friction in the system. The flip side is what we can do at the front line of the revenue cycle, whether it's appointments, coding, billing, or prior authorization: how can we build up our arms, for lack of a better phrase? It is paying off in our healthcare system, but we've gone really hard in this space. The second use case is one everybody has probably heard of; maybe you have clinicians using it. Has anybody been to an appointment where they've used AI ambient documentation? A few; it's coming. It's honestly really good. Here are two reports from two different healthcare systems showing some of the impact. There aren't great hard numbers showing providers actually spend that much less time, though we do see a little less "pajama time" in some instances, where folks are documenting after hours. But almost across the board, clinicians feel different about having this tool. They feel unburdened. They feel like they can pay attention to the patient in front of them.
They don't have to go back and forth or scribble down notes, which makes people feel not as good about that interaction. So folks have really felt like it makes a difference, and we think it's a competitive advantage to have something like this available. We have not yet done it systemwide, but if your system can afford it, it's definitely a great way for clinicians and physicians to feel engaged. There's a bit to go for nursing; that's the next step. For good or bad, nurses do a lot of very structured documentation, and getting that just so has proven a little challenging, but it is coming, we really hope, and I know quite a few of the software vendors are working on it right now. Another use case: this is one I think we've finally gotten into pilot, so we got that approved, and it's the first example of an autonomous AI solution, specifically around diabetic retinopathy. There are some other autonomous ones, but this is truly for diagnostic purposes. For us, it's about our fourth most important HEDIS measure, and we're probably only at almost three stars for it, so we obviously want to move up on our compliance. A couple of things we encountered along the way: what does it mean for a patient? We found before, when we did diabetic retinopathy screening just with the camera, patients had to pay maybe $150 out of pocket. For this, that's not a barrier, but it took us time to figure that out. We went to all the payers in our region to be sure we would get paid for it; we won't know until we actually drop a bill. And then there are the parts around staff training.
This one looks like it'll go over the line, but just to give you a flavor for some of this stuff: there's a lot of detail behind getting things to the point where we'd be able to use it, but we think it's going to make a difference. It's going to save vision, it'll get more referrals to ophthalmology, and we will recover money on the other end as well around pay-for-performance dollars. The other place where we may see more AI, and we've started to go into this space, is interactions with patients, around more constrained examples of interactions. Patient education is one example. We have a use case running right now around colonoscopy preparation, and some of the preoperative phone calls, just collecting some of the information up front. We've used that to start up the visit and then hand it off to a nurse. I give this example of Hippocratic AI not to endorse it or not, but it's an example of a vendor doing this with voice interactions, and that's the one we're using for colonoscopy. Since we're talking about quality and safety, I thought I would talk a little bit about the data abstraction piece. I don't know if data analytics for quality sits within the quality function here, but there's a lot of opportunity there as well, in solutions that will make analysis faster and make data engineering a bit faster. But just talking about quality data abstraction, this is an example within our organization. We took all our data abstractors and brought them into a system function, and, as you can see from the timeline, we really started to look at how we think about the work that's being done and how we decrease the amount of manual abstraction that's needed.
We started that all the way back before the pandemic, as you can see, and then we've continued to really push it forward: the idea of using automation for data abstraction (extraction, sorry) to automate some of the data that comes out. The bottom part is actually offshore (that's, I don't know, an oil rig), but the idea is: how do we think about workforce a little differently for the pieces where we can do the work elsewhere? And now we're starting to get into new products. I will submit as well that registries are changing. IRIS and AQUA are two examples where data is being extracted automatically; there's no abstraction that takes place. ATS right now is looking at a solution that would be similar to STS but uses a lot of AI and really decreases the amount of abstraction. Even CMS is thinking about digital quality measures, automating all of this. So we are in this phase of continuing to transform, and again, it's not just AI, it's AI and other automation. On the horizon, and maybe even being used right now, I don't know if you are using virtual reality for training or for cognitive behavioral therapy, but there are really starting to be some products out there. We've been using it a little bit for our clinical training, sometimes with the technology for trying or doing certain very specific tasks, and robotics is going to be more and more a part of what we do. OK, so I'm going to shift to research. And I will submit that this is a different animal, a different type of research, and I'll start with what motivates it, OK? It's about translation and the drippy faucet. We do so much, we invest so much, but so little gets to patient care. And when we put things into patient care, I'm going to be honest, we often don't know how well they actually work.
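To make the abstraction-versus-extraction idea above concrete, here is a minimal sketch of computing a quality measure directly from structured data instead of manual chart review. Every field name and the threshold are hypothetical illustrations, not our actual pipeline or a real CMS specification:

```python
# Hypothetical sketch: a "diabetes control" style measure computed from
# structured records with no manual chart abstraction. Field names and
# the 9.0 threshold are invented for illustration.

def latest_a1c(results):
    """Return the most recent A1c value from (ISO date, value) tuples."""
    if not results:
        return None
    return max(results, key=lambda r: r[0])[1]

def measure_rate(patients, threshold=9.0):
    """Fraction of patients whose latest A1c is at or below threshold.
    Patients with no A1c on file count as non-compliant here."""
    if not patients:
        return 0.0
    numerator = 0
    for p in patients:
        value = latest_a1c(p.get("a1c", []))
        if value is not None and value <= threshold:
            numerator += 1
    return numerator / len(patients)

patients = [
    {"a1c": [("2024-01-10", 8.1), ("2024-06-02", 7.4)]},  # compliant
    {"a1c": [("2024-03-15", 9.6)]},                        # not compliant
    {"a1c": []},                                           # no result on file
]
print(measure_rate(patients))  # 1 of 3 compliant
```

The point of the sketch is only that once data is captured in structured form, the "abstraction" step becomes a query rather than a person reading charts.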
We spent a lot of time on getting it there, but then afterwards, you know. We don't always de-implement either. And this just underlines it a bit further: all the money that's spent on research, very little to translate it, to learn, or to do research directly on getting it into care and learning whether it actually works well, and then all the money we're spending on healthcare. So that, just to underline it, is why this concept of a learning health system came into place. This is one definition, but it is the idea of doing something that is the best of clinical care and the best of research. You're building a different animal, and you're actually bringing the two together in a parsimonious way. It's got to work; it can't bog down operations. And behind it is the idea that you're thinking about it from the beginning: How will I collect the data? How will I have this feedback cycle and set up the work that I'm doing? And I do think this is a huge opportunity for bringing together quality improvement consultants, who think a lot about this and have improvement science at the front of their minds, and implementation science, which uses slightly different terms but collects actually similar things. Something like a survey: could you take that a little bit further and make generalizable knowledge? So this is what a learning health system is about. Again, it's a different animal. It's not the same research you've heard about, and it's not quite the same healthcare, so both have to meet each other. It's a cultural thing. We founded a Center for Learning Health System Sciences right in the middle of COVID, which really ended up being a bit of the catalyst behind it. There were a couple of other things, but that was a lot of it.
This shows you the units that we have behind it. It starts with evidence, there's a lot around data and innovation, and this is slightly old; there's also something called PER, which is around qualitative analysis. This gives a little bit of a description, and this gives you a picture of some of the folks. One thing about how we formed this: it was a collaboration between the medical school and the school of public health, which had our health policy and management folks, so a lot of different interdisciplinary people coming together. Just to give you an example: Joe Koopmeiners, up top, with the rapid prospective evaluation unit, and Debbie Pestka. Joe is the head of biostatistics; Debbie is an implementation scientist and also a practicing pharmacist, just to give you an example. So we've been thinking a lot about how you implement AI into practice. This is just one example where we've been working. This is funded through the state of Minnesota, around thinking about risk management for AI, and not just thinking about performance in the abstract, but performance in the setting of the particular decision that a piece of AI is going to be supporting. The other thing that we do, in putting different AI into practice, is evaluate it, and I'll show you a couple of examples here. Oops. Normally this has an arrow right down here. You want to be up here; we were down here, OK? And so that motivated us to look at rib fracture. This shows you the team that has been working on this across four different institutions. We first put in an evidence-based care map so we would just have consistency of care, and that was this top part. One thing we discovered as we did that is that we were sometimes having rib fractures that were not fully being recognized as being multiple.
Any of you who do trauma care know that when you start to have multiple rib fractures, patients are at risk for something called a flail chest, maybe needing to get intubated, that type of thing. This kind of gives you an idea of how we think about that whole life cycle. We were really focused on the ability to detect the rib fracture; that's number one. We developed a federated learning network across the four institutions I showed you before. The idea behind federated learning is that each of us only has so many examples of rib fracture (it's actually a pretty common one, so it's probably not the best example, but we wanted to stand this up and show that we could do this type of work together). Each of the organizations keeps its own data; it's a special type of AI training that lets us take advantage of the data at each site without having to deal with all the privacy issues of moving data between sites. And just to continue on that: we now have this diagnostic model running in the background, it has good performance, and hopefully we'll be able to deploy it soon. We put it in the Epic cognitive computing platform, and it takes a combination of images as well as some other structured data, so it's a multimodal model. Then we'll verify the performance and put it into practice, so we're excited about that. This is another example. It's called MySurgeryRisk, developed at the University of Florida, and it's about external validation. It's a really good model: it gives you risk before surgery of whether the patient is going to have any of, I think, eight outcomes. What we are doing right now is seeing whether it's transferable. And what we found is, well, we're actually trying to do some best practices here.
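The federated idea described above can be sketched very simply: each site fits a model on its own data, and only the model parameters, never the patient records, cross site boundaries. This is generic weighted federated averaging on a toy linear model in Python/NumPy, not the actual rib-fracture model:

```python
import numpy as np

# Toy federated averaging: each "site" fits a local linear model on its own
# data; only the weight vectors (never the raw data) are shared and combined.

def local_fit(X, y):
    """Least-squares weights computed entirely at one site."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(site_weights, site_sizes):
    """Combine per-site weights, weighting by the number of local examples."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Simulate private data at three sites of different sizes.
sites = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

weights = [local_fit(X, y) for X, y in sites]
global_w = federated_average(weights, [len(y) for _, y in sites])
print(np.round(global_w, 2))  # close to the true weights [2, -1]
```

Real federated deep-learning systems iterate this exchange over many rounds and add privacy machinery on top, but the core contract is the same: data stays put, parameters travel.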
We're using something called the OMOP common data model. The idea is that we map our data similarly, represent it similarly, so the model should actually work between the sites. But along the way, we found out we had actually mapped our data a little differently. So I think the message here is that these are really promising things, but there's a lot of work to be done, especially if we're going to transfer things and be sure that the performance holds up. Those were some examples. Here are some others, but we continue to really focus on AI and data science funding. And then just one last example, with this PRECISE-AI initiative. This is through ARPA-H. Have folks heard of ARPA-H? Yeah, so it's modeled after DARPA. It started maybe two years ago, and the idea is to do work and discovery in healthcare in a slightly different way, with public-private partnerships, and to do it on a more contract-based, accountable basis, versus a grant where I just give you the money. These are all contracts. There's usually a lower barrier to entry; the proposal is usually shorter, like a six-pager. This was a call through their PRECISE-AI initiative, and the idea is to be able to monitor AI better and to have clinicians interact with the AI better; that's number three over here. We're the lead institution on this proposal. We don't know what will happen since it's funded through the government, but we have some industry partners, Nvidia, GE HealthCare, etc., that are involved. Our two use cases: one, which you saw, is MySurgeryRisk, and the other is an FDA-approved algorithm around, I think, pneumothorax. All right. So I'm just going to end here, but really, we're on a road. And just to underline it: thinking about where we're going with AI and laying that foundation is incredibly important.
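The mapping problem mentioned above, where sites represent "the same" data differently under a common data model, can be caught with a simple cross-site comparison before a model is transferred. This sketch uses invented source terms and concept codes, not real OMOP vocabularies:

```python
# Hypothetical sketch: detecting cross-site mapping differences under a
# common data model. Term names and target codes are invented examples.

site_a = {"serum_creatinine": "3016723", "hemoglobin_a1c": "3004410"}
site_b = {"serum_creatinine": "3016723", "hemoglobin_a1c": "40758583"}

def mapping_diffs(a, b):
    """Return source terms mapped to different target concepts at two sites."""
    shared = set(a) & set(b)
    return {term: (a[term], b[term]) for term in shared if a[term] != b[term]}

diffs = mapping_diffs(site_a, site_b)
for term, (code_a, code_b) in sorted(diffs.items()):
    print(f"{term}: site A -> {code_a}, site B -> {code_b}")
```

A check like this is cheap to run and turns a silent performance drop at the new site into an explicit, fixable list of disagreements.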
I hope this sparks some curiosity and interest, and I'd be happy to answer questions now. I don't know how much time we have. Plenty of time. OK. I can't believe there are no questions. I think there are mics. A question up front, yeah. So the question is: what are other countries doing right now? I think the answer is that they're all at different places, and I would say the US has gone pretty hard. The European Union has done some things around data privacy and has tried to put some regulation and best practices in place. Certainly keeping an eye on that, especially around FDA approval, is really important. But I don't have specific case studies to be able to say, you know, this country, this is where they're at. But thank you. OK. The microphones don't work, or they're not on. Yeah, OK, thank you. Oracle Cerner and Epic are doing a lot of work with AI; there are also a lot of startups and vendors developing AI, and health systems are building their own homegrown AI. What do you see that looking like in the future, in terms of what is homegrown versus what is a purchased product? Yeah, so I think most folks heard the question. This is the forever technology debate. One thing is that using enterprise solutions, certainly when you have scale across many hospitals, is often the more prudent thing to do. However, because these are enterprise solutions, consider an example with Epic right now: there are some capabilities around, let's say, note summarization that can be deployed within your electronic health record. Compare the performance of that with a startup that comes in and is able to focus on just that one problem, whereas Epic says right now, for instance, that they have over 100 use cases they're going after. So the relative quality of the two is debatable, right?
And maybe that startup is a little bit better. Now, what's going to happen to that startup? Hard to know. Sometimes that startup has vaporware as well. It's kind of a joke in the health IT world that if I go to HIMSS, or to ViVE, or to these various places and start talking to the vendors, when you actually get under the hood: oh well, this part is developed, this part is actually in production, and the rest of what they told you about? No, it's not there. This has been going on for a long time with health information technology, and with technology in general, so taking a really thoughtful look at this is important. Just to underline with Oracle Cerner and Epic: Epic has gone really hard. They worked very closely with Microsoft and OpenAI up front, and they have since realized that sometimes using less expensive technology can be helpful. So that's one thing. They also recently changed their cost model, so a lot of organizations are trying to figure out whether they should pay for what they use versus go all in and have unlimited access, like a buffet, right? That's something folks are thinking a lot about, and if you do that, are you then really committing to something like that? With Oracle Cerner there's very interesting stuff going on right now. They are supposed to become a digital-first, AI-enabled platform, and there are a lot of compliance things that need to be put in place, so they're in the middle of all of that. We have a set of evaluation criteria that we try to use for any of these solutions. We try to be a bit disciplined about capabilities: what is this supposed to do?
How does it compare to other things that are supposed to do what this is supposed to do? And then I think the other part is that there's a lot of excitement, and you don't want to quash that excitement. Sometimes people come in with what they believe is probably the best solution out there, and we have medical leaders being approached, and of course operational leaders, so how do we have a good, rational discussion? And then I think the last piece is that sometimes you can just try things, and if you don't have to take on a lot of technology debt, there's no harm in evaluating it. It's when you have to put lots of time, effort, and money into it that it could be a big issue, yeah. Genevieve, thanks for a very thought-provoking talk. I was thinking back to your slide of Buckminster Fuller about, um, not... The road to hell, yeah, sorry. There's a certain sense that I think a lot of us have that AI is being bolted onto our existing healthcare system, substituting for different steps or pieces of it. Can you comment a little bit on the potential, and whether you've thought about the potential, for completely and radically redesigning the way healthcare is delivered? Yeah, it's a great question. So, Doctor Dunnigan, I think that... well, I'll give an analogy up front. There is this idea when we talk about technology of being, say, digitally native or cloud native. And this is something we deal with in quality improvement all the time, right? You take a process: what's the process? Let me measure it. What do I need to improve, right? But if you come in a completely different way and design it from scratch, yes, it can completely change things, and that's where design thinking really comes in.
Just knowing where we are as a healthcare system in the US right now, I don't know if we're up for a complete shift. I could see areas where folks are dedicated to innovating and discovering a different way, say a model primary care clinic, right? That's a great example of where you could change things radically. But it would literally take, as opposed to an improvement here and an improvement there, a wholesale shift. And honestly, that's what true transformation is about. I don't know if we always have the will, or the leadership, broadly, to think in that manner, but certainly there is opportunity out there. We are opportunity-rich; healthcare as a whole is, yeah. There's a question up front. I've been doing a lot of research into what goes into making the AI training data sets, and I'm worried about garbage in, garbage out, because I heard another professor discuss the fact that they had taken their trial paper results and put in dummy data just to get the language proper and formatted, as a trial run. Well, the problem with that is that the bad data in their paper goes into the big data set that trains the artificial intelligence, and it's bad. So who is going in to validate and make sure that the data coming back out, when the question is asked again, is correct and valid? Yeah, it's a great question, and I'm going to answer it in two ways. The first is that there are a few things where not having perfect data, having approximate data, honestly works, and there's a reason that in AI in general we're moving toward a lot of synthetic data: in some cases, if you look across all patients and pull patterns out, the exactness probably doesn't matter for certain things. OK.
Now the flip side is that for other things, you are absolutely right. If you have garbage in, and it's a perpetuation of something that's not right for a task that actually needs exactness, that's a problem. And there are multiple examples specific to clinical care where, say in the ICU, we follow the patterns that clinicians set, and maybe patients weren't getting great care in a particular set of instances, and the data being fed in is not good; even if the patient survived, it maybe wasn't optimal care. So there is a lot of that. Now, the safeguards we have in place are things like: the investigators, the people doing the work and building the model themselves, sometimes companies; and peer review when we're doing research, though we know that has fallibility as well. So this is a real issue, and I don't think we have the answers around it yet, but those are, I think, most of the stopgaps, along with FDA approval and FDA monitoring. And then there are a lot of things that fall outside of the FDA. I didn't talk in any depth about the FDA approval process, but take the patient education around colonoscopy prep: I don't need FDA approval for that; it's not a medical device. I'm supposed to test it, and I tested it internally. Is it good enough? Did we get the edge cases? That takes a lot of thoughtfulness, and we may or may not actually know at the point that we deploy it. So this monitoring piece is really, really important, yeah, and the testing. Something over here. Yes, there you go. I had a quick question about legal risk. When implementing AI, are you finding that the legal and ethical barriers are pretty significant to getting your projects off the ground? It's a great question.
What we've done with all the AI solutions that we've put in is, up front, we try to define the amount of risk, and we put a different level of oversight around that. Something that touches patient care decision-making gets an additional level of depth, and we try to specify all the steps that are needed before deployment. The things we're dealing with mostly around regulatory and ethics: for instance, when we put the scribe solution in, a lot of it was around the patient interaction and what we needed to tell patients before we would use a solution like that. We have also had discussions, as the previous person asked about the data, around how a model or solution works for a particular subpopulation. What does the performance look like? We've actually added that to our monitoring plan, and we look at the performance around it, mostly for populations in the particular area where we know there is a difference, where there's not equity in the outcome. And then I think the other thing we've encountered a bit, around ethics and regulatory, is the use of our data. There are a lot of companies that want to monetize and use our data to make their solutions better, and we have a lot of questions around that. That is a significant piece of what we look at when we're evaluating a technology solution: trying to understand what will happen to our data. OK, one more question. Or not, OK. All right. Well, thank you. This is an honor. I really appreciate it.