AI has the potential to enhance quality across the cancer care continuum. This session presents two examples. The first describes a user-centered design approach, incorporating feedback from clinicians, patients, and caregivers, to create a dashboard that communicates AI-driven prognosis information for patients with advanced cancer. The second demonstrates how AI supports the implementation and scaling of a cancer hospital-at-home model, addressing health equity and access challenges in rural areas.
My name is Brandy Beal Smith. I'm the director of patient safety, quality, and performance improvement for Washington University. Thanks for sticking with us. Our next presenter is Dr. Liz Sloss. She's an assistant professor at the University of Utah College of Nursing and the assistant director of the Dissemination and Implementation Science Core at the university's Clinical and Translational Science Institute. Her work focuses on optimizing the implementation of evidence-based digital health interventions in routine health care, using rigorous implementation science methods that enhance reach, adoption, and sustainability. She earned her PhD in nursing from Virginia Commonwealth University and an MBA from Georgetown University. Welcome, Liz.

Thank you, Brandy, for that introduction. As Brandy said, I'm Liz Sloss. I'm an assistant professor at the College of Nursing, and I'm an implementation scientist, which is basically a fancy way of saying I'm interested in how we take a thing and get people to use it. In this context, I'm really interested in how we harness all of these digital health innovations and get them into routine clinical care. Today I'm going to focus on a specific area of health care, cancer care, and we'll talk about how we can leverage AI across the cancer care continuum to improve patient outcomes. Before we dive in: I have no relevant financial relationships to disclose. The learning objectives for today's presentation are to identify AI-driven solutions that enhance patient care quality across the cancer care continuum; to evaluate the role of AI in addressing health equity, specifically improving access to cancer care for rural communities; and to discuss implementation strategies that can promote future integration of AI into cancer care, and really beyond cancer care, so hopefully this is relevant to everyone in the room.

To start, I want to ground us in what I mean by "across the cancer care continuum." Cancer care ranges all the way from prevention through survivorship and palliative, end-of-life care, and I'll spend a little time on each of these steps and how we're seeing AI used in these different realms. There's a nifty graphic I pulled from an article by Lau et al. that focuses on AI-specific applications across cancer care. We just saw the slide covering prevention all the way through survivorship and end-of-life care; this distills, in more granular detail, AI-specific applications in those different realms. To get us started, I want to focus on risk screening and accurate diagnosis and how AI is being used today. As my colleagues alluded to earlier, there are some FDA-approved devices that use AI technology. One example is DermaSensor, a skin cancer screening device that lets primary care providers screen suspicious lesions and get feedback on whether they should refer a patient to a dermatologist. When used by primary care providers, this device has been shown to increase management sensitivity and diagnostic sensitivity, and to decrease false-negative referrals by 50%. And I do want to say I'm not endorsing any of these products; these are just examples of how AI is being used today. The next application I'll mention uses artificial intelligence during routine colonoscopies to detect precancerous polyps.
This example comes from a VA deployment of the technology to detect polyps for colorectal cancer screening and diagnosis. Using this device in the VA was shown to increase the polyp detection rate by 14.4% during routine colonoscopies, so that's another example of how we can use this technology to improve cancer screening and diagnosis. Finally, I'll mention an ongoing study in the UK through the National Health Service. They're getting ready to launch the world's largest trial of AI-aided breast cancer diagnosis, planning to analyze at least 700,000 mammograms to assess the accuracy and reliability of an AI model in making diagnoses. Moving along the cancer care continuum, the two areas I'd like to talk about next relate to drug development and customized, patient-centered cancer treatment. The first example I'll cite is an AI algorithm developed by Ruin and colleagues with help from the National Institutes of Health. They looked at predicting how individual patients would respond to immune checkpoint inhibitors for cancer treatment, and they were able to distinguish patients who would respond well to therapy from those who would not. So we can use this type of technology to say that this patient, based on their unique characteristics, is more likely to respond, versus patients who may not respond as well. The last example I'll share is an image of a supercomputer at Argonne National Laboratory in Illinois. What I think is really neat about this is something we don't see a lot when we think about these AI algorithms: the sheer computing power that is often required for them to run. This supercomputer is working on using AI to identify novel targets for cancer therapy.
You can see in the picture that these blue and red cables are actually tubes full of water that cool the computer so it doesn't overheat while running these algorithms. All right, I'm going to slow down with the examples and talk a little bit about some of my own work, focusing on how you develop an AI algorithm and take it all the way through the translational spectrum to be implemented into practice. The project I'm going to talk about is called Data Analytics to Improve End-of-Life Care, specifically for cancer patients. Here are the team members who worked on this project with me; we had funding from Hitachi for this work. First, a little background. We developed a predictive model to predict 6-month prognosis for patients with advanced solid tumors. This is actually a pretty common application of AI in the cancer space; a lot of these prognosis-prediction algorithms exist. It's important because patients with advanced cancer often receive intensive therapy at the end of life. However, we know that earlier referrals to palliative care and hospice care are associated with accurate prognosis and can often improve quality of life at the end of life, and patients are less likely to choose aggressive care when they know they're approaching the end of life. So we wanted to create an algorithm that would convey accurate prognosis information to clinicians, who could then use that information to have goals-of-care conversations with their patients and help them make end-of-life decisions congruent with their beliefs. We had three components to this project. The first, as I mentioned, was to develop a model to predict prognosis, specifically for patients with advanced solid tumors.
The next part was to design a clinical decision support, or visual, tool that communicated the findings of the machine learning model. I want to pause right here, because I think this is a really key part when we talk about AI in health care. As Zach and Genevieve talked about this morning, a lot of existing models are predictive: they're designed to alert a clinician to something that's going on with a patient. That's the first step, and it's an important step in the cascade of communicating this really important information to clinicians. The second part of our project focused on how we take the output the model gives us and communicate it in a way that's meaningful to clinicians, that enhances trust, and that actually causes them to act on it in their clinical practice. The third part of the project was a little unique: we wanted to assess patient and caregiver perspectives. Knowing that patients and their caregivers would receive this information, we wanted to talk with them and get their feedback on what was important to them, how they wanted this information presented, and so on. I'm going to walk through each of these steps next. As I said, we developed and validated a machine learning model that predicted 6-month mortality, and the way we thought about the output was that we wanted to risk-stratify patients. In terms of making meaning of the model output, we wanted to be able to tell clinicians that their patient has a low or a likely chance of being alive in 6 months. Here are some of the features that went into our model. We actually developed three models: a full predictive model that included 111 features, a limited model with 45 features, and another limited model with 23 features.
Just to give you an idea, if there are any oncology clinicians in the room, some of the factors we found to be predictive included things like age, primary cancer type, and, as you might imagine, a lot of blood values. To highlight some of the differences when we moved from the 111-feature model to the 45-feature model: we kept a lot of the white blood cell count variables but dropped things like monocyte count and neutrophil count. Those were included in our full model but were not found to be especially predictive, so we dropped them for the 45-feature model. From the implementation perspective, and Zach alluded to this as well, the fewer features you have, typically the better. In order to implement this model in the electronic health record, we need to be able to pull all of this data for a given patient, run the model, and produce the output. The more variables you have, the more computing power is required and the more time-intensive it is to write the code that pulls those variables from the electronic health record. So in a sense, simpler is better when it comes to implementing these models. We ended up moving forward with our 45-feature model; it had pretty robust performance. One of the common evaluation criteria for machine learning models is the area under the curve, or AUC. Our model had an AUC of 0.8, and anything at or above 0.8 is considered a pretty strong model. So we decided to move forward with the 45-feature model for our testing and implementation. That was the development of our model; next, we designed a clinical decision support, or visual, tool to communicate its output.
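As a quick aside for the quantitatively inclined: the two ideas just described, scoring a model's discrimination with AUC and mapping a predicted probability onto a "low" versus "likely" survival label, can be sketched in a few lines. Everything below (the toy scoring function, its coefficients, the synthetic cohort, and the 0.5 cut point) is an invented stand-in for illustration, not the study's actual 45-feature model.

```python
# Illustrative sketch only: feature names, coefficients, and the 0.5 cut
# point are hypothetical stand-ins, not the study's actual model.
import math
import random

random.seed(0)

def predict_death_prob(age, albumin, wbc):
    """Hypothetical scoring function standing in for the trained model:
    higher age and lower albumin push 6-month mortality risk up."""
    z = 0.05 * (age - 65) - 1.5 * (albumin - 3.8) + 0.02 * (wbc - 7.0)
    return 1 / (1 + math.exp(-z))

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability a random positive case outscores a random negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def stratify(prob_death_6mo, threshold=0.5):
    """Map the model's probability onto the 'low'/'likely' survival label
    shown to clinicians (the threshold here is a hypothetical cut point)."""
    return "low" if prob_death_6mo >= threshold else "likely"

# Synthetic cohort: outcomes are drawn from the same risk function, so the
# score should discriminate better than chance on this toy data
patients = [(random.gauss(65, 10), random.gauss(3.8, 0.6), random.gauss(7, 2.5))
            for _ in range(500)]
scores = [predict_death_prob(*p) for p in patients]
labels = [1 if random.random() < s else 0 for s in scores]  # 1 = died within 6 months

print(f"AUC: {auc(labels, scores):.2f}")
print(stratify(0.87))  # high predicted mortality -> "low" chance of survival
```

In a real deployment the scoring function would be the validated 45-feature model and the threshold would be chosen clinically, but the shape of the pipeline, score, evaluate discrimination, then translate the number into a label a clinician can act on, is the same.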
We engaged oncology clinicians in several iterations of user-centered design to figure out what oncologists wanted to see when looking at the output of these models. We initially started with the low-fidelity wireframe shown here on the screen. It's pretty rudimentary, not overly graphic, but it shows the components that we as the research team, informed by initial feedback from oncologists, thought they would want to see when looking at the prognosis information the model had predicted. I'll note that they wanted to see patient information such as line of treatment, along with other predictors that went into the model. We showed this initial wireframe to several of our oncologists, got their feedback, and advanced the wireframe based on it. We identified additional variables they wanted to see, and, again, they wanted the inputs into the model, like the labs, clearly displayed in the visual. After we made those changes, we got additional feedback and started to build a higher-fidelity prototype. This is a little closer to what the tool we developed looks like today. You can see the patient characteristics on the left side of the screen, and the patient's chances of being alive displayed here. You'll notice at the bottom we listed the top four factors contributing to the output the model was giving, that is, the prognosis information.
Zach talked about this concept of explainability, trying to shed light on what's going on behind the scenes and how these models work, giving some glimmer into that black box. That's really what we were trying to convey by providing this information to clinicians: for example, that albumin level was a really strong, predictive factor in the prognosis information clinicians were seeing. After another round of user-centered design, we ended up with a tool that looked like this, which was more or less the final product. I'll walk you through it. On the left, again, we have our patient characteristics. Going back for a quick second, one of the things we got feedback on at the previous stage was that it was really hard to understand what the model was trying to communicate. So instead of having just the Kaplan-Meier, or survival, curve, we added the big takeaway box you can see here. That is the big takeaway for oncology clinicians to understand what the model is communicating. They still liked seeing the survival curve, so we kept that in the tool. We also included historical information here: previous model runs based on where the patient was in their cancer treatment journey. I should add that this model was designed to be used, and predictive, at the point of deciding whether or not to start a new line of therapy. So you can see here that historically, when the patient was getting ready to start the first line of therapy, they had a 92% likelihood of being alive in 6 months; at their second line, that was 90%. And now, unfortunately, as they're getting ready to make their next treatment decision at the start of a potential third line of therapy, their 6-month prognosis is predicted to be about 13%, quite a bit lower.
The other thing oncologists thought was really important was the "so what": What do I do with this information? What does it mean, and what should I consider when I'm caring for this patient? So we included the recommended actions box in the top right here. I didn't show the previous screens, but this recommended actions box changes, as you would imagine, based on the predicted prognosis. In this case, where the likelihood of survival is low, the recommended evidence-based actions included things like discussing goals of care, connecting the patient with supportive care services, and reviewing the Choosing Wisely resource. Some key takeaways from this stage of the project, where we talked with oncology clinicians: I mentioned this idea of explainability, and Genevieve and Zach talked about it in their presentations; it is in the literature that clinicians want to know how these models work. Interestingly, we found that explainability isn't as important as the interpretability component, meaning what you do with the information and what is actionable for the clinician based on what the model is telling you. It's not that explainability isn't important; it's just that the oncologists who participated in our study felt that the "what do I do with this" question mattered more. The other thing I'll mention, regarding implementation, is that our oncologists expressed very strongly that this information needed to be used and presented in the context of a conversation between the oncologist and their patient, and that it could not be pushed to the patient without the oncologist being the one to convey it. As you might imagine, this is pretty sensitive information, so having the clinician be the one to share it was really important.
We took that information, and, as I mentioned, we thought it was really important to hear from patients and caregivers about what they thought of all this. So the third part of the project was to assess patient and caregiver perceptions regarding the communication of this prognosis information, and to conduct user-centered design with patients and caregivers: was the visual we developed for oncologists also suitable for patients and their caregivers to see? We engaged 20 participants across two rounds of user-centered design, conducted through interviews and focus groups. I included the participant demographics here because I wanted to show the breakdown: participants who identified as cancer survivors, participants who identified as caregivers, and several who identified as both, someone who has had cancer in the past and was also caring for someone with cancer. The way we presented the visual to participants was in a recorded video. We recorded a fictional conversation between an oncologist and a patient and used the visual to tell the story, so participants were essentially watching a skit of an oncologist using this tool to communicate prognosis information to a patient. This is basically the same visual we saw before; the difference is that the 6-month chance of survival when starting this particular treatment line was 82%, and you'll notice the recommended actions are different accordingly. After this initial round of user feedback, we found that overall, patients and caregivers had positive perceptions of the tool.
One of the quotes supporting this theme of the tool's value for decision making came from a cancer survivor: "I know several friends who have gone through end-of-life things, and it's a really confusing place to be. I think this tool would allow them to have a more concrete view of what was really possible so that they can make better decisions about what they actually wanted to do with their time." We also heard from another cancer survivor that seeing information like this "perhaps provides the patient with some sense of agency, especially in our case where the treatment was destroying quality of life." We also heard about things the patients and caregivers didn't like. For example, one cancer survivor shared: "Honestly, I was probably still stuck on low. Low is what stood out and stuck with me." So we took all of the participant feedback into account and made some changes to how the visual looked and how it would be used in routine cancer care. Based on that last quote in particular, we recognized that unlike oncologists, who are probably used to seeing all of this information displayed in the electronic health record, patients found seeing all of it at once overwhelming, particularly when the prognosis wasn't good. So rather than have that be the first thing patients see when looking at this tool with the oncologist, we decided to cover up the different components of the tool and add a little more explanation. You can see here that the biggest change is that most of the visual is covered by these boxes, which describe what's underneath.
The other thing I'll mention is that, as a team, knowing why we were doing this, we felt the recommended actions were the most important takeaway: now that you have this information, what can you do? So we created a separate screen that displays the recommended actions by themselves, without any distractions from the rest of the tool. We made these changes and conducted one more round of user-centered design, this time really focused on implementation: how could this be used by oncologists in the context of routine care delivery? The first theme that emerged, echoing our oncology clinicians almost exactly, was that the patient's oncologist should be the one to share this information and guide the discussion about prognosis and next steps. One of our cancer survivors said, "I think the tool can help. I just think you wouldn't want the tool to be the one sharing the message. I'd still want to have the doctor be the one to share the message." Another implementation consideration was that the tool can springboard information seeking; it causes the patient to have more questions, which reinforces the need for the tool to be used in the context of a conversation with the oncology provider, so the provider can answer the patient's questions as they arise. Another thing we heard over and over was that receptiveness to the tool, and whether this information should be shared at all, is very much dependent on the individual patient and whether they want this information. One of our participants, who identified as both a cancer survivor and a caregiver, said:
"It depends on the patient, like finding out whether a patient wants to know their prognosis. I've met people who even at that point would be like, I do not want to know any of that information. I just want to try all the things possible." So when we think about implementing, it's important to understand that this may not be appropriate in all contexts, and to ask how we can identify the patients who would want this information and benefit from having it. Finally, one of the last implementation considerations that emerged was that this prognosis information should be shared early in the course of diagnosis and treatment with interested patients, rather than waiting until prognosis is low. If you think back to the visual showing the different lines of treatment, participants in the study said they wanted to see their prognosis at each of those treatment decision points, not only at the third decision point when the prognosis was low. Our key takeaways from this work were that patients and caregivers perceived the interface to have value, so there was value in showing this information to them, and that they identified challenges, information gaps, and implementation considerations above and beyond what we elicited from our oncologist interviews. The big takeaway for me is that it really underscores the importance of incorporating patient and caregiver feedback when refining these AI tools and how their information is conveyed, because they're impacted too. Zach and I were talking at the break about one of his examples, related to opioid screening.
He's getting ready to do a similar thing, where he's going to talk to parents of newborns, and the point is that you can learn a lot from everyone involved in the process, from clinicians and administrators all the way to patients and their caregivers. Now I'm going to transition to thinking about the future, and I love this road analogy that we've heard several times today. I think we are at a precipice with AI and its impact on health care, and there are certainly a lot of other things going on, but one of the things I've spent a lot of time thinking about recently is how we can use technology, and artificial intelligence specifically, to help close some disparity gaps. In Utah, one of our big disparities is distance. So I'll spend the remainder of the time talking about distance as a disparity, some of the things we're doing to overcome it in the cancer care realm, and how AI can be used to enhance access to cancer care for all. Genevieve mentioned her team had received an ARPA-H award; we recently received one as well, to expand our rural hospital-at-home program through Huntsman Cancer Institute. ARPA-H is a unique agency and funding mechanism. It stands for Advanced Research Projects Agency for Health; it's modeled after DARPA, as Genevieve said, and it's designed to fund high-risk, high-reward research, which AI is actually very well suited for. We specifically received funding as part of the PARADIGM program: Platform Accelerating Rural Access to Distributed and Integrated Medical care. It's somewhat complicated; a lot of different performers, or groups, have been granted awards across five technical areas, which I've listed here. The first is the clinical use case, demonstrating use of this platform
to deliver hospital-level care in rural communities. The second technical area is the care delivery platform itself, a rugged electric vehicle designed to be a hospital on wheels; in a few slides I'll show you what the program officer envisions that looking like. As you can imagine, if you have a hospital on wheels, you have a lot of medical devices, laboratory equipment, X-ray, that sort of thing, so technical area 3 is an integrated platform designed to pull all of that information together and push it into the electronic health record. The fourth technical area tasks teams with building a miniaturized CT scanner, and the fifth is intelligent task guidance. I share this just to show the power of technology in developing this program, and I'll touch on some of these areas as we move through the final slides. The technical area we were selected for was the clinical use case, so we are demonstrating clinical effectiveness in delivering high-quality, hospital-level cancer care to rural communities across Utah. This is the care delivery platform, the electric vehicle designed to go into rural communities and deliver hospital-level care. I keep saying rural communities, but I think it's helpful to take a step back and consider some of the access challenges these communities face. Here's a picture of the Mountain West with Utah highlighted in orange. I mentioned that this project is funded through Huntsman Cancer Institute; we are the only National Cancer Institute-designated cancer center in the entire Mountain West region. So if you're diagnosed with cancer in Idaho or Nevada, or even parts of Utah, Wyoming, or Montana, accessing an NCI-designated cancer center often means coming to Salt Lake City.
On the right you can see a blown-up map of Utah. Carbon, Emery, and Grand counties are our pilot counties for this project. It's anywhere from a 2- to 5-hour drive from these counties to Salt Lake City and Huntsman Cancer Institute, and it involves going over a mountain pass. As you can imagine, at this time of year there are snowstorms (hopefully we're getting more snow next week), which can put tremendous burden on patients and create access challenges. And a lot of cancer care means going in frequently, for infusions and other diagnostic tests, and particularly if you have complications, you want to be seen by cancer specialists. In Carbon, Emery, and Grand counties, there are two hospitals: a 17-bed hospital in Moab in Grand County and a 39-bed hospital in Carbon County. There are no oncology providers, so, again, getting specialized cancer care requires that travel. That's why we are working on bringing cancer care to these communities. I've talked a lot about this, but just to hit the point home: cancer care facilities are often located in urban areas (this isn't unique to Utah), and rural areas are considered cancer care deserts. Cancer care is very much a centralized, not distributed, model of care, and the goal of our PARADIGM program is to create a distributed model for cancer care. We are moving forward with two specific use cases. The first is to deliver acute and subacute care to patients who are receiving cancer treatment, in their communities or, ideally, in their homes. The second is distributed clinical trials and chemotherapy infusions. To participate in a clinical trial, patients often need more frequent visits.
Any time a patient has to visit the Cancer Institute, they're looking at that 2- to 5-hour drive, which precludes a lot of individuals in these communities from participating in clinical trials. We want to help bridge that disparity and allow these patients access to clinical trials. Going back to AI, I'll touch on the last few areas we haven't covered yet: remote health monitoring (we did talk about risk screening a little) and virtual assistants, or in this case, remote patient monitoring and enabling patients, through self-management coaching, to help manage their own symptoms. One component of our project is collecting electronic patient-reported outcomes (ePROs). This graphic comes from a review of the literature, published recently in 2025, that looked at electronic patient-reported outcomes. I mention this because there is a pretty significant use case for AI in ePROs, in several respects. One is that a lot of these systems have, as I mentioned, self-management coaching components: if a patient reports pain of 5 out of 10, the application or interface can engage the patient and ask, have you tried this type of relief, or have you taken your PRN pain medication? Using something like large language models or chatbots to interact with patients and promote that self-management is, I think, a use case we'll see more of moving forward. The other area I'll mention is predictive capability. Patients are reporting symptoms with some frequency, possibly as often as every day, so can we look at patterns of symptom reporting to predict deterioration? Pulling from this review, we know that electronic patient-reported outcomes are effective.
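To make the two AI use cases just described concrete, here is a minimal sketch of threshold-triggered coaching and trend-based deterioration flagging. The symptom names, cut points, and messages are hypothetical placeholders I've invented for illustration; they are not drawn from our system or from any product in the review.

```python
# Hypothetical sketch of ePRO self-management coaching: symptom names,
# cut points, and messages are invented stand-ins, not a real system.
COACHING_RULES = {
    "pain":    (5, "Have you taken your PRN pain medication? If symptoms persist, your care team will follow up."),
    "nausea":  (4, "Have you tried your prescribed anti-nausea medication?"),
    "fatigue": (6, "Consider pacing activities and taking short rest periods today."),
}
ESCALATION_THRESHOLD = 8  # severe reports bypass coaching and alert the care team

def triage_report(symptom, severity):
    """Return ('escalate' | 'coach' | 'log', message) for one 0-10 severity report."""
    if severity >= ESCALATION_THRESHOLD:
        return "escalate", f"Severe {symptom} ({severity}/10) routed to the care team."
    cutoff, message = COACHING_RULES.get(symptom, (11, ""))
    if severity >= cutoff:
        return "coach", message
    return "log", "Thanks for reporting. Keep checking in daily."

def worsening_trend(daily_scores, window=3, rise=2):
    """Crude deterioration flag: did the symptom rise by at least `rise`
    points across the last `window` daily reports?"""
    recent = daily_scores[-window:]
    return len(recent) == window and recent[-1] - recent[0] >= rise

action, msg = triage_report("pain", 5)
print(action, "-", msg)
print(worsening_trend([2, 2, 3, 5, 6]))  # rising pattern -> True
```

A production system would replace the canned messages with a conversational agent and the crude trend rule with a trained predictive model, but the division of labor is the same: coach when coaching is safe, escalate to the care team when it isn't, and watch the pattern of reports over time for deterioration.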
They improve communication between patients with cancer and their healthcare providers, they help with education and optimize self-management (and I think we'll see even more of that as we start to incorporate some of these chatbot and large language model capabilities), they can help predict adverse events, and they are associated with improved prognosis. The next use case that we'll have the opportunity to test during the PARADIGM program is intelligent task guidance; that was the fifth technical area I talked about. This slide is a little bit busy, but this article actually just came out January 15, 2025. What it's showing is AI-guided ultrasonography. This team of researchers developed and optimized a model to guide a practitioner as they're obtaining an ultrasound image, in this case looking specifically at lung ultrasound. What they found was that 98.3% of ultrasound exams performed by a trained healthcare professional, but not someone trained in ultrasonography, were of sufficient quality to meet diagnostic standards. What this means for us in PARADIGM is that we can have one of our oncology nurse practitioners, or an EMT in the community, go into a patient's home with something like a smart ultrasound, obtain an image, relay that image to the clinical care team at Huntsman Cancer Institute in Salt Lake City, and make a diagnosis and treatment plan without the patient ever needing to leave their home. So as we're thinking about potential, this is one of the areas where I'd love to see more of this AI-guided, intelligent task guidance, because I think it can certainly help to improve care where there are not a lot of healthcare workers. And then finally, this is just kind of a fun plug to put in.
I have a colleague who's working on using voice detection and predictive capabilities, and the technology is starting to appear out there. This article talks about the HearO app, which is specifically used to predict heart failure deterioration using speech patterns; it's designed to detect a buildup of fluid in a patient's lungs. The article reports greater than 80% accuracy in detecting heart failure decompensation events an average of 18 days in advance. Pretty cool. And again, I think, related to cancer: can we use similar things to detect fluid overload, or perhaps even better assess patients' pain? A lot of our patients who live in rural communities are stoic and don't want to be perceived as a bother. So can we get more accurate assessment data that can help us intervene and improve their quality of life as we move forward here? And then finally, I wanted to end on this slide. Hopefully we've talked about AI-driven solutions that enhance patient care, AI used to reduce disparities, particularly for those living in rural communities, and some strategies for implementing AI into routine care. This picture on the slide is of two of our oncology nurse practitioners in rural Utah getting ready to do a home visit to one of our patients enrolled in our rural Huntsman at Home program. I ended with this because I think of all the ways that AI can help make their jobs easier and better and improve the care they deliver to patients. So a key takeaway for me is that AI is not replacing the clinician here; it's enhancing and helping us do the work that we want to do. So with that, thank you for attending, and I would love to take questions. OK. OK. Good afternoon.
Um, I just had a quick question. At the beginning, you were showing us how the equipment needed to be cooled down. Do you know what kind of effects that's going to have on the environment going forward, with that being cooled? So that's a great question. I don't know the impacts on the environment; as far as I'm aware, it's water that's circulated through the computer to keep it cool. I think your question is getting at environmental impact, and I can't speak to that supercomputer specifically. What I can speak to is the energy required to run these computers. When we think about large language models like ChatGPT and Claude, the sheer computing power and electricity required to run this is astronomical, to the point where I know Google has a whole team looking at how they can get inexpensive electricity, for example. It was in the news this week because OpenAI and Anthropic (which publishes Claude) both released new versions of their models, and one of the touted benefits of the Anthropic model was that the computing power required was a little bit lower than ChatGPT's. I think we're going to see more and more of that from the sustainability perspective, because it is a huge issue with these supercomputers. [Audience question, partially inaudible, about the tension oncologists face between offering hope and sharing prognosis information.]
So yeah, that's a great question, and we actually heard a lot about that tension in our interviews: the tension between giving hope and having a realistic picture of prognosis. A lot of coaching and thought went into how to have these conversations, and there's a lot of literature on how to have a goals-of-care planning conversation. One of the things we talked about with oncologists related to this work was that this is not final; it isn't certain, with 100% certainty, that the patient will not be alive in 6 months. So there is a little bit of room to say: when you look at patients like you, with similar characteristics, there was only about a 13% likelihood that a similar patient would be alive in 6 months. We did a lot of talking about how to deliver that message in a way that still leaves room for hope, but also lets you start to think realistically about, say, starting a third or fourth treatment line and what the prognosis picture might look like. The other thing I'll say that was really interesting, which came out in our patient and caregiver interviews and focus groups, was the dynamic between caregivers and patients, or in our case, participants who had a history of cancer but were in remission. A lot of the caregivers expressed concern about their loved ones hearing that information. And in one of our focus groups, I remember distinctly, one of the patient participants came out and said, I actually find this information empowering; as a patient, I would want to know this information. So it was, again, a very interesting nuance. It's a complicated conversation to have.
So I think it was really eye-opening for us to say, OK, how can we share this information in a way that's meaningful, with people who want the information? Yeah, thank you for that question. Given the crazy amount of increase in treatment plans, you know, symptom management, things like that, how often do you anticipate needing to update that, or do you have an AI that's actually driving those updates? Yeah, that's a great question. One of the things that I meant to hit on that I didn't, but that gets to your point, is how often you need to update these models. We know that AI models' accuracy and performance deteriorate over time, and deteriorate as you move from one health system to another. So there are a lot of things we have to pay attention to when we're deploying these in practice. One of the things we heard from oncologists when we designed the model initially was how important the ECOG, or functional status, score is. In our EHR, ECOG is not a structured data field, and so we were not able to extract it and incorporate it into the model, despite the oncologists' feedback about how predictive they thought it would be. I bring that up because it goes to show how quickly the technology is evolving. We built our model in 2021, I think it was. It would be a lot easier today to use something like natural language processing to quickly search clinician notes, extract the ECOG score in a structured format, and incorporate it into our model. I say that because I think that is a challenge with AI in healthcare now: the need to constantly update, and the importance of making sure that you're staying up to date.
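To make that ECOG example concrete: even before reaching for a large language model, a score that clinicians write in free text can often be pulled out with a simple pattern match. A minimal, hypothetical sketch (the function and patterns are my own illustration, standing in for a real clinical NLP pipeline):

```python
import re

def extract_ecog(note_text):
    """Pull an ECOG performance-status score (0-4) out of free-text
    clinician notes. Hypothetical helper for illustration; a production
    pipeline would use a clinical NLP toolkit or an LLM."""
    m = re.search(
        r"ECOG(?:\s*(?:performance\s*status|PS))?\s*(?:of|:|=|is)?\s*([0-4])",
        note_text,
        flags=re.IGNORECASE,
    )
    return int(m.group(1)) if m else None  # None: not documented in this note

print(extract_ecog("Pt tolerating regimen well. ECOG PS: 1."))  # 1
```

The point is not the regex itself but the workflow: turning a note-bound observation into a structured field that a prognostic model can actually consume.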
Really staying on top of the technology to be able to leverage it and get the best performance you can. So I hope that answered your question a little bit. You described examples of, you know, if you begin this line of therapy, 6 months from now patients like me are less likely to be alive. Is there a way to go even further upstream, to say: I'm starting this line of therapy, and if I change these things, you could potentially be alive longer? That would bring those conversations to light before you have a 13% chance of survival at 6 months. Yeah, so the short answer is no, that's not how the model was designed, and I think that's a huge limitation: the model was developed to be used at a very specific point to predict a very specific outcome. One of the things we heard from our participants was, what happens if I don't start the third line of therapy? How long will I live then? And we couldn't say. So I think, again, from the implementation perspective, the big takeaway is that when these models are developed, it's really important to understand how they were developed and what information they're conveying, because the model that we developed cannot tell you something like that. It's about recognizing that limitation and, hopefully, looking at ways to develop other models that get at that piece of information or other things that are important to patients. Yeah. There's a hand there, yeah. Can AI help to predict the best therapies? Because right now an oncologist goes through all of his research and all of his readings; how much easier would it be to just put in this data and have it say, oh, the most likely therapy would be this one? Yeah, that's a great question.
There certainly are models that are being developed, or have been developed, that look at predicting the effectiveness or outcome of a therapy based on individual patient characteristics. One example: an NIH team was looking specifically at immune checkpoint inhibitors to see which patients would respond better based on their characteristics. The big-picture takeaway for me is this idea of patient-centered care and personalized medicine: being able to understand that this therapy is going to work better in this patient population, so let's use it, as opposed to another therapy that may work better with a different patient population. I don't have a more specific example than that, but I do know that this is being done currently. Actually, one example that does come to mind, and it's not AI specifically, is some of the genetic testing around cancers and deciding which treatment to use based on that testing. That in and of itself doesn't necessarily involve AI, but one of the things a team and I are working on right now is using an AI-based chatbot to help support the oncologist and the patient in identifying ideal treatments based on the genetic testing of a specific cancer. [Audience comment, partially inaudible, about PET scans being used to determine the effectiveness of a therapy, and whether that could be expanded to a broader population.] Right. Right, right. So that's the other part of implementation that I didn't spend a ton of time talking about: getting people to adopt the evidence-based intervention is one component; the other component is scaling and dissemination.
And so a lot of my work does look at that, with things like reimbursement (making sure there's reimbursement available for the interventions we're trying to scale) and looking at different healthcare delivery models to incentivize providers to implement the intervention at scale. So, great points. Obviously I love this area, so I could talk about it all day. The use of human-centered design was really nice. Is that a one-off, or is that something you're doing elsewhere? We're doing it in other areas. For the PARADIGM project, we're doing user-centered design as the technical area 1 performer and giving feedback to technical area 2, which is developing the vehicle. For that, it's a little bit of a different scale, because they're building a physical vehicle, think ambulance-size or RV-size. So we're in the process right now of conducting user-centered design to outfit the vehicle using virtual reality. We haven't done it yet, but we're getting ready to: our clinicians can put on the headset, it will show them a virtual reality version of the CDP, and they can actually walk through real clinical simulations and give feedback on the design, like, I liked having the medication cabinet over here, but the sink was too far, or having the patient sitting here was just not the right configuration, let's move it. So that's another example of the user-centered design testing we're doing. OK. Yeah. Any other questions? [Question, partially inaudible, about connectivity ranges when doing things virtually and sending out a very expensive piece of equipment, and concerns for patients, staff, and nurses.] Yep, so, great question.
The question was about connectivity in rural areas: if we're doing telehealth or virtual care, being able to have the internet connectivity or service to communicate with the virtual care center clinicians at Huntsman. These are ongoing conversations that we've had, and one of the solutions that's been proposed is Starlink satellite internet. However, you have to be able to see the sky for that to work, and we have a couple of places, not many, where some of our clinicians go where they're in a canyon, and the canyon walls are too high to connect. So we are actively looking at solutions and workarounds. Right now, for example, if our clinicians are charting in our EHR, they may not be able to chart live and have to chart after the fact. There aren't too many places where we don't have any connectivity, but there are a few, and it's a challenge that we don't have a solution for at the moment, but we're working on it. All right, well, if there are no more questions, thank you again, thanks for inviting me to speak, and I hope you enjoy the rest of your day.