The Future of Clinical Cardiovascular Research – Cardiology Grand Rounds

Eric Peterson, MD, MPH, Director, Duke Clinical Research Institute, presents The Future of Clinical Cardiovascular Research.

ERIC D. PETERSON: Thank you very much. It is quite an honor and a privilege to come out here. I have to admit, this is my first time at Washington University in St. Louis. If I'd known it was so wonderful and beautiful, I might have come here and stayed much longer. It's been great to catch up with a few friends, and I look forward to doing the rest of that during my time here, as short as it is.

You know, when you try to come up with a title and a topic, you sometimes come up with these audacious ones. And I don't know what this was -- this was a late night -- we'll just do The Future of Cardiovascular Clinical Research. It is a year for audaciousness at Duke. If you haven't noticed, our football team is now trying to take on the number one team in the nation, and in December we actually have as many wins -- or at least only the same number of losses, I guess -- for the basketball team and the football team, a first for us.

These are my industry relations. If you want to go to the website, you can check out where we get our support. Upwards of half to 60% of our funding does come from clinical and industry sources, so that has to be acknowledged.

Let's begin the talk with the amazement that we find every day. Doug and I have, for many years, done the Northwestern conference, and in part this allows me the pleasure of seeing some of the best and brightest young minds present their science. We live in an era of wonder. There's no doubt about it. The changes that have happened in our field during just my short career have been nothing short of awe-inspiring. We have unraveled the genome of individuals, as our discussions covered last night. Using high-throughput technologies, we can now determine anybody's transcription patterns, even down to the cellular level, even down to the moment in time, to understand better what is making them tick inside, and ultimately to unravel how and what causes disease -- and even what disease is in this modern era.

When it comes to imaging, we can now image in ways that even I can understand, and actually understand what I'm looking at. 3D echo, nuclear medicine, CT, and even MRI allow us, again, to diagnose disease in ways that we never could in the past. Yes, that is a very large thrombus in need of some novel anticoagulant. And we have treatments that are, again, just short of amazing: the number and diversity of drugs that we have available, various amazing devices -- again, conversations from last night -- and now beginning to think that we can repair organs that we thought in the past were lost. It is the best of times.

But we are also challenged with, as Dickens put it, the worst of times. There are major challenges to our field that perhaps are not new, but remain daunting for us. We have poor translation of bench findings into effective therapies, drug and device approval remains a challenge in this country, clinical trials -- which I will cover later -- are in somewhat of a crisis in the US, and there's poor adoption of the things we know work into routine, everyday practice. One need only look at this slide to see both the promise and the challenges that exist in our field.
This slide, now even becoming dated, describes the thousands of compounds that any one pharmaceutical company might face in the development of these agents. As you go down through this funnel, you see both the amount of money being spent on these various developmental agents and, on the right side of the slide, the time factors and delays that happen along the way, such that it takes up to 15 years and several billion dollars before one agent that will be useful in routine clinical practice comes out the other end of the funnel.

This is an even more daunting slide. It looks at the declining return on investment in the pharmaceutical industry. Around 1950, you got somewhere between 30 and 40 new products onto the market per billion dollars invested. Yet every nine years there's a halving of the success rate per billion dollars invested, such that by the most recent years, as you can see, well under one product is generated per billion dollars invested -- a major challenge. And for us as scientists, it's not only support from industry that is a challenge these days, when industry doesn't see the returns it used to. Even our own government has been challenged in recent years, and the decline in NIH funding levels, in particular for NHLBI, has been striking to many of us, and has frightened many who are thinking about careers in science.

But the challenges don't stop at that level. We can talk more about how consistently medical care is being delivered. Costs, both in the US and around the world, continue to rise, and many have questioned the value for the money that's being spent. There's widespread variation in clinical care. Evidence is often lacking. Adoption of evidence is slow and incomplete, and I'll show you examples of that. And disparities abound.

So, a couple of slides just to defend those points. If we look at the rise in cost both in the US and around the world, one can see the degree to which our costs continue to rise relative to the incomes of those in the US. But even more striking, if you look on the right side at our life expectancy in the US: what do we get in terms of value relative to what we pay to take care of our citizens? When you see this degree of divergence, you may argue about a decimal point, but these are magnitudes of difference. So something is being done slightly imperfectly here, and the return on our medical investment, in terms of changes in human life in the US, can be challenged.

These are even more daunting facts. This is a display of the 19 top developed, Western-world countries, and where we rank in terms of the quality of care being delivered to those individuals -- or, in this case, lives lost relative to comparator countries. Red is not good; that means you're below the mean. The rankings are shown in the numbers, as you can see. On almost no front is the US doing well relative to its peer countries. So although I'm very proud of the health care system that we have in this country, there are stark facts that make you question whether we have the best system in the world. Variations like this have been shown multiple times, and economists are thrown by this.
With this wide variation -- three-, four-, or five-fold variation in almost any test that you look at, or in the spending patterns that we see -- one is challenged to understand where it is being done right. Is it overuse in certain areas, or underuse in others? And many believe that, in fact, the challenge is that we just don't have enough evidence.

If we look at our own guidelines -- and cardiology should be very proud of having some of the most evidence-based guidelines in the world -- and then look at these conditions and the number of recommendations that are based on large randomized clinical trials, or level A evidence, you can see that for no condition are most of the recommendations based on that level of evidence. And if you get to conditions such as valvular heart disease, under 1% of the recommendations actually have clinical trial evidence to support them.

And then if you take it a level lower and look not only at clinical trials in general, but at clinical trials done in US populations, it becomes an even bigger challenge. We ran some data within ACTION, the large MI registry run by the ACC/AHA, and looked at patients who would have been potentially eligible at that point in time for a large randomized trial, and then how many of those patients were actually enrolled -- and the number comes in at around 3%. We've actually looked at this at our own institution, and we do no better in terms of the number of trial-eligible patients who are consistently enrolled.

And then, of those patients that we do enroll in trials in the US, how many actually represent the patients that we see every day in routine clinical practice? Again, looking at acute coronary syndromes, the average patient enrolled in the largest acute coronary syndrome trials was around 63 years old. Registry populations are a full half-decade older, with significantly higher degrees of comorbidity, making extrapolation of the results of those trials a challenge.

And then, even of the evidence that we do get, how often do we adopt it routinely into our clinical practice? The classic quote -- that it takes upwards of 17 years for only a small percentage of the evidence to be adopted -- is out there, and should be a quote that most of you are daunted by. And then there's the real-life example of this. Early in my career, I looked at the patterns of use of simple, evidence-based, class I indicated therapies in myocardial infarction, and found that a majority of the patients who were eligible for these therapies in certain conditions didn't receive them, with wide variation between hospital types. The yellow bars here show the leading institutions in the country, the top quartile. Even among those, there was imperfect care -- eligible patients not receiving therapies that we would think would be simple and easy to give consistently. The red bars, however, are the bottom quartile of US hospitals, 20 to 30 points lower in the use of these medicines that are potentially life saving. Furthermore, we then went on to correlate the degree to which institutions delivered evidence-based therapies with outcomes, and found 40% variation in mortality rates from the leading to the lagging institutions. Now, there are very few therapies that we've developed in medicine for which there are 40% differences in outcome.
Yet simply by consistently using the medicines that we know are best in these patient populations, we could have had that much of a difference.

And then there's this issue of disparities. Years ago, we looked at the use of implantable defibrillator devices, and found, first off, that only about a third of the patients who were eligible to receive these devices actually did. So no one was getting good care. But if you happened to be a white male, you were pretty golden. I like that. If you happened to be black or female, your odds of getting these devices fell precipitously. And one could say, well, that's a very expensive, high-technology device. What about simple things? So here we just compared people who had private HMO insurance to those under 65 who were on Medicaid, and we looked at the use of medicines like aspirin -- pennies a day -- and at their odds of receiving these therapies relative to their peers, adjusting for everything else clinically that we could. We still found the Medicaid patients 20% less likely to receive these simple therapies. Kind of disturbing information, because of the consistency with which we see this.

But I'm not here only to give problems. In part, I want to talk a little bit more about the future. Where should medicine go, in particular cardiovascular clinical care? The few key points I'll bring up for you are these: we are entering a new era with expanded data possibilities, both in terms of genomics, proteomics, et cetera, and in the degree to which we capture routine clinical data in electronic health systems that will allow us to use that information to become smarter in the future. But what we need to do is convert ourselves from the old world, where research was done in a lab or collected in somebody's clinical database, to one in which it's a part of routine clinical practice, the scientist and the clinician are working routinely together, and research is a part of everyday life. And this isn't about my data and how large my database is, but rather how our data can be used to change practice.

This concept of a learning healthcare system dates back many, many years. At Duke, a man named Eugene Stead -- quite a character -- was our first chair of medicine, and well before computers came into vogue -- actually, at the time, punch cards, for those who are not familiar -- he talked about the idea that chronic disease can't be studied by the ways of the past; we needed to exploit computer technology to change medicine. Now, this was well over 50 years ago. And in fact, his dream changed how medicine was done at Duke. He created the Duke Databank for Cardiovascular Disease. Most importantly, he created a generation of statisticians and clinicians working together who understood the idea of using routinely collected data on a given patient, and then following that patient forward in time, with an eye to how we can treat the next patient better.

Currently at Duke -- and I'll use it as a model; I will tell you it's idealized in the rest of this talk, so that's my footnote -- we have a model of how a learning healthcare system potentially ought to work. There's this concept of the translation of medicine: as we know, from the earliest bench-to-bedside, or T1, transitions, to T2, to ultimately translation into larger populations, and finally out to the community and to the world.
And we've tried to set up institutions that are seamless across those translations, allowing this research to be done, again, from the scientist and the earliest discoveries all the way out to the community and global health worker. I will focus on the institute that I now direct, because it is quite an honor and a privilege, and I think that we actually extend on both ends of this to touch the ends of science. On the one hand, we deal with the earliest discoveries in phase I units; on the other hand, we deal with community and global health.

We are a mission-driven organization, which I am very happy and proud to say. Our goal is, in part, to develop and share knowledge that improves patient care around the world through innovative clinical research. How does that translate? Well, we've now grown to some 1,200 people who are devoted to this mission. In fact, we integrate the things that Gene Stead told us years ago: you need good clinicians with good questions, good statisticians who can interpret the data and help with that, and then good operational people who can consistently carry out how this research ought to be done. It ought to be driven by a mission -- again, to define new knowledge, but not stop there: actually figure out how to get physicians and patients to adopt it to improve patient health. It ought to be collaborative -- again, we've somehow gotten 120 physicians to work together to support this place on a routine basis. And it ought to be constantly evolving. We ought not to say that what we've been doing is a successful model and it will work in the future; we ought to be constantly changing it.

These are the current strategic plans that we've laid out for the future. In part, the DCRI ought to be a place that brings the promise of personalized medicine to reality; we need to lead innovations in how clinical studies are done, randomized trials as well as registries; and we need to continually push the boundaries of how we analyze data. The future lies not in the collection of the data, but with those who can interpret that data in a meaningful and measurable fashion. We need to support novel implementation and outcomes research studies, so that the knowledge we collect on the research side is routinely applied to our patients. We need to design educational platforms for the future, both training the next generation of researchers and educating the world on what we have discovered. And finally, we need to do this on a global platform. We'll cover all of these points in the rest of the talk.

So let's talk a little bit about the T1 and T2 translations. On the T1 side, we need to do much more deep investigation of individual patient populations. In this case, we use the science and characterization of patients early on, and those with disease processes, to test out our newer therapies and see if they'll make a difference in phase I studies. On the late-translation side, it's got to be broad: we need larger patient populations, larger Ns, so that we can test the generalizability of given therapies in clinical practice.

So on the T1 side of things, we've developed a state-of-the-art clinical research unit. It's 40 beds. It uses a fair degree of high-tech infrastructure. It allows patients who have disease, in a clinical setting within a hospital, to be tested routinely by scientists in first-in-man studies.
I will tell you, this unit loses us $3 million or more every year. But we continue to support it under the idea that this is an important and key thing that science needs. Painful for my budget. We have also tried to integrate science into our studies, but I will tell you, here we're imperfect. We've now reached out across the campus and worked with many of the experts around the campus on doing genomics, proteomics, and metabolomics studies within larger clinical trials. And then, as a third component of this, we are now trying to move outside of the clinical trial setting and into routine populations. Using samples of blood that had been available, we started a series of studies in the Kannapolis area -- around Charlotte, for those who are familiar -- where we're trying to characterize certain disease states, ultimately moving to a prospective Framingham for the 21st or 22nd century, and then ultimately doing this more routinely to define new disease states.

Shifting from the phase I work to the second phase, or T2, you have to recognize up front that it is a challenge to do clinical studies in the US now. It's not by chance that it's become both expensive and rather onerous for those who want to carry out clinical studies. Look at the complexity of clinical trials: the number of data points collected, the number of monitoring efforts, the number of regulations that are put onto this field, and ultimately how that translates into extra, perhaps billions of, dollars that we put into studies -- and it's unclear, again, whether there's value in terms of a better answer coming out the other side.

Where do we think the future of clinical trials is going? Summarized here are several points that I'll cover in the next few slides about simplification of trials, and then the development of ways in which we can, again, bring trials into routine clinical practice, so that they're done more simply, more quickly, and in larger numbers of patients. One must just remember back to the earliest days of GISSI. If you remember, GISSI was among the first trials that defined the role of aspirin as well as lytic agents in the care of myocardial infarction patients. It was started without funding. It was started among clinicians -- in this case, in Italy -- who wanted to understand whether these newer drugs would work, and it used simple randomization in large numbers of patients to show the benefits of the therapy for, in those days, very few dollars.

As we have moved forward, though, the world has gotten more and more complex. The FDA is, in part, challenged to create more and more regulations. Whole industries -- CROs -- have developed around this idea of large but complex studies. We are working very hard with all three parties, in part with industry as well as the FDA, to try to simplify back, and to emphasize those parts of clinical trials that are most meaningful. This is the idea of quality by design: that you would design a study from the get-go around the few important factors that we need to know consistently, and emphasize getting those data, but then scale back the parts of studies where there's limited new value. One way in which we think we can do better moving forward is the recognition that collecting data specifically for a trial, separate from the care of that patient, is perhaps not the most ideal or efficient way to do this.
As we all know, much of the data that we collect is now collected in electronic health records. If that were done -- and it's a big if -- in a more standardized fashion, and that data could be retrieved across multiple centers, therein lies much of the information that you might need for a clinical study. At the very least, these clinical registries can be used for planning. You can get an idea of how many events are occurring and what populations are at risk. You can identify investigators who might be interested and who treat large numbers of these patient populations. You can even go down to screening large groups at your own institution, so that it's not a nurse just looking for one patient to show up in a clinic; rather, you can go through an entire database within your hospital and identify the few patients who might qualify for a study. And ultimately, we'll try to get to the stage where we use these clinical data to substitute for, or augment, the data that might be collected within a randomized study.

Can this be done on a real-world basis? Well, those of you who attended the European Society meeting -- I think it was where TASTE was presented -- saw the amazing power with which clinical registries can be used to carry out routine clinical trials. That study looked at thrombectomy catheters, and it was the largest device trial done in acute myocardial infarction. The amazing thing about this study -- beyond the finding, which in fact may question whether thrombectomy works -- was the number of patients they enrolled relative to the number of patients who were actually eligible. It reached upwards of 50%, and the Swedes were somewhat embarrassed that only about half of their eligible patients got enrolled. And then look at the price per patient. $50 doesn't buy, at our place, a pencil, let alone the enrollment of a patient within a randomized study. An impressive, impressive result.

But one can say the Swedes can do things that we can't do. And, being a Peterson, I'm supportive of that. But can we do it in the US? Well, maybe. Sunil Rao and colleagues at Duke just reported this out at the TCT meeting, working with the American College of Cardiology. As you know, the ACC has a large database of patients undergoing cardiac catheterization that represents almost every hospital in the US. The data that we routinely put into those databases were used as the backbone for carrying out a trial of radial versus femoral artery catheterization in women. That allowed the trial to be enrolled at a much quicker pace than had been anticipated, and it cost a fraction of what it would have had we had to pay for each of these patients to be enrolled separately.

Another example of this is not even the routine individual-level randomized trial, but the cluster-randomized trial. We carried out a large longitudinal registry called TRANSLATE-ACS to look at the patterns of care of patients with myocardial infarction once they went out of the hospital. But we decided, since we had this large registry up and running, what if we threw randomization in at the level of the institution? As many of you know, whether platelet function testing is of value or not is a matter of great debate. We decided to randomize at the level of the hospital, giving VerifyNow devices to hospitals that weren't routinely using them. And the idea was that hospitals were encouraged to use them.
We didn't give any prescription about what the physicians should do with the information they had, but wanted to see, in part, how it would affect their clinical care and choice of a P2Y12 inhibitor, and then ultimately how it affected outcomes. Now, this gets at -- if you're good at statistics -- a way of actually comparing the efficacy of the device. But it also gets at a much more real-world question: how do physicians incorporate knowledge and use it in their routine clinical practice?

This idea isn't just limited to our registries. Moving forward, there's the concept that we can get whole centers engaged in this type of work. NIH is fully behind this. If you hear Mike Lauer talk, this is all he'll talk about. They've built out the Collaboratory. Rob Califf and Rich Platt from Boston now head this effort to use these centers to carry out cluster-randomized studies to answer simple clinical questions. And then, more recently, PCORI will announce awards in the next few days -- I don't know if you have any applications in, but we certainly do -- around the idea of whether large systems, two or more, can collect and routinely put their data into a large network of institutions sharing electronic health records, with the ultimate goal that, once this data is shared, it can be linked to do various forms of discovery science, as well as to perform clinical trials. This is a big bet. There's over $56 million that will go to eight centers -- actually, this has expanded since the slide was put out; I know it's going to be up to 11 systems that will be selected -- and then another $12 million that's going to support up to 18 patient-centric, or patient-powered, clinical registries.

The former is well known to you -- the idea that hospitals or whole areas of the country could share electronic health data. The idea that patients themselves can share information, and routinely collect information that is valuable for us as researchers, is a little bit more of a challenge to our model. We were talking just before this conference began about waves of the future and when you believe in them or not. And I can tell you, I was a little doubtful of the idea that my patients would get so engaged in research on their own disease state that they would routinely collect valuable information. But having dealt with groups like PatientsLikeMe, I understand that social networking is perhaps not only a tool that my children use late at night, but one that we as researchers might want to use more routinely in our clinical practice.

We partnered with the Familial Hypercholesterolemia (FH) Foundation to allow these patients to routinely enter and collect data on how they're being treated in routine clinical practice. This is quite important, because patients with familial hypercholesterolemia are young in general, and they travel a lot. So while they might be seen in one lipid clinic at Barnes today, next week they'll be at another center somewhere else around the country. The traditional model of collecting data only through physician-centric channels will not work in this group, and we need, in part, to use patients as our mode of information gathering.

And our work to identify whether drugs work can't stop at the point that they get through the FDA. As daunting as that is, that system fails many, many times, Vioxx being one prime example.
And if we use routine reporting of problems, many problems will be missed. Here we show an editorial discussing the challenges of using a retrospective system to try to identify problems with drugs or devices in routine clinical practice, and calling for a prospective means of monitoring the safety of drugs and devices that have reached the market.

Mini-Sentinel became one model of this. This is a program run through the FDA in partnership with many large health systems, insurers, and other collectors of large databases. They can use their information -- in this case, crude claims information -- to understand what diseases patients have and what treatments they receive, and then, through a distributed analysis, identify when problems arise and understand whether those problems are real or not. A prime example of the power of this system came with a question about dabigatran. As many of you know, this is an oral direct thrombin inhibitor. When it hit the market, there were reports coming into the FDA that patients were bleeding badly on this drug. And in part, those were true -- each of those cases was individually correct. But the problem was, you had no idea of a denominator, nor a comparator number. Having that information based on hundreds of thousands of patients treated across the US, the FDA was able to quickly understand that, while there were bleeding events with dabigatran, those events occurred in routine clinical practice at much lower rates than those actually seen with warfarin. It's just that people don't report one more patient coming in bleeding on warfarin.

Can we use more sophisticated information to give us better answers about the safety and efficacy -- or effectiveness -- of therapies in routine clinical practice? We've tried to work over the years with the major professional organizations, again, to use the data that is in their own databases to help define not only new knowledge, but the safety and effectiveness of the therapies we're using to treat these patient populations. These databases are quite large, and representative of much of the care across the US. The data become more powerful if you start to combine and link these databases together. Those databases, while deep in clinical information, are shallow in terms of duration of follow-up. But what if you link that information to claims-based information sources, so that you can follow patients longitudinally over time? If you link it to samples where you can get better descriptions of patient populations, you can do translational discovery. If you have information about how the patients were treated, both cross-sectionally and over time, you can start to do comparative effectiveness research. And if you then interject randomization, you have the power of doing a practical clinical trial.

We've been supported by several different agencies. Here's one. The Agency for Healthcare Research and Quality has allowed us -- and these were our aims -- to develop these clinical registries into longitudinal platforms for discovery, and also for application in various disease states. Most recently, as an example of this, I've had the pleasure of working with both the surgeons and the cardiologists on the new and developing technology of TAVR.
And in this case, it was a great harmonization of interests and efforts: surgeons, cardiologists, the FDA, CMS, and industry all got in a room and said, we'll create one national registry that we'll use to do the post-marketing surveillance of this device as it hits the market. In part, it allows us to track the safety of this device, and of others, as they get to market; in part, it allows us to extend out to do other studies to determine whether expanded indications should come onto the market or not. And finally, it can be used to determine whether we as clinicians are adopting these technologies and getting results similar to those we got in the clinical trials. You'll see a lot of publications, hopefully, coming from this registry over the years to come.

But it isn't enough to do publications and discovery. It's just as important to translate those discoveries into routine clinical practice. So the final part of my talk will center on this last translation, from knowledge to practice. We've learned a lot over the years about how to do this well, and I thought I'd gotten pretty good at it, to be honest. We've found that if you measure and feed back routinely what people do, they go through the Kubler-Ross stages of shock and denial, but finally get to acceptance, and ultimately change. Physicians are remarkably responsive to data in most situations. Over the years, across the many registries I've participated in, I've seen care go from the challenges I originally reported to almost perfect care in many cases. For myocardial infarction nowadays, if I showed those slides, you couldn't distinguish hospitals, because everybody is sitting up around the 97th percentile. Amazing responsiveness to data.

We've shown that this isn't just an epiphenomenon; we've actually tested it in clinical trials. Years before it became popular, we carried out a cluster-randomized study of quality improvement in the Society of Thoracic Surgeons' national database. We randomized institutions to receive targeted feedback and education emphasizing, in this case, the use of the IMA and perioperative beta blockade, and then saw the changes in practice in those institutions relative to the others -- a rather remarkable example of this type of work.

You can do it in the US, but can you do it globally? Most recently, we worked with investigators in Brazil on a cluster-randomized study to see if we could improve acute coronary syndrome care in Brazil, a developing country, again using very similar processes -- education, feedback, simple tools, reminders, et cetera -- to show that, in fact, we could drive differential care in those institutions in a randomized study, with simple interventions that, in this case, cost less than a dollar a day per institution.

But I can do it nationally, I can do it globally -- can I do it at my own institution? This is the grief that Rob Califf often hands to me. I head up the performance evaluation group within our own Heart Center, and I find that, in fact, it is a challenge. I've practiced in Duke cardiology clinics for upwards of 20 years now. And for all those 20 years, I've never been told how I do with the panel of patients I treat -- whether I do simple things, like control their blood pressure. Now, we know that blood pressure control is a good thing; we now argue about what the right number is.
But we know that control is probably good. How do we do at Duke with regard to controlling our patients' blood pressure? On average, about 70% of our patients have their blood pressure controlled, which is good by national standards. But those are not Duke numbers; those are not top-of-the-country, top-10 numbers. We looked at it by individual provider, and lo and behold, there is a threefold variation among individual providers at Duke clinics -- all of those great Duke docs -- in the degree to which your blood pressure is likely to be controlled. And this persists after adjusting for the fact that I take care of the poor patients, and I take care of the uneducated, blah, blah, blah. Controlled for all of that, the threefold variation persists.

So I know the game. I just need to feed this information back to my colleagues, and everything will turn out great. Variation will shrink. Performance will improve. Life will be good. So I did that exact experiment with one of our residents, Ann Marino Barr, and we fed back the information for a given year. We took the grief. The poor resident survived, but barely, after telling Duke cardiologists that they didn't know how to treat blood pressure. And over the course of the year, this is what I saw. If you're having trouble determining the effect, it isn't there. And this was humbling -- to find that I can, again, change things around the world, but I can't change my own institution. So Rob is right.

But we haven't quit yet. And this is the kind of literature, now, that I read on planes, rather than necessarily reading the latest journals. If you haven't had a chance, read the book Switch. It's one of many that talk about behavioral modification. I'll simplify it for you. Really, like most business books, it comes down to about three ideas; beyond that, you can't go much further. First is the idea of directing the rider, which comes down to making it simpler for people to do what they need to do routinely. In this case, we had been telling people three months after they treated a patient, by the way, you didn't control their blood pressure or make a change that day. So in this regard, we're developing IT tools that will flag it for you: by the way, their blood pressure is 150 over 100, you scheduled them back in six months, and you made no changes to their medication. The odds that that person is going to be controlled when they come back are probably not good. You want to potentially address this. We're doing very similar things now with anticoagulation.

Then we have to motivate the elephant, the concept that -- these are their phrases, not mine -- we need to provide a little bit of incentive behind these things. In this case, we're working with our division chief to try to put a little bit of money on the line. And then, finally, there's the idea of shaping the path, which is, again, to make it easier to work in a collaborative model with -- lo and behold -- those things called patients. Because controlled blood pressure isn't just what I do; it depends, in part, on my patients working with me to monitor their blood pressure at home and, ultimately, to take their medications. So how can we get to a world of shared accountability? Again, technology provides us with answers in this modern era. We worked with Microsoft and the Heart Association to develop a system called Heart360.
It allows patients to monitor their blood pressure in their own home environment, and then feed that information back via their computer -- or now, ultimately, their handheld device -- to a clinic, so that you can actually see what happens to your patients' blood pressure at home. We've tested this in randomized clinical trials, at Kaiser as well as in the Duke system, and it works. But those are randomized clinical trials. So over the past year we did a project called Check It, Change It, where we took on the audacious goal of seeing whether we could affect blood pressure at a county level in Durham, using a combination of this technology and community health workers who would take the technology out to those who don't have it -- putting kiosks into the poor neighborhoods in Durham so that people can get their blood pressure checked and, with the help of a community health worker, get that information into the computer, so that we can actually make changes to their blood pressure medicines on our end. So rather than the patient coming to the clinic, the clinic now goes to the patient.

The final issue I want to discuss in the closing minutes is that, even though I've spent much of this talk, and all of my career, working in the US, these same issues we've discussed are more global in nature than they are focused just on the US. If we look at where cardiovascular disease is going: the Western world has been the leader in the past in terms of cardiovascular problems, because the developing world was suffering from infectious disease and other killers; but communicable diseases are getting controlled, and non-communicable diseases are now surging in the developing world. Because of that, we are carrying out clinical research studies around the world, and have partnerships set up to carry out these studies both in the US and, again, in other countries around the world.

But carrying out that research in other countries is a challenge. And, in fact, we had another discussion last evening about the degree to which data vary depending on where and how they're collected. So we are trying to find ways, and partners in these places, that will actually do a good job of producing high-quality, ethical collection of information -- a major challenge in many places around the world, but one on which I'm enjoying spending a fair degree of my time. And it isn't only about the idea that we can go into countries, get their data, and test our drugs and devices in their settings. It really ought to be about finding ways in which we can help their health systems, and their people, ultimately get better health.

One example, through an NHLBI global health grant, has been teaming up with the George Institute in China, where we carried out a series of studies -- one looking at controlling blood pressure with salt substitutes, and another, presented at the Heart Association meetings, looking at delivering better primary care in rural community settings through a number of different interventions. Now, I will tell you, these first sets of interventions, though huge in numbers of patients, were not successful. So that challenges us, once again, to think about the next generation of interventions we can do to be more successful.
But the challenge of actually working in these countries, and working with individuals in rural settings, has been quite gratifying for me. And then perhaps the best way to help these countries develop is to develop their own leaders for the future. At any one point in time -- we're just about to cross over -- 40 fellows are training at the DCRI. And again, Doug and I discussed this idea; he is committed to you guys, and we actually do like spending a lot of time with new individuals. This has been part of the richest, and perhaps most successful, part of my career. We work on the diversity of who these fellows are, and now nearly 30 of them come from countries other than the US. So it's a very diverse group of individuals coming to spend time with us. Thank you very much.