As the future of medicine, medical students, residents, and young physicians will be impacted by the rise of AI across the industry. Join the discussion and let's talk about the consequences of AI, how we are engaging with the technology, and how we are preparing for its role in clinical practice.
What core concepts in AI should all medical students and residents be exposed to now? What applies across disciplines (thus amenable to addressing in med school) and what is more discipline-specific (thus better addressed in residency)?
Can you each share what type of work you are doing at the intersection of AI and healthcare? How do you imagine these efforts potentially changing an aspect of healthcare in the near future?
I work with a diverse team of experts at the Duke Institute for Health Innovation (DIHI). Last year we developed post-operative risk prediction models to help identify high-risk patients for referral to pre-operative optimization clinics. Determining who is at risk for post-operative complications is extremely difficult. Our models were built to help physicians decide who should be sent to these specialized preoperative clinics without overextending the clinics' resources. Through this work, we demonstrated that our models have higher sensitivity and specificity than expert opinion alone. Paired with physician expertise, these models can identify patients at high risk for serious complications, potentially lowering healthcare costs by preventing complications and by targeting only those patients who require preoperative intervention. More broadly, I believe that preventive medicine is an important direction and opportunity for machine learning in healthcare. As healthcare reimbursement changes to incentivize preventive medicine through population health, machine learning, which is fundamentally a prediction tool, is well positioned to help healthcare make that transition more efficiently.
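To make the comparison against expert opinion concrete, sensitivity and specificity can be computed directly from a model's binary predictions. This is a minimal stdlib-only sketch; the labels and predictions are made-up example data, not the DIHI models' actual outputs.

```python
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels (1 = complication)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 8 patients, 1 = had a post-operative complication.
labels      = [1, 1, 1, 0, 0, 0, 0, 1]
predictions = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(labels, predictions)  # 0.75, 0.75
```

The same two numbers computed for the model and for expert referrals allow a direct head-to-head comparison on a held-out patient set.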
I was introduced to the concept of machine learning through DIHI's scholarship program. At first I had no idea what machine learning or artificial intelligence was and was admittedly intimidated! However, by taking classes and developing my first models with a team, I learned that model development is usually the easiest part of these projects. Successfully implementing new innovation and technology in healthcare is far more difficult! I now frame my machine learning projects as taking some number of inputs, x, to predict an outcome, y, with numerous methods for doing so. Naturally, my understanding of machine learning changed as I was starting from the very bottom, and over the course of two and a half years it is still changing as I continue to learn!
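The "inputs x to predict y" framing can be illustrated with the simplest possible model: a one-feature linear fit by ordinary least squares. This is a toy stdlib-only sketch with made-up data, standing in for the many methods (logistic regression, gradient boosting, neural networks) that share the same fit/predict shape.

```python
def fit_line(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

model = fit_line([1, 2, 3, 4], [2, 4, 6, 8])  # toy data lying on y = 2x
```

Swapping in a different method changes only how `fit_line` works internally; the x-to-y contract stays the same, which is why the framing generalizes.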
I was introduced to AI during my training in our retina clinic at the Iowa City Veterans Affairs Medical Center (VAMC). At our VAMC, many patients with diabetes participate in a telemedicine program in which they visit a photographer who dilates their eyes and captures fundus images that are sent to a remote reading center. An ophthalmologist at that center interprets the images and refers patients with findings concerning for diabetic retinopathy to our eye clinic, where we manage those needing treatment with a combination of laser and intravitreal injections.
In 2018, the first autonomous AI system for the detection of diabetic retinopathy received FDA approval. This Iowa-founded AI system gives an immediate diagnosis in the primary care setting, bypassing a days-long delay for interpretation by a clinician and eliminating the cost and time associated with a separate appointment for telemedicine screening. The University of Iowa is the first healthcare system to implement this technology in a primary care setting. Diabetic retinopathy is the leading cause of vision loss among working-age adults in the U.S., and fewer than 50% of patients with diabetes adhere to recommendations for eye exams, so improving screening and diagnosis of this potentially blinding disease is imperative.
I am currently collaborating on a project investigating primary care providers' perceptions of AI use in the clinical setting, specifically for the detection of diabetic retinopathy. We are interested in primary care physicians' level of comfort with incorporating AI into their practice and their concerns about its impact on quality of care, safety, privacy, and the patient-provider relationship, as well as the potential for bias based on race, ethnicity, gender, and age. Our goal is to design an educational intervention that can help demystify AI in the primary care setting and improve physician comfort with these emerging technologies.
My work in AI is with a team of physicians and medical students in Howard University's Radiology division. In the past year, we started with a modified version of an open-source image-classification neural network, TensorFlow's Inception v3, to create a network that can identify medical devices on chest x-rays (CXR). The long-term hope is to fully develop a deep convolutional neural network that can be applied to reduce unnecessary healthcare expenditure and decrease errors in electronic health records.
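A minimal sketch of the transfer-learning setup this kind of project typically uses, assuming TensorFlow's Keras API: the pretrained Inception v3 backbone is frozen and a new classification head is trained for the task. The class count and training data here are placeholders, not details of the Howard University project.

```python
import tensorflow as tf

NUM_DEVICE_CLASSES = 5  # placeholder: e.g. types of lines/tubes seen on CXR

# Load Inception v3 pretrained on ImageNet, dropping its original
# classification head; freeze it so only the new head is trained.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_DEVICE_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, ...) would follow with labeled CXR data.
```

Freezing the backbone lets a relatively small labeled x-ray dataset train a useful classifier, since the ImageNet features transfer to new imagery.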
The results so far are rather amazing, with accuracy rising as more images are fed into the program. My take on AI changing an aspect of healthcare is that, at a baseline, it will be utilized as a common verification tool to supplement a medical professional's findings. On a grander scale, it may altogether phase out parts of the practice of Radiology. That's why it's important for me to remain at the forefront of the field, as it may very well come to define what it means to be a radiologist by the middle of this century.
My interests in machine learning include work both in computer vision (CV) and natural language processing (NLP). Within CV, I am currently working on the development of deep learning models that can automatically classify MRI images and predict patient outcomes. With the implementation of tools that incorporate these models, we may be able to provide patients with treatment options specially tailored for their needs. The AI interpretation of a patient’s imaging, when used in conjunction with additional clinical factors and physician insight, will broaden access to precision medicine for patients with pathology identifiable on imaging, especially in care settings that do not have highly specialized faculty available.
Within NLP, I am working on the development of tools that allow us to take advantage of the wealth of unstructured clinical text data in the electronic medical record (EMR) in order to improve tracking and management of disease at both the individual and population levels. Physician burnout is on the rise, and the increasing documentation demands of the EMR are a strong contributing factor. We can transition away from quality improvement initiatives that currently require manual surveillance of clinical documents toward automated solutions that monitor the EMR in real time and do not require providers to keep up with specialized or highly structured documentation protocols. With improved integration of AI into the EMR, several applications that ease the friction between providers and the EMR become possible: intelligent chart search, automatic summarization, and intuitive question-answering tools, to name a few. In the near future, providers can expect to be presented with the information most relevant to a patient at the time of a new encounter and to spend less time entering information into the EMR.
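In its simplest form, extracting structure from clinical text can be done with pattern matching, which is a useful mental model even though real clinical NLP pipelines use learned models. The note and pattern below are made-up examples, not part of any actual EMR tool.

```python
import re

# Toy unstructured note; a rule-based pass pulls out a lab value.
note = "Patient reports improved glycemic control. Most recent HbA1c 7.2%."

match = re.search(r"HbA1c\s+(\d+(?:\.\d+)?)%", note)
hba1c = float(match.group(1)) if match else None  # 7.2
```

Running patterns like this across a whole EMR is what enables automated surveillance without asking providers to document in a rigid structured format.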
Medical imaging is our way of looking inside the human body without having to lift a scalpel. As a radiology resident, I spend hours poring through images to identify subtle findings and piece together diagnoses. However, my interest in AI started about 10 years ago, when I was an electrical engineering student working at GE Healthcare. As I programmed medical imaging devices, I became curious about the information in medical images that doctors can’t see – yet.
As an MD-PhD student, I developed new approaches to mine information from MRI data, discovering latent signs of disease not yet visible to the human eye and making them visible using a technique called "transport-based morphometry." What is cool about developing a new technology is that it can be applied to many different problems. The technique was able to predict future osteoarthritis three years before symptoms in healthy people and to discover structural circuitry underlying reaction-time deficits in the brains of concussion patients. I am currently working on new research that builds on this approach.
Personally, I am most excited by the diagnostic and treatment problems in medicine that are yet unsolved – either by humans or machines. There is a dark side to imaging and a wealth of information yet untapped. That’s where I think AI will enable the next level of diagnosis in radiology.
I currently use AI at Vanderbilt University to assist me in segmenting various parts of the brain to better understand neurologic diseases, as well as to create augmented reality models. Studying diseases that affect the brain relies heavily on medical imaging to detect changes in anatomic structures. Traditionally, one needed to identify and manually draw around each region of interest. Manual segmentation can be cumbersome when there are multiple regions of interest and a large group of patients to analyze. I use AI to auto-segment various parts of the brain and calculate the volumes of these structures, which I can use to study atrophy in association with the severity of clinical symptoms. I also use a patient's MRI scans to create augmented reality models for patient education before neurosurgery.
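Once a scan is auto-segmented, computing a structure's volume reduces to counting labeled voxels and multiplying by the physical size of one voxel. This is an illustrative stdlib-only sketch; the mask, label value, and voxel dimensions are made-up examples.

```python
def structure_volume_mm3(mask, label, voxel_dims_mm):
    """Volume of one labeled structure.

    mask: flat list of integer labels, one per voxel.
    voxel_dims_mm: (dx, dy, dz) physical size of a voxel in mm.
    """
    dx, dy, dz = voxel_dims_mm
    return sum(1 for v in mask if v == label) * dx * dy * dz

# Toy mask where label 2 marks a structure of interest (hypothetical labeling).
mask = [0, 2, 2, 0, 2, 1, 0, 2]
vol = structure_volume_mm3(mask, 2, (1.0, 1.0, 1.0))  # 4 voxels of 1 mm^3 each
```

Tracking this number across serial scans for the same patient is what turns a segmentation into a measure of atrophy over time.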
I realized that patients have a difficult time conceptualizing a 3-D object from looking at MRI scans. In addition, showing patients standard brain models may not be representative of their own anatomy. I wanted to create a better method of explaining surgical procedures. I use AI to segment the brain and reformat the structures to create a patient-specific augmented reality model based on their own brain scan. These models can easily be shown on smartphones. As personalized medicine becomes more popular, so should personalized medical education.
My work with Duke Hospital and the Duke Institute for Health Innovation has been focused primarily on combining different sources of patient data to create cohesive prediction tools and integrate them into clinical workflow. I am currently most excited about working with our Cardiology team at Duke University to improve identification, risk stratification, and ultimately management of cardiogenic shock patients. There are a few especially interesting aspects of this project to me: 1) shock is a complex and emergent disease process that is often recognized after the fact; 2) model inputs span a variety of sources including EHR, angiographic, and imaging data; and 3) there is tremendous interest, support, and input from our clinical experts.
To me, these efforts have set a great example of the deliberate collaboration between ML and clinical experts that every AI project in healthcare can benefit from. I have been able to appreciate firsthand the clinical expertise in defining the use case, understanding workflow, and providing intuition for model development, and at the same time the technical expertise in developing and rigorously testing models, finding opportunities in data, and creating robust yet user-friendly tools for practical incorporation.
My current work in AI involves the creation of models capable of reliably predicting outcome after acute ischemic stroke, using both quantitative neuroimaging as well as important clinical data. By combining data from multiple sources, such models are expected to be more accurate than current models in clinical practice, which tend to examine only a few discrete data points. Ideally, such models could eventually be incorporated into clinical trials which are seeking to better triage patients based upon their predicted outcome with and without treatment. In the past, I have designed software for interpreting the vascular source of perfusion throughout the brain by analyzing the pattern of radiofrequency signals that are used to non-invasively tag blood as it flows to the brain.
In addition, I have interests in using predictive analytics to improve the delivery of healthcare by identifying salient patterns in clinical data stored within the EMR. Hopefully, with further pushes by national organizations to enhance interoperability – the ability of health systems to communicate with each other more effectively and efficiently using more standardized methods to label, store, and transmit data – this concept will become more integrated with everyday clinical practice.
Currently, I am at the Duke Institute for Health Innovation creating models to predict the decompensation of patients in step-down units before it becomes an acute situation. Much of my understanding of machine learning came from my Ph.D. where I worked on biophysical models of proteins to predict whether a mutation to an antibiotic resistance protein (beta-lactamase) would increase or decrease resistance. I began to see similarities between the methods we were applying in our research to those that could answer clinical questions in patient data.
Moving forward, I believe that AI models have the potential to aid in reducing workloads similar to how safety features in cars reduce driving fatigue. By serving in monitoring roles, these models can summarize and integrate various data streams and present the physician with a clearer picture of a complicated clinical scenario. Second, I believe that these models are a means to share clinical insight and approaches across health care systems. A model developed in a data-rich environment such as an ICU could still offer valid predictions in data-sparse environments such as a clinic. I think this is an interesting area of AI research that has been explored in other domains but may soon prove useful in expanding the application of AI models across healthcare.
My work in AI originally started in 2008 during my time as a researcher at the Kellogg Eye Center at the University of Michigan. We had developed a non-invasive measure of mitochondrial function that could be used to detect when metabolic stress was occurring at the retina. Our task, which is ongoing, was to develop a normative database that could alert a physician when a patient had metabolic stress that was abnormal compared to the age-matched population. This is quite difficult, as there is bias based on patient location, race, age, and other factors. The ultimate goal, though, is to be able to notify a clinician when the metabolic stress of the retina is elevated and the patient needs to be referred or evaluated for an underlying disease process. This could also be expanded to monitor disease and track how it is progressing or improving with treatment.
The other area that I have been working heavily in is the area of genomic medicine. I have been using various bioinformatic pipelines to identify key driver mutations in cancer, specifically uveal melanoma and retinoblastoma, and determine what the prognostic implications of these mutations are, as well as identify potential therapeutic options. This is really the idea of personalized medicine, which is going to be a huge area of AI in the future and one that every medical professional should be aware of. I'll provide an example in terms of cancer but this idea could be applied in other areas of medicine. The goal would be that if a biopsy is taken of a cancer, a genetic analysis would occur that provides the exact pathogenic mutations to the clinician with the best therapeutic options for those specific mutations based on the latest clinical evidence. I do not believe we are quite there yet but that is the ultimate goal.
Currently, I have been working with a group of surgeons at the University of Virginia, as well as several engineers, to develop an algorithm that better predicts physiological volumes from inexpensive imaging modalities using a machine learning approach. We hope the project will allow healthcare providers who don't have access to more expensive imaging modalities to perform several screening tests that are currently only available at major healthcare systems.
I am also working with a team of surgeons to develop machine learning models to quantify the risk of several pre-operative risk factors. We hope that this will allow providers to more quickly synthesize information from the EHR and better screen patients for risk and preemptive treatment.
I am also interested in developing deep-learning based approaches to identifying clinical features in images of physical exams and have been involved in these projects intermittently over the last several years. My goal with all of these projects is to improve healthcare efficiency for providers and apply new technology to help extend the availability of high quality healthcare tools to populations outside of major health systems.
Excellent question! First, the amount of medical knowledge is growing exponentially. It is estimated that in 2020, the amount of medical knowledge will double every 73 days. Today’s doctors and especially future physicians will need to grapple with how to manage and stay up to date with this ever-growing body of knowledge. There is an opportunity for AI systems to help cull information. As doctors interface with new AI technologies, it will become important to understand the contexts in which AI systems can be useful and when there are limitations. These are general skills that can be taught in medical school.
As for residency, an exciting shift in medicine is the move from population medicine to personalized medicine. Our understanding of disease and treatments will become much more nuanced in every field of medicine – which means that it is an exciting time to be a resident!
I would agree with Dr. Kundu. AI systems will become prevalent in the medical field, and it is essential for future users (medical students) to know how to interact with them appropriately. Most medical students will become users of AI, relying on it to assist them in the clinic or the hospital, and/or developers of AI who collaborate with researchers to advance their field. I've taken on the task of holding seminars for students at my medical school to introduce them to artificial intelligence, explaining how it can be used in medicine as well as in business. I make sure to address the shortcomings of specific models, such as being trained on a biased sample that does not represent their patient population. Also, AI does not replace one's best clinical judgment. For students who are interested in gaining a deeper understanding of artificial intelligence, I explain how basic machine learning and deep learning models work. From my experience, this helps prepare medical students for the future of healthcare.
While I haven't started residency, I imagine residents applying machine learning or deep learning methods to streamline their work or to develop systems that best help their patients. With an increasing understanding of diseases and a better understanding of the healthcare system, residents are capable of building more sophisticated and clinically relevant models.
Wonderful that you are already offering seminars for your students; I would love to see your content.
Agree that not all will want the deep dive into the tech aspects. As a former surgeon, I was accustomed to learning to use new tools in the OR, but awareness of models that learn continuously (as opposed to "locked" systems) matters: it is analogous to an instrument that could change in your hands as you use it. Thanks for pointing out the potential to amplify bias embedded within data sets; we must be vigilant about conclusions. It's also an important reminder that providers contribute to some of those data sets as we work in EHRs, so we should be thoughtful about our entries.
Agree that awareness of both power and limitations is key. Your comments about the critical nature of skills in managing information align with current efforts to develop the #MasterAdaptiveLearner.
Glad to see young physicians leading in this space to amplify the physician voice during development, in order to best serve patients. And it’s great to hear your excitement about being a resident - no better profession!
I agree with the above that, at a high level, all medical students should be taught the utility, limitations, and biases inherent in AI. This is very applicable to every specialty and will allow students/future residents to be involved in AI projects without a strong technical background.
I also agree that not all students will want to deep dive into the technical aspects; however, I think it is valuable for everyone to familiarize themselves with AI terminology/design (e.g. feature representation, train/validation/test sets, cross-validation, calibration, etc.). This definitely goes a long way in bridging the gaps between technical and clinical teams.
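Of the terms listed above, train/validation/test splits and cross-validation are the easiest to make concrete in code. This is an illustrative stdlib-only sketch of k-fold cross-validation: the data are split into k folds, and each fold serves once as the validation set while the rest form the training set.

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, val_indices) pairs, one per fold."""
    indices = list(range(n_samples))
    # Distribute any remainder across the first folds so sizes differ by at most 1.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

# 10 samples, 5 folds -> each fold holds out 2 samples for validation.
folds = list(k_fold_indices(10, 5))
```

A model is trained k times, once per split, and the validation scores are averaged, giving a more stable performance estimate than a single hold-out set — the kind of shared vocabulary that helps clinical and technical teams communicate.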
Thanks for the thoughts!
Your responses made me wonder - in addition to knowledge, are there relational factors that enhance a user’s interactions with AI? What are perspectives or attitudes of the successful user? What makes for a good human-AI team?
Communication is the essence of any great team. From the human side, the ability to give feedback to AI systems will help them become better over time. From the AI side, explainability will be helpful to map machine logic to human logic. I gave a TEDx talk about how AI explainability enables new discoveries in medicine!
youtube.com/watch?v=HrKzXLgGoh...
Dr. Kundu, this video has been an excellent resource! I share it regularly as part of informal AI onboarding for staff who ask me for resources to learn more.