As the future of medicine, medical students, residents, and young physicians will be impacted by the rise of AI across the industry. Join the discussion and let's talk about the consequences of AI, how we are engaging with the technology, and how we are preparing for it in clinical practice.
We appreciate the expertise and time that all of the panelists have generously shared on this topic. Your leadership and dedication to patients will serve as an important example to others. A special thank you is extended to Ajeet Singh, MPH, the medical student leader who made this panel possible, spending many hours planning, recruiting the incredible talent and experts assembled here, structuring the questions, and working with AMA staff. We thank him and all of you for advancing the ideals of the profession.
How do you communicate with medical students and physicians about why they should care about AI in healthcare?
Many medical students and physicians are skeptical of hyperbolic statements made around AI - and rightfully so. While several AI-powered tools to help physicians have been developed, such as those that utilize computer-aided diagnosis (CAD) to automatically detect imaging findings, implementation and prospective validation of these tools remain important subjects that have not been addressed in many cases. When discussing AI in healthcare with colleagues, it is important to offer a nuanced perspective - one that rightfully boasts of incredible advances and opportunities while acknowledging the limitations of current technologies and remaining cognizant of the process required to move from research paper to real-time implementation. Given the number of moving pieces involved, it is important to show healthcare providers that the AI movement in healthcare is something they can and should be a part of. The integration of AI tools will fundamentally alter the workflow of practicing physicians in every specialty, and building new models is only one part of what it will take to build AI tools that successfully improve the lives of the healthcare providers who will work with them every day. No healthcare provider should feel that AI is a foreign concept built from incomprehensible statistical models - instead, we should educate others about the ways AI will impact medicine and how everyone can advocate for themselves in the midst of this changing environment.
Medical students and physicians in training should expect AI to influence their clinical practice in the coming years. It is essential that we continue to generate interest in AI design and implementation so clinicians can ensure the physician-patient relationship remains at the core of each clinical encounter. One of the great frustrations of the EMR is that the computer/EMR can become the center of a visit due to burdensome entry requirements. AI technologies that are designed to keep the patient, rather than the algorithm or device, at the center of a visit will be much more welcome in the clinical setting. Physician trainees should use their voices to promote the AI systems they think would allow them to give the best care to their patients at an individual and health system level.
Because the medical profession has always been an early adopter of technology, from heart monitors and ventilators to lab tests and new biologic drugs. X-rays were immediately applied to see bones in the human body. Shortly after nuclear magnetic resonance spectroscopy was discovered, magnetic resonance was used to image the human body. One of the most rapidly emerging technologies today is machine learning, and it is only natural that the constantly evolving practice of medicine will incorporate these tools.
Also, the technology industry is aggressively suggesting that machine learning could one day eliminate doctors from the picture. This is unlikely to be the case, but to have a seat at the table, we must understand the capabilities and limitations of machine learning. It may change healthcare delivery, but it will not replace doctors, just as nurse practitioners have not replaced doctors; rather, artificial intelligence tools might play a complementary role in healthcare delivery in the future.
What top three concise takeaway messages, with specific examples, would be of greatest value to share with medical students about AI, to help cultivate a unique medical student perspective and a deeper understanding of the careers they will be entering?
Dr. Heine is an AMA champion of moving the needle forward in the discussion around the intersection of health and technology. She is able to amplify your priorities and views. The dialogue has been rich. We would welcome your elevator speech on what is important to you in this area: professional development and training? Regulation and oversight? Research funding? Payment, liability, and ethical guides? We welcome your feedback.
Sylvia, this is an interesting question. We are talking about the synergism of two fields with very different cultures. On one hand, the field of computer science has been relatively unregulated. Innovation is the name of the game. College dropouts like Bill Gates, Steve Jobs, and Mark Zuckerberg are venerated. Only relatively recently, in the wake of privacy breaches, have we seen regulations like the GDPR. In contrast, the medical curriculum is highly structured and regimented, marked by standardized exams and licensing every step of the way. We use checklists and clinical criteria, and a new technology must clear numerous regulatory bodies before it can be used on patients.
To merge these cultures, we do need research funding so that innovation can be driven by physicians, scientists, and researchers. Otherwise, it will primarily be dictated by industry and market forces.
Also, a word of caution. Would you be worried if gene-editing tools like CRISPR were freely available for anyone to use? What about microfluidic chips (i.e., Theranos)? I would, because these emerging biotechnologies - while they have promise - need to be carefully designed, validated, and grounded in firm science before they can be unleashed on the public. That requires technical expertise. Today, open-source AI software is freely available on the internet and there are many online AI tutorials - now anyone can access AI tools. But I worry about that, because AI tools and health data in rookie hands can be disastrous for both patients and public trust.
AI in health is still a very young technology and we need to focus first on R&D before regulation to make research tools practical.
Dr. Kundu, thank you for this key insight and highlighting a priority area we need to message around. A related theme concerns access to data with appropriate notice and consent models outside of commercial entities to support research as well. I think it is all too easy to gloss over the need for research dollars, but it is shortsighted when we fail to build an infrastructure and capacity in the public commons to support development of generations of innovators.
How do we address the various stages where bias is introduced into AI systems (ideation, design, development, validation, and deployment), and how do we specifically advance equity and fairness? What role can standards or regulation play?
When thinking about this question, two uses of AI in healthcare pop into my mind - research and business. In research, AI should be subjected to a peer-review process similar to that for traditional research articles. Researchers should be able to explain the methods they used to develop their model as well as provide demographic information on the population on which the system was trained. This process would allow readers to identify points of bias or shortcomings and improve the overall quality of AI research. Data on underrepresented minorities seem to be a limiting factor in some AI systems, especially in identifying social determinants of health. To alleviate this issue, I believe healthcare leaders, as well as public policy leaders, need to find ways to bring healthcare to the community - for example, bringing blood pressure clinics to barbershops or partnering with religious leaders.
From my knowledge of AI in business, I do not have an optimistic outlook on transparency. In industry, AI is used to maximize profits for companies. AI can play a similar role in healthcare and can be easily integrated by insurance companies or EMR companies. These companies have an inherent financial motive to have accurate and precise AI models that improve profits, whereas a poor AI model will lose money. However, models can work for 80% of a population and fail for the other 20%. Regulations for AI in healthcare need to require high rates of success across communities. The margin of error is small when it comes to healthcare; when AI fails in medicine, people's lives are at stake. I fear that businesses will keep their AI models as proprietary business property and not share their information with the public or researchers. In a capitalistic economy, these companies have every reason to keep their models secret. Companies that use AI models should have an internal audit system, and when failures occur, they need to be held responsible.
Addressing the gender data gap is key to advancing equity in AI systems. We need to not only ensure that our data sets represent diverse patient populations, but also that the teams of clinicians and scientists creating AI technologies bring heterogeneous perspectives to this work. A 2018 Global Gender Gap Report by the World Economic Forum found that only 22% of AI professionals worldwide are female, compared to 78% who are male. We are at an exciting moment in the expansion of AI and innovators at all levels of development need to ensure that these new technologies do not exacerbate existing gender inequality. One way to prevent this is for corporate and educational leaders to intentionally bring together collaborators of all genders to prevent amplification of gender bias when AI products are implemented in the clinical setting.
For anyone interested in the gender data gap in AI and in society in general, Invisible Women by Caroline Criado Perez is an engaging and compelling introduction to this issue.
I'd like to echo Dr. Hock's points above.
National and international standards and regulations are essential and basic steps toward making sure we have processes that promote ethical and practicable AI. But, for the foreseeable future, people will still be developing, marketing, and deciding to implement AI algorithms. In that vein, there's a need to involve individuals from traditionally underrepresented groups and communities; there's a need for stakeholders in nearly every sector investigating AI applications to learn about the societal cost of algorithms that don't work as well for certain groups. Understanding the historical and social forces that bias the data that's out there and that we continue to collect is key to knowing how individuals can have a keen eye for possible downsides as they develop AI technologies. One positive example that comes to mind is MIT's AI algorithm to stratify breast cancer risk that is equally accurate for black and white women. pubs.rsna.org/doi/10.1148/radi...
On the ground, increased awareness of bias in our society and increased representation allow for greater insight throughout the processes of ideation, design, development, validation, and deployment. For example, if we know bias drives racial disparities in maternal mortality, we know what to watch out for and test for--even if something isn't explicitly captured by standards or regulations. Under 11% of men and women in science and engineering jobs are Black or Hispanic.
nsf.gov/statistics/2018/nsb201...
The AMA advocates for standards and regulations to mitigate bias and for increased representation in our ranks. Assuring shared AI nomenclature, transparency, reproducibility, data privacy, real-world validation, and an eye for bias makes for a better AI environment. I'm excited to see the AMA's continued work to reach consensus on these issues.
Dr. Hock, all great points. To add to the sources to consider: during one of the calls set up by Ajeet, you brought up FAT ML. We agree it is a great resource for exploring the risks associated with ML systems. I peruse it regularly given the rich content.
Alex, I agree that the business applications of AI systems that are both proprietary and use black-box methods are troubling. We need additional policies and laws to compel greater transparency. The recent Science article detailing the racially disparate impact of an AI-enabled decision support tool is a prime example. Optum now claims that the system merely aided decisions but did not make final decisions - essentially attempting to pass the liability buck to the hospitals and physicians who relied on its software. Optum is now being investigated by the State of New York for violating anti-discrimination laws. We need clear expectations and standards dictating that fairness and harmful discrimination are assessed in all of these models.
Sylvia -- thank you for sharing the link to the Fairness, Accountability, and Transparency site. I found the Principles for Accountable Algorithms and a Social Impact Statement for Algorithms (fatml.org/resources/principles...) to be an especially helpful resource -- particularly in ensuring algorithm creators discuss a plan during the development process for potential unintended social harm, including an ongoing monitoring process.
I agree with the above. We have to simulate the real-world environment as much as possible when designing AI tools. For example, a model trained on patient data in Thailand cannot be expected to generalize to patients at the Johns Hopkins Hospital where I work. A model trained on mostly white patients will not necessarily generalize when used in an African-American population. We talk about AI transparency as a surrogate for trust. However, there are many drugs in medicine whose mechanism of action is not “transparent.” For example, lithium is well-validated in bipolar disorder, but its mechanism of action is not well-characterized. So, we need more than transparency policy alone – we need validation over time.
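To make this concrete, here is a rough sketch of the kind of external-validation check I have in mind (the cohorts, features, and numbers are simulated and purely illustrative):

```python
# Rough sketch: a model built on one cohort, then checked on an "external" cohort
# drawn from a different (simulated) patient population.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated "internal" cohort (the population the model was developed on)
X_internal = rng.normal(size=(1000, 10))
y_internal = (X_internal[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Simulated "external" cohort with a shifted feature distribution and noisier outcomes
X_external = rng.normal(loc=0.8, size=(500, 10))
y_external = (X_external[:, 0] + rng.normal(scale=1.5, size=500) > 1.2).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_internal, y_internal)

auc_internal = roc_auc_score(y_internal, model.predict_proba(X_internal)[:, 1])
auc_external = roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])
print(f"Internal AUC: {auc_internal:.2f}  External AUC: {auc_external:.2f}")
# A meaningful drop on the external cohort is exactly the kind of
# generalization failure that prospective validation is meant to catch.
```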
In the current state of AI, I believe researchers or data scientists may be best suited to teach AI to medical students. If medical programs have clinicians who are developing or using AI in their practice, these would be the ideal teachers. Since AI is still relatively new, researchers and data scientists currently spend the most time working with models and data. At my medical school, we have developed a GUI-based data science center, eliminating the need for students to learn how to code. In conjunction with our data science center, we teach students basic statistics, focusing on the most common statistical analyses they'll use. A problem-based learning method can be used to teach students AI applications. I think this is a great way to ease students into AI/data science. We are working to get our data science curriculum up and running for our medical students, and I'd be happy to comment on our experiences implementing this curriculum in the future. If students have a more deep-seated passion for using and developing AI, they should enroll in an immersive course where they learn the nuances of AI and learn how to code.
Ideally, teaching medical students about AI should come from a team of data scientists, clinicians, ethicists, and hospital stakeholders. Each brings significant domain expertise that is crucial in designing, developing, and implementing AI in a medical setting.
In thinking of a curriculum for AI in medicine, I need to give another shout out to the Duke Institute for Health Innovation (DIHI). They have had a fantastic vision for introducing AI to the next generation of physicians and their scholarship year is more or less a curriculum for AI. Some important parts include:
1) Understanding data sources - this can include discussions about EHRs, imaging, etc.
2) Basics of data science - review the data science workflow and experimental design, introduce technical skills to cover data exploration, cleaning, and preliminary analysis. This does not have to be a technical deep dive, but some basic tools to understand data sets better.
3) Predictive analytics - understanding the difference between different types of learning (i.e. supervised/unsupervised/reinforcement) so that there is a mental framework when assessing clinical problems and possible solutions (see the short sketch after this list).
4) Designing a study - most effectively done by actually being involved in interdisciplinary projects. Through DIHI, I was able to be engaged in 2-3 projects throughout the year and better learned how to technically design an experiment and align with the right clinical groups to have the most measurable impact.
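As a toy illustration of the supervised/unsupervised distinction mentioned in item 3 (this example is purely illustrative and not part of the DIHI materials), both paradigms are run on the same simulated data below:

```python
# Toy illustration of two learning paradigms on the same data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Simulated data: 2 features, 3 underlying groups
X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Supervised learning: labels (y) are available, and the model learns to predict them
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised accuracy on training data:", clf.score(X, y))

# Unsupervised learning: no labels; the algorithm looks for structure on its own
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print("Cluster assignments for first 10 samples:", clusters[:10])

# Reinforcement learning (not shown) instead learns from rewards
# received while interacting with an environment over time.
```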
The Duke program is great. Are there other examples, and what can be done by other medical schools and health systems to increase knowledge?
What should physicians consider when evaluating software for adoption into their practice, particularly if it is marketed as using "AI"? What, if anything, do we need to know about the algorithm(s) powering the application? And should there be a standard, dynamic way of presenting this information so that end-users understand deployment requirements and limitations?
As physicians and life-long learners, we should be trying our best to remain up-to-date on the latest breakthroughs in medicine. If we are going to be using software on a regular basis in our practice, then it would be ideal to have published research on its efficacy, which we could interpret and critically analyze, similar to clinical trials assessing drugs or other interventions. However, there are several limitations of AI that could make this challenging. As opposed to traditional statistical models, whose covariates are carefully selected for a clinical study, many of the algorithms used in AI are "black box" in that we do not fully understand why or how the algorithm is coming to a particular decision. Moreover, these algorithms are not static but are capable of constantly evolving with new information when applied to new patient populations, and thus there can be a never-ending process of continual refinement and optimization. In addition, using randomized controlled trials to assess breakthroughs in AI, which tends to be a fast-paced and disruptive industry, could hinder technological development and lead to an inability for healthcare to keep up with present-day standards.
Regardless of these limitations, I believe that from a broader point of view, we should be doing our best to ensure the integrity of any algorithms that are used for medical decision-making. The extent to which this will be performed by regulatory bodies remains uncertain; more focus may ultimately need to be placed on post-market monitoring as opposed to early development.
I also believe that there should be a standard benchmark against which multiple algorithms could be tested. This would likely be domain-specific given the wide variation in algorithms in health care – for example, for assessing the quality of advanced processing algorithms in neuroimaging, developers have created a benchmark tool with standardized performance metrics: ncbi.nlm.nih.gov/pubmed/266612...
What were some of the greatest challenges you faced in the course of your work on AI? What strategies did you develop to help overcome those barriers?
One of the greatest challenges that our team at DIHI faces with our machine learning projects is the implementation piece. Healthcare systems have a difficult time with change, mostly due to the sheer effort it takes to disrupt systems of that size. One of the ways our team at the Duke Institute for Health Innovation (DIHI) works to make this process smoother is by identifying all of the correct stakeholders and inviting them to be a part of project development. Examples include working with rapid response team nurses to improve triaging software and spending time with bed flow management to understand how open beds are triaged. This type of on-the-ground, integrated work is absolutely necessary when trying to gather buy-in around new interventions.
One of the biggest challenges I've faced in working with AI is my lack of a computer science or statistics background. In college I majored in neuroscience and biology and stayed on the pre-med track, where I did not learn how to code or take extensive math classes. In medical school, my studies were focused on the pathology, pharmacology, and physiology of diseases. As many people may have a similar background to mine, it felt very daunting trying to figure out where to start learning AI.
These are my tips for individuals who want to be a part of AI. 1) If you want to gain a basic understanding of AI, watch YouTube videos on AI. 2) Figure out what role you want to play in AI. Do you want to code? Would you rather develop and interpret models? 3) Reach out to individuals or research teams that are working with AI to be immersed in the culture. I honestly felt out of place at first, but I realized that the medical knowledge I brought to the table was extremely useful to the team. If you stick with it, you will get better, and you will grow.
Although various forms of AI have existed for many years and are ubiquitous in certain sectors (finance, marketing, etc.), practical integration of AI into the workflow of healthcare providers remains a relatively nascent project. Compared to other fields where AI has been a staple for some time, protocols and pipelines that have long since been standardized and tested in other domains have not yet been fully adapted for the needs of medicine. In particular, one of the greatest issues facing AI in the medical field today is simply the dearth of usable data. That is not to say that the data do not exist - electronic medical record systems have cataloged incredible amounts of unstructured text, imaging, and structured data with the potential to unlock countless insights and improve our collective predictive capacity. While AI, and especially modern deep learning approaches, appear to work as if by magic, the truth is that they cannot create useful mappings without high-quality, labeled datasets. In the medical field, we are only now starting to define the kinds of domain-specific tasks that are worth pursuing. With these tasks well-defined, the next step is the collection and curation of large datasets that can be adapted for use by current machine learning architectures. Unlike other datasets, such as the famously expansive ImageNet dataset that catalyzed the development of modern deep learning approaches, proper labeling of medical datasets requires niche expertise as well as additional fail-safe measures for ensuring dataset quality, given the high cost of machine error. Furthermore, existing regulations regarding medical data acquisition, storage, and management create large barriers to entry and significantly slow the process of transitioning from clinical question to model production and application deployment. In order to address these challenges, well-defined and easy-to-use pipelines for managing, storing, and labeling data will be of paramount importance.
As a busy resident, finding the time to learn the skills necessary for working within the constantly evolving field of AI has been challenging. While I have elective weeks every so often, there are no built-in blocks in my schedule that would allow me to easily take a structured course on deep learning, for example. And while I could take a dedicated year to perform research and take this coursework, the time available to actually carry out the research would then be quite limited. Similar to Alexander's post, I first turned to YouTube videos, which actually provided a great introduction to the concepts I wanted to learn. Thereafter, to gain a more detailed understanding, I took a few relatively short but high-yield online courses, which were actually quite affordable.
With regards to actual implementation, the lack of high quality data and the labor-intensive process of labeling data is a notable barrier. As a medical professional, my access to medical data is better than someone who does not work in the healthcare setting, but nonetheless obtaining large datasets (which are required to develop a robust model) can be challenging. There was recently an interesting article about a Stanford physician who was thinking of innovative ways to address this issue (link here: arstechnica.com/tech-policy/20...). Questions of data ownership and privacy have traditionally limited any widespread sharing of medical data (and for good reason), but to improve the delivery of healthcare we must be able to find a sustainable way to overcome these obstacles.
Lastly, from an even broader perspective well outside of my own personal projects, the deployment of AI models within a field whose technology adoption rate is rather poor relative to other major industries is also a significant challenge. The authors of this paper in npj Digital Medicine (nature.com/articles/s41746-019...) describe an "inconvenient truth": the simple availability of technology capable of improving the delivery of healthcare may not be sufficient to overcome existing political and economic factors and current practice norms in our fragmented healthcare system. In addition, validation of existing algorithms is required every time such a model is deployed in a system whose patient populations were not adequately represented during algorithm development.
I have been able to appreciate the challenges both at a personal and at a systems level.
Personally, the biggest challenges for me coming from a medical background were to 1) understand how to design AI projects/experiments and 2) gain technical skills and experience. I was fortunate to work with the Duke Institute for Health Innovation (DIHI), which allowed me to take a systematic approach to overcoming these challenges. First, I was able to start the year learning the basics by working through online courses and with technical experts who are part of the DIHI team. This allowed me to gain the necessary CS skills and also an intuition from more experienced coders. Then, over the past couple of years, I have been able to join multiple projects through DIHI that have allowed me to implement what I learned and solidify my understanding. It has been immensely valuable to put my learning into practice.
At a systems level, the main challenge I see is implementation in the hospital. This is an aspect of every project DIHI is sure to identify and address early and often. Important questions to ask are: 1) who will be using this AI tool, and have we directly understood their questions and concerns (nurses, doctors, patients, etc.); 2) which stakeholders have the power to implement change; 3) how should the tool be used (if the tool shows this, then what should happen)? It is important to have these deliberate discussions for successful incorporation.
Access to standardized patient data took longer than expected. It is important to work in a team that includes certain members with clinical expertise, others with data science expertise, or people with both!
We have heard from most experts in this space that implementing change and the conditions of deployment are essential components of safety and efficacy assessment. Your comments are important to underscore and share broadly.
Wasif, you have waded into the great debate we have heard between those who advance the views you have outlined and those who think large, "messy" datasets (meaning quite possibly incorrect, incomplete, etc.) are sufficient for clinical applications. You have provided an extremely helpful insight.
My medical school did not have an AI component in the curriculum. Luckily, I had a solid electrical engineering background prior to medical school and was able to specialize further during my PhD. I think practical projects are the best way to integrate AI into the curriculum and learn how they work!
The concept of AI was not addressed formally within my medical school curriculum. Being an interest of mine and wanting to learn more, I applied for a medical school scholarship program run by the Duke Institute for Health Innovation (DIHI). This institute works at the intersection of technology, innovation, and healthcare by introducing implementation projects each year to Duke's health system (dihi.org). Each project is developed through an annual grant program, and project teams are designed around the issue and proposal. A diverse team of experts - including healthcare providers, software engineers, data scientists, project managers, medical students, computer science students, Ph.D. statistics students, and population scientists - is then built around the project to maximize success. I think that this internally embedded method and structure is an amazing way to introduce new ideas into the health system and stands as an example for other academic institutions to follow. Furthermore, by integrating many different types of students and professionals to work on projects together, DIHI is preparing not only future physicians for this type of work but also future data scientists and statisticians as well as existing healthcare providers.
Unfortunately, my medical school did not incorporate AI into its curriculum. I have a personal interest in AI and was able to build my skills during a year of research. My school now recognizes the importance of creating data scientists and is in the process of creating a certification program for medical students. Ideally, an interdisciplinary team of biostatisticians and/or data scientists, computer scientists, and clinicians familiar with AI would be needed to create an effective curriculum. Each would play an essential role in writing the code, developing the models, and framing the models to find clinically relevant answers or conclusions.
Like my fellow trainees, above, I did not receive any formal education on AI in my medical school curriculum. I studied psychology as an undergraduate and have more experience with survey research than coding. Fortunately, working with faculty engaged in AI work during residency has greatly improved my understanding of its potential impact on our lives as clinicians.
I think all medical students would benefit from a formal introduction to AI and its emerging clinical applications. The opportunity to collaborate across disciplines on AI projects as a medical student as Kristin mentioned above seems like it could be a transformative experience for students with deeper interest.
Very interesting insights and responses. If you are interested in the intersection of AI and medical education, please view mayamd.ai/mayaedu/ when you have a second. We'd love some of your help if you are interested. It seems as if some schools have embraced AI in various specialties but not very broadly, generally speaking; hence, medical schools still seem to use their old model. Personally, I think that fostering an environment of innovation and learning is critical for today's students to keep them engaged, but there has to be a balance too (the best of the old with the best of the new). After all, the end goal is to produce the best-trained clinicians who can quickly assimilate into our dynamic digital world.
My medical school (University of Miami) did not have a formal curriculum on AI, but it does offer a combined four-year MD and MS in Genomic Medicine. That master's degree includes data science-focused coursework.
I am not convinced that a medical school should have an AI-specific curriculum. I feel that AI will naturally become incorporated into the medical school curriculum based on its importance in medicine. I think the critical question is whether medical students should be taught to do the actual AI development or just learn how to interpret AI output and utilize it. For example, to me all the equations and algorithms on MDCalc are examples of AI, and most medical students and residents use that on a daily basis. Any time a genetic test is done, a report comes back from a company about whether any mutations were found and whether those are pathogenic and should be concerning. That is also an example of AI. To me, the output of AI will ultimately become so incorporated into medicine that a specific curriculum really won't be necessary. So, in terms of teaching medical students, residents, and young physicians, I think the best way to teach them about AI is to show how AI is already being utilized by them or how it will be in the near future.
We appreciate all of our expert panelists. Thank you for joining the discussion. There is a tremendous amount of hype about AI. What did that term mean when you first heard the term and has your view of its meaning changed? What are key systems, methods, and terms that need shared meaning so we can ensure we are having an apples-to-apples conversation about AI policy?
You are right Sylvia – artificial intelligence is an incredibly broad field. The Turing machine was invented over 70 years ago. (In fact, the modern CPU is based on the Turing model of computation). The field of artificial intelligence emerged not long after that. Broadly, the field of AI focuses on creating systems that can take input from the environment and produce actions that can mimic the cognitive outputs of human intelligence. AI has actually been around for many years right under our noses, from Siri to Pandora to Amazon. Medicine is in some way experiencing the boons of AI later than in other fields.
Terms like deep learning and machine learning and artificial intelligence are often conflated in the popular media, so let me take a moment to clarify. AI is the broadest umbrella. Machine learning is a sub-field within AI. Neural networks are a type of machine learning algorithm. Deep learning is done using a special type of neural network.
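To make that nesting concrete, here is a minimal sketch (the dataset and model choices are illustrative only, not a specific clinical example): both models below are machine learning, and therefore AI, but only the second is a neural network of the kind that deep learning scales up.

```python
# Both models below are "machine learning" (and therefore fall under the AI umbrella);
# only the second is a neural network, the building block that deep learning scales up.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classical machine learning: logistic regression on standardized features
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
lr.fit(X_train, y_train)

# A small neural network; "deep" learning uses much larger, many-layered versions
nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
nn.fit(X_train, y_train)

print("Logistic regression accuracy:", round(lr.score(X_test, y_test), 2))
print("Small neural network accuracy:", round(nn.score(X_test, y_test), 2))
```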
I first thought of AI as sci-fi: some computer that could independently decide to act and take over the world. Now, I think AI means a set of algorithms that perform effectively under a given environment. These can be heuristic-based. But most of the excitement is around learning-based methods that find patterns in existing data.
Methods and terms that need shared meaning are:
FAIRNESS
What do we want fair and unbiased AI to look like in healthcare? There are at least 21 mathematical definitions: docs.google.com/document/d/1bn...
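To make one of those definitions concrete, here is a toy sketch (the predictions and group labels are made up, purely for illustration) of two common group-fairness checks, demographic parity and equal opportunity:

```python
# Toy fairness check on made-up predictions for two demographic groups (A and B).
import numpy as np

group  = np.array(["A"] * 6 + ["B"] * 6)
y_true = np.array([1, 1, 0, 0, 1, 0,   1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0,   1, 0, 0, 0, 0, 0])

for g in ["A", "B"]:
    mask = group == g
    # Demographic parity: rate of positive predictions in each group
    positive_rate = y_pred[mask].mean()
    # Equal opportunity: true positive rate (sensitivity) in each group
    tpr = y_pred[mask & (y_true == 1)].mean()
    print(f"Group {g}: positive prediction rate = {positive_rate:.2f}, TPR = {tpr:.2f}")

# Large gaps between groups on either metric suggest the model treats the
# groups differently; which gap matters most depends on the definition chosen.
```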
TRANSPARENCY
What sorts of explanations are needed? What kinds of questions do doctors and other care providers need to be able to ask from an AI? The field of explainable machine learning is only getting started, and may not answer the questions we'd like answered. (christophm.github.io/interpret...)
RISKS
We need to assess the full set of risks and costs of implementing an AI system in practice. We are learning that algorithms can be fooled by changing even a single pixel. We are learning that they can perform worse for populations they were not trained on.
It will probably take dedicated specialists to monitor and maintain installed systems. Excessive computation for some types of AI models can also use unnecessary amounts of energy and produce substantial carbon emissions.
ADDED VALUE AND BENCHMARKS
There is a burden of proof that AI will improve care. Models are often compared against trivial benchmarks. Any deployed algorithm should beat valid and reasonable benchmarks to justify its implementation (arxiv.org/pdf/1707.06289.pdf).
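One simple way to operationalize that burden of proof, sketched below with simulated data (the models and numbers are illustrative only), is to always report a trivial baseline alongside the proposed model:

```python
# Sketch: report a trivial baseline next to the proposed model so the
# added value (if any) is visible before anyone considers deployment.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Simulated tabular data standing in for a clinical prediction task
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trivial benchmark: always predict the class prior (AUC will hover around 0.5)
baseline = DummyClassifier(strategy="prior").fit(X_train, y_train)

# Proposed model
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

for name, clf in [("Trivial baseline", baseline), ("Proposed model", model)]:
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
# A fair comparison would also include the current clinical standard of care,
# not just a naive statistical baseline.
```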
WHAT ALGORITHMS COUNT AS AI?
Methods as simple as logistic regression and as complex as deep learning fall under the typical paradigm of machine learning. Some models are better understood than others and may be regulated differently.
I first discovered the concept of AI watching I, Robot and Jeopardy, kind of like Jonathan! When I noticed people begin to talk about AI in normal social situations, there seemed to be an air of fear around it. When Watson won at Jeopardy, I was impressed—the victory seemed to come with very intentional hype. I had no idea how AI worked or what it was. To me, AI seemed inevitable and out-of-reach.
Working at the AMA now, though, and taking a deep dive into AI policy issues, I have developed a sense for different use cases, the levels of AI sophistication, and its real pros and cons. One of my projects has been to assist in our AMA's effort to develop a set of shared definitions for AI methods and sub-classifications. We have found that AI stakeholders often are not speaking the same language when it comes to the technology. One person discussing AI may be referring to a complex rule-based algorithm, while another might take AI to refer exclusively to algorithms that learn from the data themselves.
One of my ongoing projects involves compiling key terms and definitions from different standard-setting bodies and national think tanks. I like the simplicity of your explanation above, Shinjini!
And since we have so much expertise right here on this forum, please do share various definitions and terms that you associate with AI and machine learning.
Immediately below, I’m linking our working document, but I invite you to post definitions and share your preferred sources.
drive.google.com/open?id=1o28p...
How can we leverage the power of AI to improve education across the continuum of medical training? What tools could make education more individualized or could improve feedback processes?
That is an excellent question, Dr. Lomis! I often think about this topic and how we can personalize medical student education so that students learn more effectively. Personally, I feel that having students use resources based on their learning styles could make their learning more efficient. Once the proper learning style of a student is identified, AI could be used to compile a list of resources that have helped similar students with that learning style succeed. Also, AI could be used on a per-school basis to predict a student's success on board exams and whether intervention is needed - ultimately helping the student.
Agreed. I am also curious to think how it may help with clinical education, since we struggle to ensure each learner gets the right mix of exposures and experiences to develop their competency.
To be even more specific, while there is a lot of excitement about AI right now, I believe that computer vision in particular is the area where current algorithms are most optimized to perform. Of all the potential applications of AI, I think it has one of the greatest potentials to change the current manner in which we practice and learn medicine.
With regards to medical education, students often want to absorb as much information as possible, including all the subtle details that might be overlooked at first glance. By using algorithms that are proven to perform as well as or better than pathologists, for example, one could generate an accurate label for every cell on an entire digital pathology slide. Students could then scan across the slides, continually testing themselves against ground truth labels as opposed to being restricted to a single image in a textbook that is only partially labeled (and often only for the more obvious elements in the image). Similarly, for radiographic images, algorithms could be used to label all structures with high confidence, and such software could potentially even be applied to radiology images that medical students encounter during their clinical rotations. I have sat down with students many times to walk them through neuroimaging for patients that we have on service, but my time is limited and such images often take a significant amount of time to study and learn from. With such software available, however, the students' learning could be reinforced in their own free time. Another example outside of computer vision: natural language processing could be used to reinforce associations between various topics in medicine, providing more contextual information for a particular topic and thus making it easier to remember.
Going off of Alexander’s response, software receiving constant input from an active learner could then identify situations/topics that the student needs more practice with, and then could generate appropriate learning scenarios for the student. Overall, with AI, I believe that students could be more engaged while learning and thus could have a more tailored educational experience.
These are great ideas.
By the way Dan, it is great to see you are still leading in this space!
I really like those ideas Dan! I love the idea of developing training tools that can label imaging for education, especially for orientation to imaging intensive rotations like orthopedics, neurology or neurosurgery. I remember spending quite a bit of time this year just looking for good reference images with accurate labels to learn the basic anatomy, which are often lacking. But, if there was a program that could provide reference labels on the imaging for us to test ourselves, I think that would go a long way in smoothing out the process for new students learning the fundamental anatomy in a rotation.
Another idea I have considered in the past is applying deep-learning models to tools that specialize in spaced repetition (like Anki) to more accurately determine your strengths and weaknesses in a more flexible fashion.
Finally, an area that I think would be interesting is applying AI models to attempt to quantify clinical skills in an objective fashion (possibly in combination with virtual reality) to provide proceduralists with real time corrective feedback on how they perform tasks (suturing, line placement etc.) instead of the traditional subjective methods.
I really like both ideas suggested by Dan and Stephen regarding AI-based radiographic image learning for medical trainees. As a resident, there have been times I have found myself wishing I had a tool for individual practice diagnosing common radiographic findings. An Anki-like tool with a spaced repetition feature would be a great resource for those who are rusty on these skills at any stage of training and don't have easy access to formal instruction.
This is such a cool thread! I hope the idea that AI techniques have the capacity to tailor and personalize med ed really catches on!
To throw out a few more ideas: I love that you mentioned Anki, Steven! Med students definitely see that, more and more, medical education is happening on question banks like UWorld, too. The amount of content budding and practicing clinicians have to learn is, of course, not slowing down at all (e.g., a new CF drug and the measles effect on antibody titers just this week!). ncbi.nlm.nih.gov/pmc/articles/... According to estimates from the early 2000s, it would take 29 hours a day to get through all the medical literature.
Oftentimes, medical students across the country spend time answering the same questions (or using the same Anki decks). That's a lot of performance data. Through the application of a little AI, we could feasibly enable a setting in these apps whereby our performance on a given set of questions or cards sends us down a tailored path based on the concepts we are likely to struggle with, maximizing our study time (and hopefully increasing time for wellness :) ).
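As a purely hypothetical sketch of what such a tailored path might look like under the hood (this is not how any existing question bank actually works, and the topics and numbers are made up), the next practice topic could be drawn with probability proportional to a learner's historical error rate:

```python
# Toy sketch: pick the next practice topic with probability proportional to the
# learner's historical error rate on that topic (plus smoothing so strong topics
# still appear occasionally).
import random

performance = {
    # topic: (questions_attempted, questions_correct) -- made-up numbers
    "cardiology":    (40, 34),
    "renal":         (25, 14),
    "biostatistics": (30, 27),
    "microbiology":  (20, 11),
}

def next_topic(perf, smoothing=0.05):
    topics = list(perf)
    weights = []
    for t in topics:
        attempted, correct = perf[t]
        error_rate = 1 - correct / attempted
        weights.append(error_rate + smoothing)
    return random.choices(topics, weights=weights, k=1)[0]

random.seed(0)
print([next_topic(performance) for _ in range(10)])
# Weaker topics (renal, microbiology) come up more often, but nothing is ever excluded.
```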
Importantly, it takes a lot of study time to gather and organize relevant information and notes (especially for those who watch lecture videos). Some people I know then turn this information into questions for themselves. While this process itself can be useful, applications of natural language processing (NLP) could automate note-taking, organization, and ultimately the creation of practice questions. This would be analogous to ambient clinical intelligence, the NLP technology that may save clinicians hours by summarizing clinical encounters in the EMR.
Perhaps we could spend more time finding resources to feed this algorithm to easily create personal Qbanks that reflect our interests and sticking points.
I think there's a lot of potential to game-ify med ed like this as well.
You all have identified important ways to innovate in order to educate. What infrastructure/platform do you need to support ideation (like you have done here in this thread), design, development, validation and deployment leveraging ML and other AI systems while still driving accuracy and validity of content?
Thanks Steven, Lauren and Hari for keeping this thread going. Great thoughts!
I like that NLP came up - any potential applications to help track your clinical experiences and process feedback? There’s been interest in focusing clinical assessments more around narratives (would be easier for assessors than long forms). If we could capture lots of quick, real-time comments from voice memos, would any tools be able to sort for competencies and perhaps even let assessors know if you need more feedback in a given area?