Augmented intelligence (AI) in health care is complex, and this space is evolving rapidly. This discussion is one of the initial steps in the AMA's efforts toward educating physicians about the changes AI systems and methods will likely bring to clinical practice and how physicians can leverage them to support and achieve the quadruple aim of health care. Experts in the field will cover the basics of AI, including terminology and trends, share use cases, and address key public policy issues.
What are the costs of implementing augmented intelligence/AI systems and methods in practice? How could (and should) these costs be funded?
How can augmented intelligence/AI impact the quadruple aim of healthcare?
So glad you brought this up! AI, as Cathy O'Neil describes in "Weapons of Math Destruction," has the capacity to show what GIGO (Garbage In, Garbage Out) with racially and gender-biased training datasets has done in criminal justice sentencing, hiring policies, and more. Siloed healthcare training datasets do not necessarily represent the diverse population of our planet: research subjects are often male, Caucasian, and young; other training datasets reflect privilege in healthcare access, race, age (while baby boomers with comorbidities, aged 65+, are on the visible horizon), urban residency, etc. The ability of AI to scale existing inequities is monumental. Perhaps it is time to evolve to a Quintuple Aim - explicitly adding "Equity and Inclusion" - before the AI tsunami in healthcare.
For those new to this, the Quadruple Aim stands for: improved patient experience of care, improved population health, lower cost, and clinician satisfaction.
Let me explain by starting from the principles of Safety, Efficacy, and Equity for autonomous AI, which we developed with the FDA. "Autonomous," as thus validated, implies that the clinical decision is made by the AI without clinician oversight.
Such autonomous AI allows patients better access, lower cost, and higher quality of care - the first and third aims. It opens up the potential for underserved populations to have improved access to care - the second aim - which is what occurred in New Orleans when an autonomous AI was adopted there.
After Hurricane Katrina, there was a huge problem getting timely eye care for people with diabetes; since adopting autonomous AI, month-long wait times have been reduced to a day.
npr.org/sections/health-shots/...
Finally, the autonomous AI in this example allows ophthalmologists to focus on treating those patients who need it, and allows primary care physicians to better care for their diabetes population - the fourth aim.
I agree with Dr. Abramoff’s comments on all four of the aims. I would like to comment further on the fourth aim: health care worker satisfaction, focusing on the physician. I am optimistic that in the future, physicians will embrace the role of AI in their practices, particularly once the benefits of the other three aims become apparent and proven.
However, getting to that point will require acknowledgment of the stresses this new technology could place on physicians. Considerable data show that physician burnout is increasing, and some, if not much, of that is perpetuated by electronic interfaces, especially the EHR. Integrating AI into physician practices with proper physician input will help ensure that all aspects of the quadruple aim are satisfied, and it will help make sure physicians embrace the technology.
What are the issues that will concern physicians? Physicians should not be penalized for not using this technology while the complex questions surrounding standards, usefulness, validation, regulatory oversight, liability, confidentiality, and the like are still evolving. In the same vein, the use of this technology should not be mandated for licensure, credentialing, or payor contracting at the present time. Likewise, payment should not be tied to these technologies until they are proven.
To be fair, physicians will need to step up, bring forth their questions and concerns, and take responsibility for helping address them - a role the AMA is well positioned to facilitate.
I fully expect that the challenges I pose will be answered in the short term, and the benefits of AI will become obvious. At that time, physicians will comfortably embrace AI in their daily practices, as will patients. But during the current transition phase, it is important that physicians' concerns are recognized and addressed.
Doing so will expedite, not hinder, the diffusion of these exciting technologies to patient care.
I echo my colleagues' comments on the 4+1 aims and fully support our responsibility to transition physicians who have not been habituated to the high-tech environment, particularly as their knowledge, expertise, and years of practice are worth gold in the clinical environment.
I would make an additional comment. Beyond the quadruple aim, we also need to incorporate principles of sustainability conceptualized by the more general triple bottom line (profit, people, planet). We're seeing the concept of B-corporations representing better social responsibility. These are all tied together: environmental conditions and respiratory diseases, stress (worsened by socio-economic conditions) and cardiovascular disease, and many more examples. There is, luckily, a large push by multiple entities to address social determinants of health, particularly in view of their impact on health inequalities.
As much as the advances brought by AI and technology can improve the quadruple aim, we should also place great emphasis on the triple bottom line, as the interaction between the environment we live in and the socio-economic status of the population will greatly influence the health of the patients we care for at the least, and our future on a greater scale.
Ref: hbr.org/2018/06/25-years-ago-i...
bcorporation.net/about-b-corps
“As debates about the policy and ethical implications of AI systems grow, it will be increasingly important to accurately locate who is responsible when agency is distributed in a system and control over an action is mediated through time and space. Analyzing several high-profile accidents involving complex and automated socio-technical systems and the media coverage that surrounded them, I introduce the concept of a moral crumple zone to describe how responsibility for an action may be misattributed to a human actor who had limited control over the behavior of an automated or autonomous system. Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions. While the crumple zone in a car is meant to protect the human driver, the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. The concept is both a challenge to and an opportunity for the design and regulation of human-robot systems. At stake in articulating moral crumple zones is not only the misattribution of responsibility but also the ways in which new forms of consumer and worker harm may develop in new complex, automated, or purported autonomous technologies.”
Source: Elish, M. C., Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (pre-print), Engaging Science, Technology, and Society (March 1, 2019). Available at SSRN: ssrn.com/abstract=2757236 or dx.doi.org/10.2139/ssrn.275723...
You ask a great question, and let me start on a personal note. When it came out in 2007, the following study was an eye opener for me:
nejm.org/doi/full/10.1056/NEJM...
I had been working on autonomous AI for years as a physician scientist, working on validation and establishing safety and efficacy. Fenton et al., in this stunning study, showed the crucial importance of evaluating AI in its true clinical usage and workflow integration, rather than in an artificial, laboratory setting.
FDA had cleared an AI for assisting radiologists in evaluating mammograms, based on studies in which the AI performed very well compared to radiologists reading the same mammograms.
But Fenton et al. compared actual clinical usage of the AI, where it assists a radiologist, to the radiologist without AI, in a clinical trial setting. They found that the diagnostic accuracy of the radiologist without AI was better than that of the radiologist assisted by AI. In fact, women were better off with the radiologist alone.
As Elish also shows, the interaction between clinician and AI therefore cannot be predicted from studying the AI in isolation.
This study strengthened my conviction that it is easier to anticipate this interaction with autonomous AI, and also how important it is to validate AI in the context in which it will actually be used. For example, for IDx-DR, which is intended for autonomous diagnosis of diabetic retinopathy in primary care, we performed the clinical trial in real primary care clinics, where it was operated by existing primary care staff, in a primary care-appropriate clinical environment and space, and with primary care patients.
Thus, we tried to anticipate as much as possible such effects as ophthalmology clinic-based patients being more comfortable with retinal imaging, clinical operators typically being highly proficient in imaging the retina, imaging rooms typically being dark, etc.
This is a complex question. And it deserves addressing from different angles.
We do have a professional responsibility at the front line towards our patients. We are responsible for our actions and those we supervise. That is likely to remain.
From a systems approach, however, similar to the Swiss-cheese model on the risk management side, errors or near misses need to be evaluated from a broader perspective. AI, technology, and the interaction between human and technology add another level of complexity as part of human-machine interaction (which has its own body of literature).
What these issues call for is another look at the ethical and legal framework surrounding the therapeutic environment (patient-physician-machine/data). Where will responsibility fall with a negative outcome and a failure of technology? I would argue that new rules need to be put in place to protect both the physician and the patient in that type of setting. It would be interesting to hear Sylvia's, Kimberly's, and Ashley's thoughts on this from the perspective of the law.
I agree with Dr. Abramoff that validation is the key. I would add, however, that physicians who are not developers will need the help of editors, specialty organizations, and guideline crafters to help identify where these programs are in the patient's best interest.
Dr. Grenon, I will focus on clinical applications of health care AI, but the following applies to other applications (health administration and business operations, for example) as these also impact patient health outcomes. The position of the AMA has been to ensure that public policy incentivizes rigorous clinical validation and appropriate, meaningful disclosures of not just the benefits but also the limits of AI systems. But the bedrock is that those who have knowledge of AI system risks and are best positioned to mitigate those risks must be incentivized to do so. The AMA points to the first FDA de novo authorization of an autonomous AI system - the IDx-DR system. The developer has taken out medical liability insurance. This is a legally correct and prudent step to take, and it sends a clear message to physicians, patients, and payers that the marketing is not simply hype. An important indicator of whether an AI system is simply marketing hype or performs as marketed will be whether the developers acknowledge the liability that they must legally take steps to address. Shifting risk (and liability) to physicians and patients should not be allowed, as it creates the moral (and liability) crumple zones that Sonoo highlights in a separate post.
What are some key terms and definitions that physicians and the healthcare community should understand related to augmented intelligence and AI?
Machine learning (and its more robust and specific type, deep learning, especially for medical images) is not synonymous with artificial intelligence, though the terms are often used interchangeably. Machine learning and deep learning are AI methodologies. AI, however, does overlap with data science as well as mathematics and statistics. In short, a data scientist in the present era is expected to be an all-around data miner, data analyst, mathematician, and statistician, as well as someone who is facile with AI (including machine and deep learning).
Other AI methodologies include cognitive computing and natural language processing, as well as computer vision, robotics, and autonomous systems. Cognitive computing (as exemplified by IBM's Watson cognitive computing platform) can involve a myriad of AI tools that simulate human thinking processes, while natural language processing (NLP) involves connecting human language with computer-programmed processing, understanding, and generation. Robotics, in its impressive panoply of forms, is considered part of AI, as are related autonomous systems (in the context of AI, not IT).
It is perhaps reasonable to think about AI as a "symphony" of tools: you, as the composer and/or conductor, can put various musical instruments together to realize the music you composed and envisioned. For example, many AI tools are combinations of these elements, such as machine learning and NLP combining for robotic process automation (RPA) and chatbots, or NLP aligning with machine learning for cognitive computing.
I think for physicians, the key is being able to understand the claims made about AI.
For example, 'Transparency and Accountability for the Safety, Efficacy, and Equity of Autonomous AI' contains a handful of terms that are important to understand:
Transparency: availability of scientific evidence of the design and validation of the autonomous AI to the highest possible standards. For example, during validation, the clinical decision of an autonomous AI can be compared to a single clinician, to a group of expert clinicians, to a proxy for patient outcome, or to patient outcome. Each of these forms a 'truth' or so-called reference standard, but the level of external validity differs, from worst to best.
Accountability: willingness to accept liability for any wrong medical decisions that the autonomous AI makes. For example, my company assumes medical liability and has medical malpractice insurance for its autonomous AI.
Safety: patient safety in the broadest of terms, and especially with respect to clinical patient outcome. In clinical trials this can be measured as 'sensitivity' against a reference standard, and it can and should be addressed both in design and validation.
Efficacy: are there gains in terms of efficiency for the patient or healthcare system, lower cost, etc.? This can be measured in clinical trials as 'specificity', and it can and should be addressed both in design and validation.
Equity: do the safety and efficacy apply to all patients, and not just to a subgroup, age, race, or ethnicity? Also: is the diagnosability sufficient; in other words, does the autonomous AI reach a valid clinical decision on the vast majority of patients, and not just on a small subset? Equity can be addressed in both the design and validation. (A minimal sketch of how these validation metrics can be computed follows this list.)
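To make those last three terms concrete, here is a minimal sketch in Python, with made-up example data and a hypothetical helper function (not any specific product's validation code), of how sensitivity, specificity, and diagnosability might be computed against a binary reference standard; the same function could be run per subgroup (age, race, ethnicity) to probe equity:

```python
# A minimal, illustrative sketch: the data and the function name are made up.
# "truth" is the reference standard; the AI output is "disease", "healthy",
# or "ungradable" (no valid clinical decision could be made).
def validation_metrics(truth, ai_output):
    gradable = [(t, a) for t, a in zip(truth, ai_output) if a != "ungradable"]
    tp = sum(1 for t, a in gradable if t == "disease" and a == "disease")
    fn = sum(1 for t, a in gradable if t == "disease" and a == "healthy")
    tn = sum(1 for t, a in gradable if t == "healthy" and a == "healthy")
    fp = sum(1 for t, a in gradable if t == "healthy" and a == "disease")
    return {
        "sensitivity": tp / (tp + fn),                 # safety: diseased patients caught
        "specificity": tn / (tn + fp),                 # efficacy: healthy patients not over-referred
        "diagnosability": len(gradable) / len(truth),  # equity: fraction receiving a valid decision
    }

truth     = ["disease", "disease", "healthy", "healthy", "healthy", "disease"]
ai_output = ["disease", "healthy", "healthy", "healthy", "ungradable", "disease"]
print(validation_metrics(truth, ai_output))
# sensitivity ~0.67, specificity 1.0, diagnosability ~0.83 on this toy data
```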
Current trends in AI use in medicine and healthcare are mainly medical image interpretation and decision support. Deep learning, especially convolutional neural networks, has been a major contributor to the recent surge in use of AI in medicine. A few subspecialties have been more aware of this trend, especially radiology, ophthalmology, dermatology, pathology, and cardiology.
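For readers who want a structural feel for what a convolutional neural network is, the following is a minimal, hedged sketch in PyTorch; the layer sizes, input resolution, and two-class output are arbitrary illustrative choices, not the architecture of any cleared medical device:

```python
# A toy convolutional image classifier, for illustration only.
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn higher-level patterns
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # pool to one value per channel
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One 224x224 RGB image in, two class scores out (e.g., "refer" vs. "no refer")
model = TinyImageClassifier()
scores = model(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 2])
```

Real systems differ enormously in depth, training data, and validation, which is why the evidence questions discussed elsewhere in this thread matter more than the architecture itself.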
The challenge with looking at the advances in A.I. in healthcare is the speed at which new applications are arriving. Here is a summary of advancements in artificial intelligence in medicine and healthcare from just the past few days. I think it demonstrates how hard it is to keep up with the changes, and it also, I think, underscores the notion that what we need more and more is context around how AI will be implemented in everyday care.
A.I. model can find breast cancer earlier and eliminates racial disparities in screening. engt.co/2Wv1F2L
Machine Learning Predicts Kids at Risk of Not Getting Vaccinated bit.ly/2WqINBQ
Machine Learning Helps Design Complex Immunotherapies bit.ly/2Emvyei
Machine learning overtakes humans in predicting death or heart attack bit.ly/2EldW2s
A.I. Took a Test to Detect Lung Cancer. It Got an A. nyti.ms/2EnQ3Yb & bit.ly/2EmA3FY
The question of use cases is an important one. First off, it is useful to define what the term "use case" means. In terms of software engineering, Wikipedia defines a use case as "a list of actions or event steps typically defining the interactions between a role (actor) and a system to achieve a goal." Within the context of clinical care, use cases define the specifications and requirements to complete specific autonomous tasks within a broader IT environment. The use case defines the conditions under which the algorithms must execute consistently, reliably, and safely - basically, scenarios to improve medical care.
Dr. Chang nicely describes some of the specialties engaged in use case development, including radiology. The American College of Radiology Data Science Institute (DSI) has been very forward-thinking in its approach to use case development, through their Technically Oriented Use Cases for AI (TOUCH-AI) platform. The DSI has created use cases across multiple anatomic systems, such as abdominal, cardiac, musculoskeletal, and others. The cases include considerations for data set development and technical specifications to help AI developers.
For those interested, TOUCH-AI is worth exploring: acrdsi.org/DSI-Services/TOUCH-...
Kimberly- great question. I would look at addressing this question in 3 parts:
1) How far can AI go (the extent of possible use cases): I would say as far as the imagination can take it around solving for a problem in health care or the creation of a new problem to solve.
2) Where is the current need? I believe solving issues related to workflows for physicians is a priority, so that health care providers can concentrate on the patient encounter. That would likely weigh favorably on issues related to physician burn-out. An example of that would be a voice-recognition system (digital scribe) that populates the EHR automatically. A different aspect is to derive algorithms to predict events so that patients who are in need of attention get assessed and treated prior to deterioration of their health (a minimal illustrative sketch follows this list). Hence, needs exist around data extraction, efficiency, prediction, clinical decision support tools, improved outcomes, cost reduction, etc.
3) Different applications have been compiled by scientific journals and consulting firms. My colleagues have posted links above. For the more administrative side of health care, this reference may be beneficial: accenture.com/_acnmedia/PDF-49...
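To make the event-prediction idea in point 2 a bit more concrete, here is a minimal, hedged sketch using synthetic data, hypothetical features, and scikit-learn's logistic regression; it illustrates the general approach only and is not a validated clinical tool:

```python
# Illustration only: synthetic data, hypothetical features, untuned model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features per patient: e.g., age, heart rate, systolic BP, a lab value
X = rng.normal(size=(500, 4))
# Synthetic outcome (deterioration yes/no), loosely tied to the features
y = (X @ np.array([0.8, 1.2, -1.0, 0.5]) + rng.normal(size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X[:5])[:, 1]  # predicted probability of deterioration
print(np.round(risk, 2))                 # patients above a chosen threshold could be flagged for review
```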
The above are all excellent resources.
I have had the privilege of chairing meetings called Artificial Intelligence in Medicine, or AIMed, around the world as well as across subspecialties. The website with meeting listings is also a multimedia resource (AI-Med.io) and has a monthly academic magazine, videos of previous meetings, and our extensive eBook with a glossary (to be published in a more complete form by Elsevier later this year).
The question of medical education in the world of AI is quite relevant. On a superficial level, the answer could be: teach learners what AI is, what the terminology means (see other discussion thread in this series), and how to interface with it.
But the needs go much deeper than that. Physicians in training will need to learn how to manage the output of AI applications. In a world of expansive data on each patient, AI will inform diagnostic decisions and therapeutic directions. It will become the responsibility of the physician to evaluate metrics like probability, confidence, risk-benefit, sensitivity, specificity, and the like. These terms are not new to medicine but may have different meaning when informed by a continuously learning system.
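As one small, hedged illustration of why those metrics need careful interpretation (the numbers are illustrative, not drawn from any particular AI system), the worked example below shows how the positive predictive value of the very same AI output shifts with disease prevalence in the population being screened:

```python
# Same sensitivity/specificity, very different meaning for the individual patient.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prevalence in (0.01, 0.10, 0.30):
    ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90, prevalence=prevalence)
    print(f"prevalence {prevalence:.0%}: PPV of a positive AI call ~ {ppv:.0%}")
# prevalence 1%: ~8%; prevalence 10%: ~50%; prevalence 30%: ~79%
```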
A related concept will be identifying bias – more likely this will be in the output of the algorithm but could also be present in the training or validation data. Physicians will likely be the first to identify this in practice, informed by the patients they see.
Lastly, communication with patients will be different in an AI environment. Patients will expect a physician to help them understand the output from a digital interface and translate it into terms which have meaning to them and their families. And when patients decide to pursue a different course than the machine recommends, choosing intuition over big data, the physician will fall in the middle of that discussion. That is a challenging place.
It is that place that differentiates humans from the machines which augment them.
I fully support Dr. Silva's comments here and also believe that a new skillset and framework of thinking will be required. Digitalization and AI are occurring in parallel with an exponential growth in scientific knowledge and change in the healthcare landscape towards value-based care and contracts. With that in mind, the following may be useful to prepare clinicians for the coming changes in their practice:
1) Creation of a critical framework around the evaluation of new technologies, digital therapeutics, or AI applications, particularly with regard to their application in the diagnosis and management of disease. Should we evaluate based on the science (i.e., peer-reviewed publications), validated outcomes (who validates?), patient engagement, or even market traction (several of these will likely be direct to consumer)? Will the process be similar to what we have known so far in more traditional routes, or will market validation play a larger role, particularly with more direct channels to the end-user? What will become the better source of truth (PubMed vs. society guidelines vs. data aggregator platforms vs. another new concept)?
2) Understanding of reimbursement systems and the role played by these new technologies in decreasing costs and improving patient outcomes, particularly as we venture further into value-based care. In other words, the clinician will need to become familiar with the economics of the new solutions in the face of multiple players (patient, physician practice/health system, payer) and the impact on their practice.
There is already a lot to be learned in the traditional medical school curriculum. Providing a framework of thinking around some of these concepts will hopefully help support best practices in the face of the multi-level transformation our system is undergoing. And these concepts will obviously need to be addressed with an unwavering goal of supporting the best health of our patients.
As AI-based technology becomes more integrated in healthcare, medical education must offer additional skills for the physicians of tomorrow. As a neurologist, I have to understand MRI techniques and outputs in a way that would have been foreign to my predecessors, who relied predominantly on the physical examination. Similarly, the next generation of physicians will need to acquire knowledge in the data sciences as pertains to health care AI.
As Dr. Silva notes, medical training should also incorporate a nuanced approach to clinical decision making based on AI outputs. As Dr. Grenon highlights, it is imperative that physicians contribute to the content of new health policies surrounding AI - as championed by the AMA.
The title of this response borrows from Hubert Dreyfus' 1972 book emphasizing the limits of artificial reason when it comes to human intuition. As patients will always look to their physicians for empathy, healing, and guidance, medical schools should continue to promote the medical humanities and biomedical ethics in their curricula. And while the promise of AI has certainly advanced since Dreyfus' treatise, one point still holds as it relates to health care: the pursuit of medicine should, and will, remain intrinsically a human endeavor.
I had a recent opportunity to speak to the medical school deans and they were all very receptive to incorporating data science into the medical school curriculum. There is even one medical school that will dually train and educate every medical student in data science and/or engineering.
At our medical school, we teach skills such as digital literacy and knowledge about artificial intelligence, health sensors patients use, interpretation of the results of genomic testing, and other advanced technologies from 3D printing to augmented reality. We look at this group of technologies as "augmented intelligence," and students must see real-life examples, look at evidence-based papers describing their use, and create scenarios in which they would analyze data from such technologies with their patients.
This requires a completely new approach in medical education. Just like in the way patients and medical professionals become partners (instead of the hierarchy we have today), educators and students should also form such a partnership and learn from each other. The major reason behind that is how much students can learn outside the curriculum because of digital technologies.
In the past, we have introduced and used many therapies and drugs before the advent of evidence based medicine. We should make sure that the introduction of AI into clinical practice is evidence based from the start.
After all, we physicians are responsible for the medical home of patients. Therefore, the decision whether a specific AI is safe, efficacious, and equitable for your patient remains with the physician.
You are already seeing many claims about how awesome or terrible specific AIs are, which is confusing.
Medical students and physicians need to learn how to find and read evidence, how to separate the good evidence from the bad and the ugly, how to understand how evidence tells them what the AI can and cannot do, and how to evaluate and interpret the evidence for the safety, efficacy and equity of such AI - if any!
Excellent points in this thread regarding medical student education. As data science matures, we will see more med students with some education from their undergraduate years. Additional physician-specific training (as discussed in this thread) will be necessary during medical school. But, what about specialty specific education during residency? There is clearly opportunity here and radiology has been proactive in this regard.
Radiology is an informatics-driven specialty and has created a National Imaging Informatics Course and Curriculum, sponsored by the RSNA and SIIM. The curriculum is directed towards senior residents. The content covers topics such as standards, computers/networking, PACS, data plumbing and, of course, machine learning. The curriculum is worth checking out as a potential model across other medical specialties: sites.google.com/view/imaging-...
And then there is the challenge with practicing physicians. Many physicians, especially those later in their careers, received little "computer training" early on. With the emergence of EHRs, those physicians have had to get up to speed quickly, all while maintaining their already busy practices. AI will add another layer onto that. Directed continuing education on these new technologies will help enable success in patient care, which could take the form of the resident education I describe above. Many medical sub-specialties are already pursuing this goal through their CME offerings. Success in this regard will help us achieve all four components of the quadruple aim.
On behalf of the AMA, we welcome our expert panelists. I will kick-off the panel with the first question: the AMA's House of Delegates uses the term augmented intelligence (to describe both assistive and autonomous AI). Does characterizing AI in this manner resonate with you?
A great question to start. It will obviously be necessary to clearly state how the AMA views and defines the term, as it sets the tone for the conversation in health care.
The term “augmented intelligence” has a positive connotation and emphasizes a “human” component. This then also sets a direction when thinking of applications in the health care sector, for example in assisting practitioners in delivering better care. Some view AI as a continuum (see references):
-Assisted intelligence — where AI has replaced many of the repetitive and standardized tasks done by humans.
-Augmented intelligence — where humans and machines learn from each other and redefine the breadth and depth of what they do together.
-Autonomous intelligence — where adaptive/continuous systems take over in some cases.
We also need to think about machine learning (i.e. the processing of data to generate insights) in the continuum.
Overall, augmented intelligence resonates positively with me particularly because it emphasizes that component of a human interaction at the center of the task, which is key to our work in health care. I look forward to hearing my colleagues’ thoughts on this.
Ref:
-recode.net/sponsored/11895802/...
-usblogs.pwc.com/emerging-techn...
Thank you to the AMA for inviting us to join today's discussion, and thank you Sylvia for this important existential question regarding the definition of AI. I believe this characterization moves us in the right direction, delineating AI by the degree of autonomy for a given clinical scenario. In other words, assistive AI may refer to algorithms that provide clinical decision support, but still depend on a human to make a reactive decision, versus fully autonomous AI that offers a diagnosis or treatment independent of human input. An autonomy-based taxonomy for AI is critical to inform clinical integration and health policy going forward.
I believe that the term augmented intelligence works today because it emphasizes the cooperation between the physician and the technology in improving patient outcomes. This is probably a perfect description for some technology, but other systems will be more autonomous, and we should not avoid discussing those, as they affect the doctor-patient relationship.
It seems that many clinical tests will run autonomously, producing a well-regarded and statistically sound output, with the physician involved only in reviewing the result; the physician really cannot know what has occurred inside the black box. That role would basically be interpreting the result for the patient and using the autonomous result to craft a medical plan, which would still be done with augmented intelligence.
Thank you, AMA, for the opportunity to contribute to this important discussion. Sylvia’s question is a very good one to start the discussion.
I agree with Dr. Grenon’s response, a view which is also supported by the IBM web site blog included below.
The key concept is AI augmenting what physicians do. In other words, the need for AI to coexist with human decision making in a field as complex and heterogeneous as health care. This is the standard and the expectation at the present time.
Dr. Southerland’s points about assistive vs. autonomous AI are equally relevant, as autonomous implies a higher level of diagnostic and therapeutic decision making, greater intelligence if you will. These two terms are not necessarily synonymous. The other term to consider is “automated”, which implies a task performed independent of direct physician input and at a lower level of complexity than autonomous.
This discussion highlights the importance of a consistent taxonomy across AI discussions. For example, consider the definitive taxonomy for physician services, Current Procedural Terminology. The term automated is used in the description of many services. But autonomous, in the context of machine learning / continuously learning systems, may be new territory for thought leaders to consider, define, and share.
All of our experts raise important points. There are some schools of thought that characterize "augmented intelligence" as part of continuum of assistive - augmented - autonomous as noted by some of our expert panelists. The AMA's House of Delegates adopted an alternative framework where augmented intelligence is inclusive of both assistive systems like clinical decision support and fully autonomous systems that could render a definitive diagnosis, for example. In the latter case, this would aid the physician and patient to make decisions about course of treatment and other relevant interventions--thus still aiding human decision-making and scaling capacity. Dr. John Mattison, another leading expert in health care AI, was interviewed recently in preparation of a forthcoming AMA Board of Trustees report on health care AI that will be considered at the AMA's Annual Meeting in 2019. Dr. Mattison shared: "As we embed more and more machine learning in our clinical decision support and in our clinical workflows (face to face [and] virtual care), we will discover far more interaction and meshing between human and machine, physician and computer. The notion that the machine will acquire absolute superiority over the human in decision-making implies that the output of the machine will be strictly deterministic, as if it were just like the result of a serum sodium level. . . . Incorporating [...] highly variable and contextual human considerations into the treatment plan really requires thoughtful and empathic discussion between doctor and patient."
Yes, Augmented Intelligence resonates because the idea of pitting humans vs. machines is not realistic in healthcare. These are humans who need the caring and humans who give the caring. Thus, human-centric care will need to include humans augmented by machines, as is the case with other tools. AI will be another tool for clinicians, patients, fRamilies (unpaid caregivers who are friends and families), and others in the eco-system to prudently select and use.
Yes, that's very helpful. It's also great to see that the AMA is trying to use the term "explainable intelligence" in its reports and policy papers.
That said, it might make sense to make a distinction between artificial (narrow) intelligence as a technology and augmented intelligence as a cultural change in the job of medical professionals.
The latter framing provides more space for discussions about the technology's impact on this profession; therefore, I'm very grateful to the AMA for making this happen here, and I look forward to the conversations.
Hi Sylvia, I am supporting here the view of Dr. Mattison, as I believe that physician judgment and empathy are key components of the therapeutic relationship. Furthermore, there are nuances that currently require a human brain to solve, particularly with regard to compiling multiple factors (the patient's beliefs, voice patterns, body language, social determinants of health beyond what is captured by the EMR) when discussing the best management plan and alternative treatments.
That being said, we may come to a point in the not-so-distant future where technology and AI are able to capture these factors in a way that is as efficient as a highly empathetic and skilled health practitioner. Then what happens? Will the AI still be used simply to augment decision making, or will it function more independently, to the point where it is granted the equivalent of a license to practice? Considering trends in the consumerization of health care and the growth of start-ups in the mental health space, this is a scenario that needs to be looked into. And specifically assessing the consumer side of the equation, solutions that may not be acceptable to Baby Boomers and Gen X may be on the near horizon for Gen Y and Z. This is definitely an interesting discussion to have.
As others have also said, within augmented intelligence the distinction between assistive AI and autonomous AI is critical, as the evaluation of their safety, efficacy, and equity differs, and therefore so do their accountability and liability. Assistive AI assists the physician or other provider in making better or more efficient clinical decisions, though ultimately the decision rests with the physician, and any liability is on that physician.
Autonomous AI makes the clinical decision, not the physician, and thus the accountability and liability lie with the AI's creator - the developers making it. That is why, for example, IDx has medical malpractice insurance for its autonomous AIs.
And definitely for autonomous AI, we need rigorous design and validation standards.
These costs to date seem largely covered by large health care systems, research budgets, and innovators. It seems to me that we do not have a way to properly recognize, in fee-for-service billing and payment, the financial cost of developing and implementing these programs. In the short term, that will mean these are used for population or community programs that reward those using this technology to improve care and reduce cost. Over time, CMS and commercial payers will need to develop new methods that account for the real costs and real benefits of this technology.
Dr. Repka nicely summarizes the challenges we face in establishing means of payment within fee for service systems such as the Medicare Physician Fee Schedule. Similar questions arise in other payment systems such as the Hospital Outpatient Prospective Payment System, the Inpatient Prospective Payment System and the Quality Payment Program. And private payors, including Medicare Advantage, provide additional venues for discussion.
Speaking of the Quality Payment Program (QPP), there are essentially two arms to the QPP: the Merit-Based Incentive Payment System (MIPS) and alternative payment models (APMs). There is opportunity for AI in both. For example, under MIPS, physicians are rewarded for improving quality, decreasing cost, promoting interoperability, and encouraging practice improvement activities. Other posts on this topic have highlighted the quadruple aim, and MIPS has similar goals. Scoring in the MIPS performance categories is largely based on individual measure performance. Those measures will need to evolve in a way that recognizes and encourages the use of AI but does not overly burden physicians who are in a "watch and wait" mode while the regulatory climate matures.
I may have a different take on costs, particularly as they relate to an improvement in operations and organizational outcomes. Implementation of an AI system brings a competitive advantage that, frankly, several businesses are jumping at. So one could look at the acquisition of such systems as a required investment in order to remain competitive, particularly as others will equip themselves with these capabilities. From that standpoint, the costs would need to be assumed by the organization itself, much like an IT system upgrade. The organization needs to decide its approach to partnering, acquiring, or implementing an AI strategy that is aligned with its mission and strategic plan, with a clear analysis of the return on investment.
As we move into the data age, AI obviously goes hand in hand with data (access vs ownership). Some have even referred to data as the new oil (see reference below). Health care organizations would subsequently benefit from positioning themselves properly with their data and AI strategy, particularly with new players venturing into the system (Big 5- see reference). And since our goal is to make patients' lives better, it should be done in a way that is responsible and fostered around that goal.
economist.com/leaders/2017/05/...
businessinsider.com/alphabet-a...
There will be a Senate Finance Committee briefing tomorrow on "Autonomous AI and Healthcare Savings." All are invited: Tuesday, May 28th, 215 Dirksen Senate Office Building, 12:15 PM - 1 PM. There will be three presentations, two by the autonomous AI companies 3DDerm (Liz Asai) and IDx (me), and one by the AMA. Lunch will be provided.