Augmented intelligence (AI) in health care is complex, and this space is evolving rapidly. This discussion is one of the initial steps in the AMA’s efforts toward educating physicians about the changes AI systems and methods will likely bring to clinical practice and how they can leverage them to support and achieve the quadruple aim of health care. Experts in the field will cover the basics of AI, including terminology and trends, share use cases, and address key public policy issues.
The AMA appreciates the participation and expertise of the panelists. We thank you for the contributions each of you has made to a rich and wide-ranging discussion. Our last request is that you identify any key issues or topics that we did not cover that should be included in a future health care AI discussion. We also ask that you identify any key topic that we covered that you recommend we revisit in a follow-up, focused-topic health care AI series. Again, thank you for your contributions to improving patient health outcomes and to the medical profession. Sylvia, Kim, and Ashley
Excellent question, Kimberly, and the role of medical specialties and their societies will be critical. Regarding radiology, it makes sense that radiology would be a leader among the specialties, as the digital nature of imaging makes it ripe for impact from AI. I have mentioned radiology across several discussion threads this week and thought I would bring all those comments into one response for easier reference.
The American College of Radiology Data Science Institute (DSI) has been very forward-thinking in its approach to AI applications in radiology. acrdsi.org/
For instance, to enable use case development, its Technically Oriented Use Cases for AI (TOUCH-AI) platform has created use cases across multiple anatomic systems, such as abdominal, cardiac, musculoskeletal, and others. The use cases include considerations for data set development and technical specifications to help AI developers. acrdsi.org/DSI-Services/TOUCH-...
Assess-AI is a platform to collect AI-related data via a national registry. Importantly, data on effectiveness are reported by radiologists at the point of care, and the registry provides benchmark data to inform best practices and identify potential outliers. Assess-AI also captures relevant metadata associated with individual encounters, such as equipment type or patient demographics. In combination, these two platforms enable post-market surveillance in the wild and can potentially inform FDA post-market surveillance of imaging applications. acrdsi.org/DSI-Services/Assess....
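As a purely hypothetical sketch (not the actual ACR Assess-AI schema), the kind of encounter-level record such a registry might capture could look like this in Python:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIEncounterRecord:
    """Hypothetical shape of one registry entry for post-market AI monitoring.
    Illustrative only; not the actual ACR Assess-AI data model."""
    algorithm_name: str                    # which AI application produced the output
    algorithm_version: str                 # versioning matters for tracking drift over time
    ai_finding: str                        # what the algorithm reported
    radiologist_agrees: bool               # point-of-care effectiveness feedback
    equipment_type: Optional[str] = None   # metadata: scanner make/model
    patient_age: Optional[int] = None      # metadata: demographics
    patient_sex: Optional[str] = None
```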
Regarding education, an informatics course has been created, the National Imaging Informatics Curriculum and Course, sponsored by the RSNA and SIIM. The curriculum is directed toward senior residents. The content covers topics such as standards, computers/networking, PACS, data plumbing and, of course, machine learning. sites.google.com/view/imaging-...
An important and useful role for AI is to provide a platform through which patients can input their medical data into the health care system from their cell phones. This allows the patient to send medical data ahead of the visit, giving the provider more detailed and accurate information and saving time for everyone during the actual visit. We have developed a basic AI SaaS program for this based on individual anatomic and organ system algorithms.
In my opinion, specialty societies, as noted earlier in the discussion, have to be able to define, in a manner referring physicians and patients can understand, when the technologies are suitable and valid. In many cases there may be multiple technologies that are really not ready for patient care. This will be a difficult job; it will also be hard to do in the timeframes that I anticipate patients and society will expect, and societies will need to flex and adjust rapidly.
National specialty societies are already taking stock of how to address and prepare for health care AI as mentioned by Drs. Repka and Silva. In addition to the American College of Radiology as highlighted by Dr. Silva, the American Academy of Dermatology has recently issued a policy statement on health care augmented intelligence.
And, the American Academy of Family Physicians has partnered with CMS CMMI on the AI Health Outcomes Challenge. The Challenge is billed as "an opportunity for innovators to demonstrate how AI tools – such as deep learning and neural networks – can be used to predict unplanned hospital and skilled nursing facility admissions and adverse events."
And, we expect more highlights this year.
And, the AMA is offering an array of resources and tools for physicians, medical societies, and others to learn more about health care AI applications. This includes ongoing policy development (another round of policies will be considered by the AMA's House of Delegates in a couple of weeks), CME on health care AI, the JAMA Network special collection page on machine learning, an ongoing AMA Wire article series, state and federal advocacy, and the work of the Digital Medicine Payment Advisory Group. This is just the start.
ama-assn.org/amaone/augmented-...
How can physicians get involved in the design, development, evaluation, and dissemination of AI systems and methods to ensure their voice is heard and their input is incorporated into these tools?
First, physicians should support their advocacy bodies, such as the AMA and specialty-based professional societies, to promote policies that protect the integrity of the doctor-patient relationship and ensure smart integration of AI into healthcare systems. Physicians should familiarize themselves with the information technology infrastructure at their hospitals and practices to better understand the current state and prepare for how AI might be considered for future integration. As AI systems become more ubiquitous, physicians should remain up to date on the latest literature and guidelines within their specialties that incorporate augmented intelligence. Lastly, we should all advocate that physicians in training gain exposure to data science in their curricula, understand the vocabulary of AI, and ideally be part of the creation of new AI systems to augment practice within their chosen specialty area.
Great question, Kim. Our goal with the AMA Physician Innovation Network (PIN) is to get as much physician involvement and feedback into new technology solutions and applications as early and often as possible. Step one is being able to easily connect and bridge technology innovators/developers with those on the front lines (physicians, providers) who have problems that need to be solved... as seamlessly as possible. With 5,000 physicians and innovators now participating on the platform, we look forward to more and more users joining to get involved.
Health care AI holds much promise, but physicians’ perspective needed:
ama-assn.org/practice-manageme...
Dr. Southerland and Meg bring important perspectives. I would add that most of the friction will probably happen in the next decade, as these technologies make it to the workplace and current generations of physicians have not been trained on them. However, for current and future trainees who grew up with ubiquitous technology and AI, the use of such solutions in healthcare will likely become expected. There will be a place for individual societies in keeping physicians involved and for new societies addressing these technologies, e.g., dimesociety.org. Medical schools and residencies will also need to ensure their curricula include activities addressing these new technologies (e.g., journal clubs).
Payment of health care costs is subject to different payment models. For example, some costs are paid on a fee-for-service basis, others under risk-based or capitated models. In addition, some costs are direct and others are treated as indirect costs. And, there are different applications of health care AI systems, including clinical, health administration, and research, for example. Discussing payment and reimbursement, therefore, depends on a host of relevant facts. Context matters. Currently, clinical applications of new technologies that include algorithms are paid under fee-for-service, capitated, and risk-sharing models, either directly or indirectly. To address costs and payment pathways relevant specifically to clinical applications of health care AI systems, the AMA's Digital Medicine Payment Advisory Group (DMPAG) is focused on addressing the particular pathway to payment for certain AI systems with clinical applications in the context of fee for service. The DMPAG also considers the relevance of AI applications that would undergird and drive the success of alternative payment models. What are examples of clinical applications that can be captured in the fee-for-service model, and which applications represent value best captured in the context of alternative payment models?
Sylvia’s question and comments on payment methodology are excellent. Rather than propose which applications will fall where in this methodology, I will take this opportunity to provide some background on how FFS payment is determined within Medicare. In a parallel discussion thread, I mentioned the QPP, which is also a relevant payment system.
Payment amounts within the Medicare Physician Fee Schedule are based on the Resource-Based Relative Value Scale, which assigns relative value units (RVUs) to individual CPT codes. The Total RVU = Work RVUs + Practice Expense (PE) RVUs + Malpractice RVUs. Work captures physician (or qualified health professional) work, such as the work associated with a clinical visit, procedure or interpretation. PE captures the costs associated with providing a service. Malpractice speaks for itself and is based on specialty-specific data.
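To make the arithmetic concrete, here is a minimal Python sketch of how those components combine into a Medicare payment amount. The RVU values and conversion factor are hypothetical, and the geographic practice cost index (GPCI) adjustments are included only to show where geography enters the formula.

```python
# Minimal sketch of how a Medicare Physician Fee Schedule payment is assembled
# from RVU components. All values are hypothetical and chosen only to show the
# arithmetic; they do not correspond to any real CPT code or payment year.

def total_rvu(work_rvu, pe_rvu, mp_rvu, work_gpci=1.0, pe_gpci=1.0, mp_gpci=1.0):
    """Total RVU = Work + Practice Expense + Malpractice, each adjusted by its
    geographic practice cost index (GPCI)."""
    return work_rvu * work_gpci + pe_rvu * pe_gpci + mp_rvu * mp_gpci

CONVERSION_FACTOR = 36.00  # hypothetical dollars per RVU

rvus = total_rvu(work_rvu=1.50, pe_rvu=0.80, mp_rvu=0.10)
payment = rvus * CONVERSION_FACTOR
print(f"Total RVUs: {rvus:.2f}, allowed amount: ${payment:.2f}")
# -> Total RVUs: 2.40, allowed amount: $86.40
```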
Fairly straightforward so far, right? Things get more complex when we delve deeper into PE. PE RVUs include two types of cost: direct and indirect. Direct costs are specific to the service (code) being performed and include staff, supplies, and equipment – for instance, the gloves used during an examination. CMS updates the direct expenses periodically as new codes are created or re-reviewed. Indirect costs are not code specific but are necessary for the service, such as the lights or a receptionist. Indirect expenses are based on data from the Physician Practice Information Survey (PPIS), performed several years ago, in the form of specialty-specific PE per hour. Several variables within the PE methodology are updated periodically, such as budget neutrality adjustments, direct equipment costs per service, and direct-to-indirect ratios.
Continued below...
Now, let’s get back to Sylvia’s comments and question. AI represents a new paradigm for the established methodology. Which is okay – the system was built to adapt to new technologies and has always done so as new technology enters the fee schedule. But several questions remain: what does AI do to physician work? Where does software as a medical device (SaMD) fit into the PE methodology – direct or indirect? Are there malpractice RVU implications? And what about the hardware used by the SaMD – is that affected by the FDA 510(k) application, such as when the device is de novo (no predicate device)?
As Co-Chair of the AMA DMPAG, I can say these are questions we are actively pondering. I look forward to discussing these and other questions that may arise in this discussion thread.
For more details on the work of the Digital Medicine Payment Advisory Group:
Here are some of the areas of focus for the advisory group:
--Create and disseminate data supporting the use of digital medicine technologies and services in clinical practice.
--Review existing code sets (with an emphasis on CPT® and HCPCS) and determine the level to which they appropriately capture current digital medicine services and technologies.
--Assess and provide clinical guidance on factors that impact the fair and accurate valuation for services delivered via digital medicine.
--Provide education and clinical expertise to decision makers to ensure widespread coverage of digital medicine (e.g., telemedicine and remote patient monitoring), including greater transparency of services covered by payers and advocacy for enforcement of parity coverage laws.
--Review program integrity issues including, but not limited to, appropriate code use, and other perceived risks unique to digital medicine. Develop guidance and clarity on issues to diverse stakeholder groups.
Autonomous AI has tremendous potential for cost savings and improved access, if we do it right - and if we are transparent and accountable for the safety, efficacy and equity of AI.
But the lack of clear payment models is currently holding back professional investment in AI R&D, driven by the twin risks of the FDA validation process and uncertainty about payment.
Payments in my opinion should be directed to those forms of (autonomous) AI that save cost (efficacy), improve patient outcomes and access. Autonomous AI has the potential for enormous cost savings, but many of us in this field are concerned about "glamour AI", where we would pay globs of money for AI that is technologically exciting and "cool" but which does not improve patient outcomes and otherwise does not advance the quadruple aim.
Payment for autonomous AI should in this framework be tied to rigorous clinical, workflow and human factors validation, including machine learning design and training review, combined with preregistered clinical studies tied to patient outcome. All these elements, in my opinion, should be accounted for before payment can be considered. Also, it requires accountability from the AI developer, so that the medical liability lies primarily with the autonomous AI and not with the provider using it - who should not be accountable after all for a decision he/she did not make. Most importantly, in my view, it requires evidence of clinical usefulness, as measured by clinical adoption, and careful consideration of deployment by providers.
Considerations on payment such as these above can be applied to both fee-for-service and population-health type payment models: the key is making sure that payment goes to those AIs that improve our patients' outcomes.
Finally, I want to make sure everyone is aware of my conflict of interest in this matter, as founder and CEO of an autonomous AI company.
Can AI support access for vulnerable populations to improve outcomes and reduce costs? Can AI help address social determinants of health (SDOH)?
If we choose to talk about AI in health care, we need to talk to the same level about AI and social determinants of health (SDOH; these go by various names in other fields). Several of the major concerns physicians have about their patients are whether they can travel to their appointments, pay for their medications, pay for a healthier diet, have someone to help ensure they are adherent to their treatment plans, or have access to behavioral health services. These are all variables deeply rooted in social factors, i.e., the importance of the "zip code" (hsph.harvard.edu/news/features...).
If we want AI to benefit our populations in a fair way and improve access to both care and AI itself, there is a strong imperative to address SDOH. And if we improve the social condition of our patients, we may actually be able to address disease where it needs its first attention.
New companies are already addressing social determinants of health, with various degrees of reliance on machine learning/AI (see list below). Hopefully, more and more health care systems or payers will be able to benefit from their offerings or that of new players focused on improving the social fabric. In no particular order:
-Unite Us (uniteus.com)
-Nowpow (nowpow.com)
-Healthify (healthify.us)
-TavHealth (tavhealth.com)
-InquisitHealth (inquisithealth.com)
Why is transparency important for augmented intelligence/AI? What does that mean (practically) for developers, regulators, health systems, physicians, researchers, and patients?
AI has enormous potential for lowering cost, improving access and improving quality. But many have justified - and some unjustified - concerns about AI in healthcare such as patient safety, AI bias, job loss, ethics and loss of privacy.
Transparency and accountability about the risks and benefits of healthcare AI are crucial, as we have all seen how a lack of transparency and unethical behavior can lead to backlash in an emerging field. After a death from an autonomous car last year, most car companies slowed their investment in autonomous driving R&D. The Theranos disaster has set back lab innovation for years.
We need to be honest about the gains and risks of healthcare AI from the start. And transparency about safety, efficacy and equity forms a solid framework.
The AMA commissioned a study to learn more about physician views on new digital medicine modalities (like telehealth and remote patient monitoring). The surveyed physicians were generally enthusiastic about adoption, but one of the key questions they needed addressed (besides the threshold question of clinical effectiveness) was medical liability. Essentially, will I (the physician) incur increased liability risk for using this new technology? Developers are advocating for streamlined review of AI systems or, in some cases, no regulatory review in the context of clinical decision support or health and wellness applications. Under these conditions, developers must be assigned the appropriate level of responsibility and accountability, which includes liability. What are your views on the appropriate approach to align incentives so that those with knowledge of the risk and best positioned to mitigate it are properly incentivized to do so?
Physicians are fortunate that the AMA, through its policies and advocacy, is maintaining vigilance over the role that medico-legal risk plays as AI is incorporated into healthcare. As AI sophistication grows and the continuum of device autonomy shifts to the right (with almost completely autonomous systems representing the extreme), implicating the treating physician from a liability perspective may seem even more remote, and assigning liability to a broad spectrum of designers and manufacturers seems more likely. Yet physicians remain in a position of at least ordering and applying these innovative devices and algorithms, and it’s likely that the pool of eligible defendants in a tort scenario will increase and expand rather than shift from one party to another. Furthermore, because of the increasing accuracy of AI in certain diagnostic scenarios, the risk of not employing AI in a missed diagnosis could eventually incur as much liability as errant diagnostic activity associated with a faulty AI decision. The AMA has been an astute voice on behalf of physicians in monitoring this activity, being sensitive to the premature creation of an inappropriate new standard of care, and encouraging developers and designers to assume risk when appropriate. As with many healthcare activities, risk can be minimized through transparency and communication, and just as important as physician insight into and understanding of the functionality of the AI algorithm, so too is the sharing of this insight with the patient, so that the patient, as part of the healthcare team, appreciates the role that AI is playing in care.
Liability and accountability should be assigned to the party in the best position to assess and mitigate risks associated with the intended use.
Thus, creators/developers of 'Autonomous AI' should assume liability for patient harm arising from the output of the AI because, by definition, the output and clinical decisions of these autonomous AI systems can be relied on without physician/human interpretation.
Conversely, with "Assistive AI", liability for patient harm lies with the provider deciding to use the Assistive AI, since there the provider, not the AI, is ultimately responsible for the clinical decision.
Shifting medical liability from the provider to the autonomous AI (creator) is a feature of Autonomous AI that can lead to significant healthcare cost savings and that addresses one component of the Quadruple Aim - improving the worklife of clinicians. Creators of Autonomous AI need to be incentivized to assume appropriate liability by tying it to predictable payment models.
I agree that AI developers and companies will need to hold responsibility for their products, reminiscent of medical devices. However, this is a complex issue, and part of it is cultural, particularly when looking at the risk of liability. We need to think about concepts of negligence, negligence leading to injuries, and damages incurred. Beyond that, we also need to think about what is "standard of care".
Currently, AI (specifying here "AI as a therapeutic") is not standard of care, but as AI makes it into peer-reviewed papers, society guidelines, textbooks and Board examinations, there may be expectations that physicians have to use such technologies. With that in mind, in the case of liability, the question may become where the negligence or fault lies: with the AI technology itself or with physicians? And is this overall a trend we want to keep following in this country? What if the patient started using the AI system on their own (with direct-to-consumer applications and delayed diagnosis/management) - is the fault altered, decreasing liability for physicians?
I would also note that new technologies shift the paradigm of medicine away from traditional science and teachings. For example, the heavy reliance on the physical exam does not translate easily to a telemedicine consultation. So as AI and other new technologies make it into the practice of healthcare, we will need to ensure that training and continuing medical education reflect this change. Similarly, the legal system will have to reflect these realities.
Excellent points, and such an important discussion. Let’s think about the liability question from a practical, real-world clinical perspective.
Presently, we use medical devices all the time in practice. And adverse events during the use of these devices do occur. When the device is low risk (Class I), such as a stethoscope, the responsibility for a misdiagnosis generally falls on the physician. It is hard to blame the stethoscope for missing an abnormal heart rhythm. When the devices are high risk (Class III), the liability for a poor outcome may fall more towards the device manufacturer/developer when the poor outcome is device related. For instance, failure of a cardiac implantable device. But, where the error is technical, such as shortcomings during placement, or misinterpretation of output data, that liability may fall more towards the physician.
But, what about devices in the middle (Class II), which is where clinical decision support (CDS) and diagnostic support software reside? For traditional CDS interfaces, the liability trends more towards the physician, since they are expected to independently validate the diagnostic recommendation based on the patient in front of them. In other words, they evaluate whether the recommendation is appropriate and, if so, follow that recommendation. And if not, adjust. This is not new territory – take CDS for mammography – CDS is quite common here, and mammographers have learned to integrate those findings into their workflow and decision making.
So, what about autonomous AI enabled devices? Continued below
As Dr. Abramoff points out, autonomous systems take us into new territory. Diagnostic and therapeutic decisions are being made, by definition, without physician input, which seemingly shifts the liability back towards the developer. Granted, these devices may reside in a physician’s office and the patients affected will likely see a physician. But the core functions are outside the physician’s immediate control. Again, this is the nature of autonomous systems. [Side note: for those interested, the IMDRF risk stratification is worth reviewing: Section 7.2 in the following document. This is an important component of the FDA's recently proposed updated regulatory framework for continuously learning systems: imdrf.org/docs/imdrf/final/tec...]
So, back to practical questions. If I am a physician pondering an autonomous system in my office, how do I explain this evolving liability paradigm to my in-house compliance team? Or patients? Is consent necessary? And what are the next steps when an adverse outcome occurs or I see a trend in that direction? And I am sure malpractice attorneys (on both sides) are asking these same questions.
Answering them will help ensure these critical technologies reach the patients who will benefit from them.
The questions of liability and explainability are deeply intertwined in the field of AI. Doctors are hard-pressed to rely on machine recommendations that contradict their own medical judgment without a clear explanation. Doctors and hospitals cannot rely on algorithms that may be discriminating against classes of patients through hidden variables or biased training data. A doctor whose software recommends a counterintuitive choice will need more than a reason from the software – it must be a reason that humans can understand. This is no small feat. DARPA recently announced a $2 billion investment toward the next generation of AI technology with “explainability and common sense reasoning.” If AI is going to augment rather than replace human decision-making, as many hope, then explainability is key. But therein lies the rub: the best AI will not just be faster or cheaper than human decision-makers, but reach better conclusions by seeing things people cannot. Sometimes, the more effective the AI, the harder it will be to explain its decisions in terms humans can understand. Indeed, many AI experts posit an inherent technical tradeoff between accuracy and explanation. Questions of liability will turn in part on quantifying and calibrating the acceptable trade-off between accuracy and explainability for a given scenario, which will in turn help answer the question of what a doctor or other human operator should do when she doubts a machine recommendation.
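To illustrate what "a reason humans can understand" might look like in practice, here is a minimal, purely hypothetical Python sketch: for a simple linear risk model, each feature's contribution to the score can be read back to the clinician directly, whereas a more accurate but more complex model typically has no equally direct decomposition. The model, weights, and patient values below are invented for illustration only.

```python
import numpy as np

# Hypothetical linear (logistic-style) risk model: each feature's contribution
# to the score is simply coefficient * value, which can be read back to the
# clinician as a human-understandable reason for the recommendation.
feature_names = ["age", "systolic_bp", "hba1c"]
coefficients  = np.array([0.03, 0.02, 0.40])   # invented learned weights
patient       = np.array([67.0, 151.0, 9.1])   # invented patient values
intercept     = -9.0

contributions = coefficients * patient
score = intercept + contributions.sum()
risk = 1.0 / (1.0 + np.exp(-score))

for name, c in zip(feature_names, contributions):
    print(f"{name}: contributes {c:+.2f} to the risk score")
print(f"predicted risk: {risk:.2f}")

# A deeper, more accurate model may offer no equally direct decomposition,
# which is the accuracy-versus-explainability tension described above.
```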
We touched previously on the importance of demonstrating evidence of benefit for AI systems and applications. For the innovators out there who are trying to create systems that are relevant and aligned with current health care needs, what would be the suggested playbook? When should they consider collaborating with academic institutions, clinics or hospitals? With often limited resources, should they go for an increased number of peer-reviewed publications or simply more direct experience with consumers? Which regulatory body, agency or stakeholder, if any, should they consider engaging to get the highest amount of validation? FDA is obviously a place to start (regulations.gov/document?D=FDA...), but what else should they consider?
This is an essential question. Innovators should consider what physicians and practices must do in order to integrate new solutions into clinical practice. The AMA has developed a Digital Health Implementation Playbook that helps practices address key questions. ama-assn.org/system/files/2018...
A similar set of questions should be addressed by innovators as part of ideation. They should have a strategic plan that addresses 1) innovation (building the evidence base of clinical efficacy as well as safety); 2) navigating regulatory requirements and quality assurance as well as applicable state requirements (FDA, FTC, FSMB/state boards); 3) payment (coding, valuation, coverage); 4) liability (which comes in different flavors, including product, tort, program integrity, and HIPAA/state privacy laws); 5) existing infrastructure to support technology (interoperability, connectivity (last mile issues), common data models); and 6) deployment conditions (training, professional development, change culture and tools). Ideation is not sufficient, but a solid grasp of these topics and a thoughtful game plan for navigating them are important. Working with academic centers and physician practices helps. (This Physician Innovation Network, for example, is designed to support such interactions.) In addition, national medical specialty societies also can be important communities to engage and understand. National specialties are doing incredible work in this area already, including the American College of Radiology Data Science Institute and the American Academy of Family Physicians' joint CMS CMMI AI challenge.
In a recent episode of the excellent Atlantic podcast series Crazy/Genius: “I think privacy is the wrong way to describe the issue we face in a world of pervasive unregulated data collection,” says Julia Angwin, a longtime investigative reporter. She prefers another term: data pollution.
“I’ve long felt that the issue we call privacy is very similar to the issue we call environmentalism,” she says. “It’s pervasive. It’s invisible. Attribution is hard. Even if you get cancer, you don’t know if it’s from that chemical plant down the road. Living in a world where all of your data is collected and swept up in these dragnets all the time and will be used against you in a way that you will probably never be able to trace and you will never know about it feels like that same type of collective harm.” Source: theatlantic.com/ideas/archive/...
Agreed; in healthcare, privacy is an issue when deidentified data is linked with existing free or purchasable datasets to reidentify individuals.
In addition, at the aggregate level, do we also have data pollution? Collective harm? What are the implications and opportunities in healthcare AI?
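To make the linkage risk concrete, here is a minimal, purely illustrative Python sketch of how a "deidentified" clinical extract can be re-identified by joining on quasi-identifiers (ZIP code, birth date, sex) with a public dataset such as a voter roll. All records below are fabricated.

```python
import pandas as pd

# "Deidentified" clinical extract: names removed, quasi-identifiers retained.
clinical = pd.DataFrame([
    {"zip": "52240", "birth_date": "1961-07-14", "sex": "F", "diagnosis": "diabetic retinopathy"},
    {"zip": "52241", "birth_date": "1985-02-03", "sex": "M", "diagnosis": "hypertension"},
])

# Public or purchasable dataset (e.g., a voter roll) sharing those quasi-identifiers.
public = pd.DataFrame([
    {"name": "Jane Example", "zip": "52240", "birth_date": "1961-07-14", "sex": "F"},
    {"name": "John Sample",  "zip": "52241", "birth_date": "1985-02-03", "sex": "M"},
])

# A simple join on the quasi-identifiers re-attaches identities to diagnoses.
reidentified = clinical.merge(public, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```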
Nowadays, everyone seems to be building artificial intelligence-based software, including in healthcare, but no one talks about one of the most important aspects of the work: data annotation and the people who are undertaking this time-consuming, rather monotonous task without the flair that usually encircles A.I. Without their dedicated work, it is impossible to develop algorithms. They might be the unsung heroes of augmented intelligence.
Have you seen good examples of incentives, policies or platforms that have facilitated the job of physician data annotators?
Compared to other fields of AI, AI in healthcare will always be data-challenged, because acquisition of patient data is often risky for the patient (for example, radiation), expensive, and ethically tricky. In addition, the 'truth' or 'annotation' or 'reference standard' can be expensive or risky (such as a biopsy, or agreement between independent experts), or have unacceptable latency (patient outcome in slowly progressive diseases).
But if we take that into account, the principles of safety, efficacy and equity for (autonomous) AI show that the patient, and therefore patient outcome, are primary.
For example, for diagnostic AI, it is more important that the annotation represents clinical outcome, or a proxy thereof, as much as possible, rather than what clinicians think.
Thus in many cases, and especially in emergent and acute conditions, we can depend on outcome as the reference standard, but in more slowly progressive diseases, or where a control group is ethically unacceptable, we will have to depend on the best predictive proxy for outcome. In many fields, so-called reading centers have been organized to provide a consistent, high-quality reference standard.
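As a purely illustrative sketch of why the choice of reference standard matters, the Python snippet below scores the same hypothetical AI output against two different labelings: one based on patient outcome and one based on clinician consensus. The data are fabricated; the point is only that reported performance depends on which reference standard is adopted.

```python
def sensitivity_specificity(predictions, reference):
    """Compute sensitivity and specificity of binary predictions against a
    binary reference standard (1 = disease present, 0 = disease absent)."""
    tp = sum(p == 1 and r == 1 for p, r in zip(predictions, reference))
    tn = sum(p == 0 and r == 0 for p, r in zip(predictions, reference))
    fn = sum(p == 0 and r == 1 for p, r in zip(predictions, reference))
    fp = sum(p == 1 and r == 0 for p, r in zip(predictions, reference))
    return tp / (tp + fn), tn / (tn + fp)

ai_output           = [1, 1, 0, 0, 1, 0, 1, 0]  # hypothetical AI calls
outcome_reference   = [1, 1, 0, 0, 0, 0, 1, 1]  # labels from patient outcome
consensus_reference = [1, 1, 0, 0, 1, 0, 1, 0]  # labels from clinician consensus

print(sensitivity_specificity(ai_output, outcome_reference))    # (0.75, 0.75)
print(sensitivity_specificity(ai_output, consensus_reference))  # (1.0, 1.0)
```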
My thanks for the opportunity to contribute to these discussions. I am especially proud of the AMA, including the DMPAG, for representing physicians in the important discussions surrounding AI.
As Co-Chair of the DMPAG, I can assure you that all of the points expressed here were heard and will be part of our ongoing discussions.
Regarding topics that may be worthy of a more focused health care AI series, these seemed to generate considerable interest over the past week:
- Payment paradigms for AI technology
- Regulatory oversight of AI-enabled CDS and DxSS
- Ethical considerations under AI
- Privacy/security/patient protections under AI
- Tools physicians can use to integrate AI into their practices
Lastly, thank you to the fellow experts who contributed to this topic. It was an honor to learn from each of you.