The AMA House of Delegates passed policy recommendations for AI in health care, setting the stage to ensure the physician voice is leading the way. Join the discussion to learn more about the new AMA policy, how AI can help the industry achieve the Quadruple Aim, and further explore the ethical and legal considerations when implementing AI solutions.
What needs to be considered in the development of health care AI as outlined in AMA policy?
There is a range of outstanding policy questions regarding the development of AI in health care. AMA policy begins to address many of these, but most issues will remain evolving conversations as both the technology improves and the industry's moral compass is challenged over time. Here are five examples mentioned in the AMA's AI report where nuanced policy can play a supportive and guiding role in the US:
- Verification. Research into methods of guaranteeing that the AI systems meet established specifications.
- Validation. Research into ensuring that the specifications, even if met, do not result in unwanted behaviors and consequences.
- Security. Research on how to build systems that are increasingly difficult to tamper with – internally or externally.
- Control. Research to ensure that AI systems can be interrupted (even by other AI systems) if and when something goes wrong, and that normal function can be restored.
- Explainability. Research into interpretability, especially as it pertains to black-box algorithms, so that stakeholders can understand how an output was determined.
The Council on Legislation has worked with the Board of Trustees and other AMA Councils to draft policy (ama-assn.org/amaone/augmented-...) which allows the AMA to be “at the table” for ongoing discussions with private and public sector entities concerning the progressive integration of Augmented Intelligence (AI) into the health care system. Some features of AI are analogous to self-driving motor vehicles, putting patients at risk for bad outcomes and physicians at risk for medical liability crises. It is possible that AI could drain billions of dollars from health care budgets at a time when far too many people lack access to good traditional preventive and therapeutic medical care. So, AI can be a slippery slope for the AMA.
The train has already left the AI station – ophthalmologists are using AI to improve the early identification of patients with diabetic retinopathy and health care systems are utilizing AI principles to reduce sepsis in patients coming into Emergency Departments. It is important for the AMA to assist physicians in understanding the applications of AI to the day-to-day practice of medicine, and protect physicians from the pitfalls inherent when they work in a health care system that is driven by technological innovation.
Please advise the AMA if you are experiencing problems in your daily work that are directly related to the implementation of AI. The AMA is also looking for physician experts in this rapidly developing new frontier for most of us. You may contact me as the Chair of the Council on Legislation with any of your concerns.
Members of the AMA, national medical specialty societies, and state medical associations have extensive expertise in patient care and insight that is fundamental to practice advances. It is essential to recognize the physician’s central role as health care AI is developed and implemented.
AMA policy therefore is to identify areas of medical practice where AI systems would advance the quadruple aim; leverage existing expertise to ensure clinical validation and assessment of clinical applications of AI systems by medical experts; outline new professional roles and capacities required to aid and guide health care AI systems; and develop practice guidelines for clinical applications of AI systems.
It is likewise imperative to ensure that the perspectives of physicians are heard as federal and state policy is developed. Therefore, there should be federal and state interagency collaboration with participation of the physician community and other stakeholders in order to advance the broader infrastructural capabilities and requirements necessary for AI solutions in health care to be sufficiently inclusive to benefit all patients, physicians, and other health care stakeholders.
AI has the potential to create an evolutionary shift in health care. That being said, AI in health care is still in its infancy, and there are many safety, regulatory, and ethical questions that need to be addressed as the area evolves.
I believe physician organizations have a role in creating policies and principles to ensure patient safety, as well as an educational role so that physicians are aware of advances in health care AI. This can be achieved by developing rigorous standards and guidelines, advocating for their adoption by regulatory bodies, and working with industry and developers to adopt these principles.
The use of AI will require a paradigm shift for physicians. They will need to adjust their receptivity to machine-recommended learning or clinical actions. While practicing physicians will receive on-the-job training in the practical uses of AI, a parallel process of education will also need to begin early in medical education. Practical exposure alone will not necessarily teach physicians and medical students how the technology works or how to evaluate its applicability, appropriateness, and effectiveness with respect to patient care. This is a role that organized medicine can take in the education of physicians. We can help to explain in broad terms what machine learning/AI is and how to ask the right questions about AI technology to critically assess its appropriateness in the clinical setting.
Organized medicine also has to look at the ethics of AI in medicine. Industry’s primary focus is to innovate, and it may not see the ethical dilemmas that may be encountered in the clinical setting. Medical societies can play a big role in safeguarding against bias; avoiding the introduction or exacerbation of health care disparities; advocating for equitable use of and access to health care AI; and protecting individual privacy interests while preserving the security and integrity of personal information.
Excellent points! Among your other superb messages, the role of organized medicine is foundational to safeguard the ethics and professionalism of medical practice as AI is integrated. We must be champions for patient privacy and withstand forces that have less than beneficent goals in data accrual.
How will the AMA ensure that physicians are involved in the development, evaluation, and implementation of AI systems?
Multiple AMA Councils and the Board of Trustees are involved with the development of policy concerning Augmented Intelligence. It is important for AMA members to know that they can contact key AMA staff, Council members, and members of the Board of Trustees to express their thoughts about Augmented Intelligence. dtayloe@goldsboropeds.com
Change in healthcare is dramatically slower than what other industries seem to enjoy. For some applications this is appropriate, building necessary guardrails to protect patients from harm. However, when changes are implemented, there's still no guarantee that the new practice or technology will be wholly beneficial to patients and providers.
For example, in the AMA’s 2016 digital health study, 75% of physicians reported that EHRs increased practice costs, and many reported reduced productivity. As discouraging as some of the reporting around EHRs has been (see: Fortune’s long-form piece on medical records), 85% of all physicians nonetheless believe digital health solutions give them an advantage in caring for patients. Furthermore, 89% of physicians want to be either consulted on or responsible for the adoption of digital health solutions into their practices, as opposed to just being passively informed. This goes to show that physicians have faith in what new technologies can bring and actively desire to be involved in the process.
The AI industry needs to look to physicians for guidance at all stages of the process. Dr. Robert Pearl, when he was CEO of The Permanente Medical Group, penned an article on the barriers to adopting new technologies, mentioning that many solutions don’t address the "real" problem, slow down doctors, feel impersonal, and don't seem worth the cost.
By including physicians from the earliest stages of the design process, innovators can build systems that better define and address the “real” problems, fit efficiently within the doctor’s workflow, build trustworthy processes for using and sharing data, and subsequently demonstrate a stronger value proposition for other physicians to invest in. The AMA can play a critical role in bridging the gap, thus accelerating long-overdue reforms and improvements to clinical practice.
The AMA has worked hard to reach out to different organizations and companies developing and using AI, serving as a resource not only in terms of principles for developing and implementing AI but also as a conduit to its vast physician network.
Unfortunately, many AI systems for health care are created in a vacuum without the input of a practicing physician. While the basis of the idea may be ground-breaking, its acceptance in practice may be challenging. With the input of a practicing physician, developers can gain insight into how to not only improve care for patients but also design a system that works within the workflow of medical practice. As they market the AI system, they will have a physician's perspective on the explainability and trustworthiness of the system, which can help with confidence in the product and adoption.
The AMA has created an innovation ecosystem, which includes Health2047 and MATTER, that has been helping to ensure innovations in medicine are evidence-based, validated, actionable, and strengthen the patient-doctor relationship.
What are the different types of applications of AI systems in health care? Is this addressed in the AMA's reports and policy? Are there certain applications that will impact the physician’s role more so than others? And, how is this addressed in the reports and policy?
This is an excellent question. An accepted set of definitions, or taxonomy, is essential for meaningful discussion about AI. Too often, stakeholders mention different applications and uses of AI systems without clarifying which type of application they mean. For example, some AI systems are used for research, business operations, population health, patient support tools, and clinical care, including screening, diagnosis, and treatment. Obviously, these all pose distinctly different risks to patients’ health.
In addition, the data used to teach machine learning systems in particular have to be carefully considered for appropriateness and types of bias, regardless of application. Clearly, the immediate risk of harm for some clinical applications warrants heightened vigilance.
The AMA Council on LRPD primer provides a useful glossary including Machine Learning, Deep Learning, Cognitive Computing, and Natural Language Processing.
Reports from the AMA Board, adopted as policy in 2018 and 2019, and from the Council on Medical Education adopted as policy in 2019, address how AI systems will and should impact health care including physicians and medical students. The policy provisions address varied applications and identify that the intended use (application) of the AI system is one of the factors that must be evaluated to assess risk.
A foundational principle for AMA policies is that AI systems should enhance the patient experience of care and outcomes, improve population health, reduce overall costs for the health care system while increasing value, and support the professional satisfaction of physicians and the health care team. This pertains regardless of AI application. This speaks to the need, for example, for very creative and new ways to handle health care administration, care coordination, and population health. To read more including the reports and policy please see here: augmented-intelligence-ai
This question can be exciting purely because of how imaginative some of the answers can be, but the reality is that there will be so many applications of this technology that we just cannot conceive of them at this time, in the same way that mobile telephone developers never could have predicted how modern cell phones would be used to check home security cameras, pay bills, watch movies, and serve as a walking encyclopedia of knowledge.
To help in this exercise, though, it helps to approach the technology using the definitions or categorizations that Dr. Heine appropriately described above. For example, AI can be divided into two categories. One would be machine learning, which looks through structured data such as labs, medications, demographics, and other numerical or categorical values. In health care, ML techniques applied to this data could assist in determining aggregate risks for disease incidence and outcomes, and then personalize those risk assessments for individual patients, as in the sketch below.
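To make the machine-learning category concrete, here is a minimal sketch in Python, assuming a tabular dataset of labs and demographics. The features, coefficients, and data are entirely synthetic and hypothetical; it illustrates the aggregate-risk-then-personalize idea, not a validated clinical model.

```python
# Minimal sketch: a machine-learning risk model over structured clinical data.
# Columns and outcomes are synthetic; any real model would require clinical
# validation, bias auditing, and regulatory review before use.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical structured features: age, HbA1c, systolic BP, prior admissions.
X = np.column_stack([
    rng.normal(60, 12, n),    # age (years)
    rng.normal(6.5, 1.2, n),  # HbA1c (%)
    rng.normal(130, 18, n),   # systolic blood pressure (mmHg)
    rng.poisson(0.5, n),      # admissions in the past year
])
# Synthetic outcome loosely tied to the features, for illustration only.
logit = 0.03 * (X[:, 0] - 60) + 0.5 * (X[:, 1] - 6.5) + 0.02 * (X[:, 2] - 130)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Aggregate discrimination, then a personalized estimate for one patient.
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
patient = np.array([[72, 8.1, 145, 2]])
print("predicted risk:", model.predict_proba(patient)[0, 1])
```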
A second category would be natural language processing, which can study unstructured data such as the dozens of clinical notes written every day for each hospital patient, or imaging produced from X-rays, CT scans, and more. By using NLP to convert this information into new categories of machine-readable structured data, we will find ourselves with augmented diagnostic capabilities from imaging sources, programs that draft initial discharge summaries on their own, and perhaps even tools that scan the entirety of medical literature to provide each doctor with the most up-to-date summaries of medical knowledge on their phones. The latter tools for documentation might give physicians many hours back in the week for more patient care! A toy illustration of this structuring step follows.
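Here is a toy sketch of that NLP idea: converting a fragment of unstructured note text into machine-readable structured fields. The note and patterns are hypothetical, and production clinical NLP relies on trained language models rather than hand-written rules, but the input/output shape is the same.

```python
# Minimal sketch: turning unstructured note text into structured fields with
# simple pattern matching. The note text and patterns are hypothetical.
import re

note = (
    "72 y/o male with type 2 diabetes. HbA1c 8.1% today. "
    "BP 145/90. Denies chest pain. Continue metformin 1000 mg BID."
)

structured = {
    # Capture lab/vital values with regular expressions.
    "hba1c_pct": float(re.search(r"HbA1c\s+([\d.]+)%", note).group(1)),
    "systolic_bp": int(re.search(r"BP\s+(\d+)/\d+", note).group(1)),
    # A negation-aware symptom flag: "denies chest pain" maps to False.
    "chest_pain": not re.search(r"[Dd]enies[^.]*chest pain", note),
}

print(structured)
# {'hba1c_pct': 8.1, 'systolic_bp': 145, 'chest_pain': False}
```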
These solutions don't take as much imagination to think of as they will take technical expertise to build, but this is a helpful starting point when considering the various self-acclaimed Star-Trek-esque tricorders that will be hitting the news over the coming years.
Our Report on AI provides another framework with which we can explore possible applications! As quoted below:
Broadly speaking, AI systems can be used in many areas of health care, including, but not limited to:
(1) research;
(2) education and workforce professional development;
(3) finance, business processes, and health administration;
(4) tools and services that improve medical practice, e.g., cybersecurity;
(5) population health and public health;
(6) patient and caregiver engagement and prevention; and
(7) clinical care, e.g., clinical decision support or autonomous diagnostic system.
Furthermore, when used in the foregoing areas, AI systems can function to automate repetitive and time-intensive tasks, improve communication and interactions, and enhance decision-making, which improves efficiency and accuracy.
Do the AMA’s Board reports and policies consider bias, equity, and the risks associated with AI systems? Could you elaborate?
In its report from the 2018 Annual Meeting of the House of Delegates, the Board of Trustees explicitly states the AMA strives to "Promote development of thoughtfully designed, high-quality, clinically validated health care AI...identifies and takes steps to address bias and avoids introducing or exacerbating health care disparities including when testing or deploying new AI tools on vulnerable populations." As one can imagine, an AI system has the potential to encode implicit biases within its algorithm as it continues to "learn". Left unchecked, these biases may lead to an exacerbation of disparities, which is why it's crucial to maintain the physician's role as the steward of patient care. A great example of this is found in the latest issue of the Journal of Ethics (AMA J Ethics. 2019;21(2):E167-179), where Chen et al showed that application of machine learning algorithms in psychiatric and ICU care may NOT provide equally accurate predictions of outcomes across race, gender, or socioeconomic status. More broadly, the risks go beyond bias and encompass a broader ethical framework that includes privacy/confidentiality, patient autonomy, and informed consent.
The Board report with policy adopted in 2018 notes the need to guard against bias. The report recognizes that data can be incomplete, can include erroneous information, and is generally biased in some manner. It further states it “is imperative to disclose and provide means to address AI system bias in order to avoid, among other unintended outcomes, exacerbating health disparities and other inequities.” The report highlights that AI systems that are not properly designed, developed, validated and deployed have significant associated risks.
The Board report with policy adopted in 2019 further expounds on risk and bias. It recognizes the particular concern with Machine Learning (ML). “AI systems utilizing ML present pronounced risk of bias. Physicians, health systems, developers, or regulators may not be in a position to understand the risks due to black-box systems due to design or for proprietary reasons.”
A concern with bias, equity, and risk permeates the report. Two of the policy provisions that exemplify this focus include the following:
“Oversight and regulation of health care AI systems must be based on risk of harm and benefit accounting for a host of factors, including but not limited to: intended and reasonably expected use(s); evidence of safety, efficacy, and equity including addressing bias; AI system methods; level of automation; transparency; and, conditions of deployment.”
“Payment and coverage for all health care AI systems must be conditioned on complying with all appropriate federal and state laws and regulations, including, but not limited to those governing patient safety, efficacy, equity, truthful claims, privacy, and security as well as state medical practice and licensure laws.”
What are some of the key policies that the House of Delegates adopted that you would like to highlight? Are there additional priorities that need to be explored or where additional policy may be needed?
Great question! Two policies adopted by the House of Delegates stand out to me, and I hope they will continue to receive further attention.
The first advocates that "Payment and coverage for health care AI systems intended for clinical care must be conditioned on (a) clinical validation; (b) alignment with clinical decision-making that is familiar to physicians; and (c) high quality clinical evidence."
There has already been some discussion in the AMA's Board Report on AI and the PIN regarding the challenges to validation that are unique to machine learning-based solutions. The developers of IDx-DR are quoted as emphasizing the importance of having "minimum requirements for AI system validation, including human factors validation; requirements for addressing age, racial, and ethnic bias in the design; and validation of the AI system", but this is only the start. Some tools exist to assist in evaluating the fairness and bias of ML models, such as IBM's AI Fairness 360 and Google's What-If Tool (see the sketch below), but as our understanding of bias and of best practices to identify and manage "high quality" evidence evolves, so too must future iterations of policy. The AMA must work with stakeholders to guide physicians, researchers, and entrepreneurs to interrogate our perceived best practices and build an accessible path forward that can facilitate the translation of clinical data science to real-world improvements in clinical care.
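To make the bias-evaluation point concrete, here is a minimal sketch of the kind of group-fairness check that toolkits like AI Fairness 360 and the What-If Tool automate at scale. The data are synthetic, the group labels and 0.6/0.4 prediction rates are contrived to show a disparity, and the four-fifths threshold is a common rule of thumb rather than a regulatory requirement.

```python
# Minimal sketch of a group-fairness audit on a model's predictions.
# All data are synthetic; real audits use real outcomes and protected groups.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)      # 0 = unprivileged, 1 = privileged
y_true = rng.integers(0, 2, n)     # hypothetical true outcomes
# A deliberately biased model: more positive predictions for group 1.
y_pred = (rng.random(n) < np.where(group == 1, 0.6, 0.4)).astype(int)

rate_unpriv = y_pred[group == 0].mean()
rate_priv = y_pred[group == 1].mean()

# Statistical parity difference: 0.0 is parity; disparate impact: 1.0 is parity.
print("statistical parity difference:", rate_unpriv - rate_priv)
print("disparate impact ratio:", rate_unpriv / rate_priv)
if rate_unpriv / rate_priv < 0.8:
    print("warning: fails the four-fifths rule of thumb")

# Per-group accuracy, echoing Chen et al's point that predictions may not be
# equally accurate across race, gender, or socioeconomic status.
for g in (0, 1):
    acc = (y_pred[group == g] == y_true[group == g]).mean()
    print(f"group {g} accuracy: {acc:.3f}")
```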
The second policy in mind advocates that "Payment and coverage for health care AI systems must (a) be informed by real world workflow and human-centered design principles; (b) enable physicians to prepare for and transition to new care delivery models; (c) support effective communication and engagement between patients, physicians, and the health care team; (d) seamlessly integrate clinical, administrative, and population health management functions into workflow; and (e) seek end-user feedback to support iterative product improvement."
This statement is so important for developers to see and understand because we need to have AI tools designed in a manner that appreciates the entire spectrum of clinical care. This is in contrast to EHRs, which specifically supported billing practices and where patient care was an afterthought. For example, when examining an AI tool for diagnosing a disease, I would want to ask: Is the tool easy to learn and use? Can it read information from my existing systems, and then write its impressions back into my system so that I can stay in one environment? To use it, will I need to add more steps to my daily routine in a manner that will distract me from my patients? If I have issues, can I trust that my concerns will reach the appropriate desk in a timely fashion so that patient care is not interrupted? These are just a few of the surface-level questions that emphasize how coverage of tools needs to be predicated on whether the tools support not just one aspect of clinical care but demonstrate an awareness of the full care workflow.
The AMA adopted a key policy that AI should be designed to enhance human intelligence and the patient-physician relationship rather than replace it.
Ideally, AI will be able to take over many of the mundane or low-value tasks that physicians need to perform, including clinical documentation, billing, and scheduling. I’m sure many physicians (myself included) would welcome relinquishing these responsibilities to another entity. This would allow physicians to spend more time with their patients—the reason we went into medicine in the first place.
The use of AI can also augment the abilities of physicians in its role as a clinical decision support tool and in diagnosis. While AI may have the ability to deliver a “more accurate diagnosis” in the future, how that diagnosis fits into a patient’s medical condition and life cannot easily be digested by an algorithm. We as physicians will have the opportunity to determine how the information given to us by the “AI consult” fits into the big picture. The patient-physician relationship will be enhanced by allowing more time to be spent with our patients, building meaningful relationships, providing context for the diagnosis, and offering empathy.
Thanks to everyone for joining the discussion. What prompted the AMA to begin developing policy on health care AI? There is much activity at the federal level. Are federal agencies or the Administration involved in considering AI policies, programs, or initiatives? Has Congress been involved in advancing policy considerations related to AI? Finally, are other interested parties or stakeholders developing policy positions on AI?
When AMA Councils performed environmental scans in 2016, we noted that the Administration had convened roundtables on AI with a broad number of stakeholders but very few physicians, even though health care applications were considered. The current Administration has issued two executive orders on the topic, and there has been significant federal agency activity. This provided further impetus for the Council on Legislation, other leadership Councils, the Board of Trustees, and the House of Delegates to engage and ensure that the physician perspective was part of this policy discussion.
There is now an AI caucus in the U.S. Senate and another in the U.S. House of Representatives, along with ongoing federal agency action, and consideration of AI by the Federation of State Medical Boards.
AMA physician leaders have considered the looming challenges this nation faces with our aging population and health care workforce shortages, and how low birth rates will reduce the resources available to fund health care services. AI may be an avenue for new delivery models that leverage technological innovations with the potential to expand health care capacity, promote prevention models, and reduce costs.
The AMA has been very forward thinking in ensuring that the health care ecosystem promotes development of safe, responsible, and equitable AI. As an AI practitioner within an organization making significant investment in next generation technologies, the AMA's efforts remind me of the famous saying "The future is already here - it's just not evenly distributed." The AMA correctly identified that development of AI technologies was occurring in isolated pockets of healthcare. Without a concerted effort to establish policies and practices that promote the diffusion of AI technologies across physician practices, many Americans will not benefit from the technologies.
I really appreciate the quote by William Gibson of "The future is already here - it's just not evenly distributed." I think this is a spot-on starting point for this conversation.
As Dr. Heine mentioned, the US Government has started building a national AI strategy, but stakeholders in the US healthcare ecosystem have long been planning for this technology. We have seen groups like pharmaceutical companies rushing to collect patient data (e.g., Roche's $2 billion purchase of Flatiron Health) in order to drive new product development and revenue streams. However, regulatory agencies have not yet instituted specific policies around data privacy that take these advanced algorithmic processes into consideration. Similarly, we see Apple, Google, Amazon (as part of Haven healthcare), and others driving the healthcare data race forward, often in an unchecked fashion.
Instead of waiting for the difficult problems to surface and potentially harm large patient populations, the AMA had the foresight to invest in capacity-building on AI, facilitate an informed discussion on policymaking around AI, and aggressively engage the medical student and physician workforce to begin serious considerations of the potential consequences. This will lead to more physician leaders elevating the integral voices of patients and providers in anticipating and addressing challenges as the technology proliferates.
Thank you for joining the discussion!
Second question of the day for our panelists: How does the AMA define AI? Are there any AI systems that warrant particular focus or consideration (such as machine learning) from a regulatory (safety, efficacy, equity) or payment perspective?
The AMA defines AI as augmented intelligence, a definition that focuses on AI's assistive role, emphasizing that its design, computational methods, and systems enhance human intelligence and clinical decision-making rather than replace it.
Conditions of Machine Learning deployment will require continued attention to assess safety, efficacy, and fairness. It is essential to know whether the health care AI learner algorithm is eventually locked or whether it continues to learn once deployed into clinical practice. This impacts the reliability and explainability of the output.
A prerequisite to payment for AI systems involves identifying, at minimum, the intended use of the AI system, whether it is assistive or fully autonomous, conditions required for successful deployment, and the level of regulatory oversight required to ensure patient safety and the clinical efficacy of the system. These factors, along with associated liability risk, impact costs and sustainability.
It is imperative to ensure equity and guard against AI systems’ potential to, in effect, “normalize” biases of their training sets and exacerbate healthcare disparities.
Completely agree with Marilyn re: the AMA definition. Many aspects of health care delivery will remain a human endeavor, and it's important to keep in mind the goal of augmenting front-line staff. This intentional framing stands in contrast to prior efforts to introduce technologies into clinical workflows.
There are many aspects of machine learning that require careful consideration. The FDA is developing a new regulatory framework for Software as a Medical Device: fda.gov/medical-devices/digita.... Safety and efficacy need to be demonstrated in clinical care. Unfortunately, many models and performance results are reported as "in silico" experiments on entirely retrospective data. Health care data and care delivery practices are dynamic, and there needs to be reassurance that models continue to perform well when integrated into production environments. Similarly, there will need to be close post-market surveillance of machine learning models used in clinical care, because the models will continue to be sensitive to changes in data and practice; a simple monitoring sketch follows.
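As one illustration of what such surveillance might look like in code, here is a minimal sketch, not any regulator's prescribed method, that compares the distribution of a model's risk scores at validation time against scores seen in production using the Population Stability Index (PSI). The data, the beta distributions, and the alert threshold are all hypothetical.

```python
# Minimal sketch: post-market drift monitoring for a deployed model's scores.
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a new sample."""
    # Bin over the combined range so both samples are fully covered.
    edges = np.histogram_bin_edges(np.concatenate([expected, actual]), bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(2)
baseline = rng.beta(2, 5, 10000)    # risk scores at validation time
production = rng.beta(3, 4, 10000)  # scores after data/practice drift

value = psi(baseline, production)
print(f"PSI = {value:.3f}")
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate.
if value > 0.25:
    print("alert: significant drift detected; re-validate before continued use")
```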
As noted in BOT Report 41-A-18, “combining machine learning software with the best human clinician ‘hardware’ will permit delivery of care that outperforms what either can do alone.” Other physicians have noted that “the applications of AI to ‘augment’ physicians is more realistic and broader reaching than those that portend to replace existing health care services.” Other early adopters of such systems note that “[t]he difference between artificial intelligence and augmented intelligence may seem inconsequential to some; it could quite literally make a world of difference when it comes to how we approach robotics in the decades to come ... [and] [i]t’s businesses using the technology to supplement rather than replace their employees that stand to benefit most from the further development and refinement of this technology.” In sum, whether AI systems are assistive (such as clinical decision support programs) or fully autonomous (such as software programs that provide a definitive diagnostic decision), these rapidly evolving systems should augment and scale the capabilities of physicians, the broader health care team, and patients in achieving the quadruple aim in health care.
Although AMA physician leaders considered using the term “artificial intelligence,” ultimately through the HOD process it was determined that the term augmented intelligence more accurately reflects the purpose of such systems, whether assistive or fully autonomous, because they are intended to coexist with human decision-making. As noted in the AMA's most recent report, we are entering what many experts view as the fourth industrial revolution. It is important to update terms to explicitly articulate the expectation that rapidly evolving technologies should complement and extend the work of humans. And, the AMA is not alone in this measured view of what current AI systems in health care are able to do and what the expectations should be for the future development of such systems. The term “augmented” intelligence has become the preferred term among key technology companies, other innovators, and physician AI experts.
As a primary care physician on the front lines, I am providing health care based upon proven science. Augmented Intelligence is designed to bring "big data" to the bedside, thus improving the quality of the scientific basis for what physicians do everyday. Therefore, as my colleagues have eloquently explained above, the AMA must be at the table for these discussions and help those of us at the bedside to make the best day-to-day decisions for our patients.
Thank you to our panelists for joining this week's discussion on the AMA's healthcare AI policy and the process to develop it. To kick off this discussion: how does the AMA develop policy and what is the role of the House of Delegates, the Council on Legislation, the Council on Medical Education, and the Council on Long Range Planning and Development? What roles have they played in the past two years as part of the policy development process for healthcare AI? Did the AMA engage any additional stakeholders or experts in this process?
The AMA develops policy through actions by its House of Delegates. AMA delegates/delegations, the AMA Board of Trustees, AMA Councils and AMA Sections submit resolutions and reports for consideration at each meeting of the AMA House. The Councils mentioned in the inquiry have worked to develop reports that meaningfully define healthcare AI and the role of physicians. These reports have been adopted as AMA policy.
As an example, the Council on Legislation, which is advisory to the AMA Board of Trustees, held meetings with subject matter experts over several months that led to the Board report adopted at the recent AMA House. That report provides a clear set of policy positions to ensure that the use of, and payment for, AI systems advance the quadruple aim.
To expand a bit further, the Council on Legislation began consideration of health care AI in early 2017. Other leadership councils have either developed reports, such as the CLRPD Report and the CME Report and policy, or they have been briefed on the draft reports on health care AI and in some cases provided feedback, particularly on the policy recommendations. The types of experts consulted included a number of physician AI experts as well as experts from other disciplines including computer science, law, ethics, and payment.
As Dr. Heine stated, AMA policy is established by the House of Delegates (HOD). The HOD is composed of proportional representation from every state medical association and major national medical specialty society. The HOD is convened twice a year. The AMA's Board of Trustees put forward a report and recommended policies in 2018. This initial report was considered and the recommended baseline policy was adopted by the HOD in 2018. At the same time, the HOD considered a report on AI and healthcare from the AMA's Council on Long Range Planning and Development (CLRPD). This report is the basis of a free introductory course on healthcare AI available on the AMA's EdHub (and CME is available). In 2019, the Board, with extensive input from the Council on Legislation, many AMA staff experts, and a broad cross-section of external experts, put forward another report with additional recommended policies. This was also adopted by the HOD in June of this year. In addition, the Council on Medical Education also advanced a report on healthcare AI and continuing medical education and professional development. This report was considered and the recommended policy was adopted.
Interesting question! There certainly is great appeal to dedicating additional resources to the field of augmented intelligence in health care. This is especially true since we are limited only by our imagination in anticipating what unique challenges lie ahead in the development and implementation of AI tools, in addition to the already extensive literature that has highlighted outstanding questions on data quality, clinical validation, liability, IP, and more.
For some years now, the AMA has had the foresight to convene AI working groups that have built the foundation for the AMA's current AI policy. A necessary prerequisite to establishing a dedicated AI workforce in the future would be to engage in serious capacity building within our organization, which I believe is already underway. As we educate and train our physician community on the key definitions, themes, and challenges in AI, I hope that a diverse community of passionate leaders will emerge that will help drive the next stages of our AI programming and policy, and perhaps at that point we will be able to assess our abilities to form a dedicated council with a clear scope and thorough expertise around the subject.
Would love to hear others' thoughts on this, and how our AMA can better equip itself and the community of organized medicine for the future of AI.