Augmented Intelligence. Machine Learning. Artificial Intelligence. Deep Learning. How do we define these terms? What are their differences? Can AI gain scale in health care without a common terminology? Explore these questions and more during this AI standards discussion!
What standards-setting bodies are involved in standards development for AI systems in health care? Could you describe what your organization (or you) have been doing in the standards development space vis-à-vis AI in health care?
What are some hurdles (and solutions) to guiding computer scientists to work on the solutions we need in the care of our patients? Could we use this or another platform to collect ideas, vet them through a team of MDs, programmers, data scientists, financial/business experts, and big health systems, and create RFPs for the most clinically useful and commercially viable solutions? Said another way, could we make our clinical wish list, prioritize it based on usability and marketability, and allow AI/ML vendors to respond to those vetted requests for proposals? We ultimately need solutions that are (1) clinically usable, (2) technically doable, and (3) commercially viable. How can we move in that direction?
Are you suggesting that perhaps there be some sort of wish list that would be updated periodically? Along the lines of "Top 10 Clinical Needs for AI in 2020"?
Hi Pat - Yes, that is another way to look at the ongoing deliverable that 'health' could present to 'tech'. The process to create the wishlist will take some collaboration. AMA PIN seems like a good place to start!
A clinical wish list is a great start, since AI today is looking for problems to solve :-). We then need to further classify each item, or "pain point", by "USAGE": what is the use case, what does the user interface look like, does it blend with the EHR or is it another pop-up, etc. Then we should look at the "TECHNICAL" needs: AI algorithm, technology stack, data sources (curated, open), integration with IT, security, etc. Finally, "BUSINESS": does it make business sense to build and deploy this? What is the ROI to the health system, can it get reimbursed, does it help with reimbursement, does it help with avoiding penalties, what is the impact on quality scores/metrics, etc.?
So the wish list is a great start, but IMHO the real work starts after that :-)
The AMA welcomes the experts participating on the third panel in a series concerning health care AI. We appreciate the time and expertise of all the panelists. We will delve into the important work of each of our panelists and the contributions that they are making. However, I will kick off the discussion by level setting what our experts mean when they use the term AI. What does AI mean? In your opinion, what methods and systems are included when the term AI is used? Do the terms continuous learning systems, machine learning, adaptive systems, and batch learning mean the same thing? How are they related if they are not the same? Do we need clear and consistent terms and definitions when discussing AI as a general concept, or is it more essential to have a shared understanding of specific AI systems like machine learning?
AI is used as a general term to address machine behavior and function that exhibits the intelligence and behavior of humans. Under the umbrella of the term AI, we should include several different algorithms (e.g., machine learning, supervised learning, unsupervised learning, reinforcement learning, classification, regression, and clustering) and applications (e.g., augmented intelligence, chatbots, machine vision, data science, natural language processing, and robotics).
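To make that umbrella a bit more concrete, here is a minimal, purely illustrative sketch (Python with scikit-learn on synthetic data; not drawn from any real clinical system) contrasting two of the branches named above, supervised classification and unsupervised clustering:

```python
# Illustrative only: synthetic data, hypothetical example (not from the panel).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Two synthetic groups of points in a 2-D feature space.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # labels exist only for the supervised case

# Supervised learning: the algorithm learns from labeled examples.
classifier = KNeighborsClassifier().fit(X, y)

# Unsupervised learning: the algorithm finds structure without any labels.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(classifier.predict(X[:3]), clusterer.labels_[:3])
```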
When considering things such as standardization, it is important to have clear and consistent definitions for all terms, including machine learning. At the same time, the standardization community needs to be flexible and responsive to the rapid changes occurring in this space and to adapt its definitions as use cases and applications change.
Machine learning is an enabling technology of AI that gives systems, using algorithms, data, tools, and techniques, the ability to learn and change without being explicitly programmed with a mathematical model mapping input to output. Concepts such as continuous learning systems and batch learning represent types of machine learning.
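To illustrate that last distinction, here is a minimal sketch (scikit-learn on synthetic data, purely hypothetical) of batch learning versus continuous/online learning:

```python
# Illustrative sketch only: batch vs. continuous (online) learning.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))            # made-up features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # made-up labels

# Batch learning: fit once on a fixed dataset, then freeze the model.
batch_model = SGDClassifier(random_state=0).fit(X, y)

# Continuous (online) learning: keep updating as new data arrive. This is
# the behavior that raises the versioning and oversight questions discussed
# later in this thread.
online_model = SGDClassifier(random_state=0)
for X_chunk, y_chunk in zip(np.array_split(X, 10), np.array_split(y, 10)):
    online_model.partial_fit(X_chunk, y_chunk, classes=[0, 1])
```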
AI is a tricky term because it has been adopted and adapted by so many people for so many reasons, and the academic and philosophical components of the term AI don’t play nicely with the technical and legal components. When we talk about AI in the healthcare industry, we don’t usually mean “machines that have the ability to reason”—although that’s just as good a definition as any. We usually mean “software that can make decisions well under uncertainty.” So many software algorithms can fall under this definition!
Nearly anyone running a software algorithm can claim, in some way, to be using “AI,” so it is more and more important that the terms we use to discuss the function of the algorithm driving the software are well defined. You see it now when many organizations conflate AI and machine learning to try to cut out all the fluff of what machine learning isn’t, but in some ways that exacerbates the issue.
Problems need to be addressed when it comes to certain aspects of machine learning and other cutting-edge algorithms. Continuous learning (which is certainly different from adaptive systems or batch learning!) poses serious questions about software development, software tracking, and bug fixing, not to mention the ethical and legal questions that a forked version of a software product might create. But trying to address them all under the generic umbrella of AI restricts our ability to deal with each one properly.
I like how Kerrianne defined machine learning and noted continuous learning systems and batch learning fall under it. That’s the level of differentiation I think we really need—and that is more important in the long run.
AI is really a blanket term to cover a spectrum of approaches where computers can make predictions about the real world. As Zack notes, nearly anyone running an algorithm can claim they are doing “AI”. In the past, simple “if this, then that” statements crafted by human-based logic might have been considered AI. More recently, deep learning and reinforcement learning techniques use statistically based logic with remarkable and impactful results.
The difference today is really that the algorithms are being adapted or trained rather than explicitly programmed. By observing millions of examples these AI algorithms can highlight statistical patterns which help separate data into two or more classes—cats from dogs, hazardous from safe, disease from normal. Humans naturally perform this categorization and are encouraged in school to compare and contrast what they experience in order to better navigate the world. For example, my children were encouraged to do this in the show Sesame Street when one of the characters sang “One of these things is not like the other; one of these things doesn’t belong.” Highlighting the similarities and differences between objects in the real world is a necessary step in any general intelligence.
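To make that “trained rather than explicitly programmed” distinction concrete, here is a toy, purely hypothetical contrast (Python on synthetic data; not any real clinical model):

```python
# Toy contrast: a hand-written rule vs. a boundary learned from examples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(500, 2))                # two made-up measurements
y = (0.7 * X[:, 0] + 0.3 * X[:, 1] > 5).astype(int)  # synthetic two-class labels

# Explicitly programmed ("if this, then that"): a human picks the threshold.
def rule_based(sample):
    return 1 if sample[0] > 5 else 0

# Trained: the model infers the separating pattern from labeled examples.
learned = DecisionTreeClassifier(max_depth=3).fit(X, y)

print([rule_based(s) for s in X[:5]], learned.predict(X[:5]))
```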
As Sylvia asked, I think it is essential to have a shared understanding of AI systems like machine learning and deep learning to get a better appreciation of how these systems are “intelligent” so that we can realistically define the capabilities and limitations of what we expect from the technology. Although I don’t believe the technology currently replaces human reasoning, its ability to classify differences and similarities in a repeatable and scalable way will make AI an indispensable tool for physicians to augment reasoning.
In one sense, artificial intelligence has been part of the diagnosis process for many years, as physicians have used technology to distill patterns in data or conditions and make appropriate decisions on how best to provide safe and effective care to the patient. While ‘AI in medicine’ has garnered much attention, it is not much more than the application of very sophisticated and powerful algorithms that rapidly identify patterns or track diagnostics. As Zack points out, it’s important to cut out the fluff when one claims to be using “AI” and distill exactly what the programs are doing as definitions and standards are being considered.
The creation of common terminology and fostering an understanding of what these terms actually mean in the context of a physician-patient encounter is of the utmost importance from a regulatory point of view. State medical boards, in fulfilling their duty to protect the public, face complex regulatory challenges and patient safety concerns as they review existing regulations and standards, historically derived from the context of ‘traditional’ physician-patient encounters, and adapt definitions to care delivery models that involve advanced technologies. As with any technology, there should be parity of ethical and professional standards applied to all aspects of a physician’s practice. Varied definitions and improper use of certain terms exacerbate the difficulties regulators may have when trying to distill how the technology impacted the safety and legality of the practice of medicine.
The history of federal/state approaches to defining and regulating telemedicine may offer salient lessons on the importance of terminology and the impact the lack of common understanding may have on the integration of AI technology into the clinical setting. If there is a lack of standardization early on, there may be divergence in regulatory approaches and compliance, which may ultimately impede implementation of technology to its full potential.
I couldn't agree more with the point you make here. A divergence in how terms are used can absolutely drown new concepts in bureaucracy. Cybersecurity is a great example of this, too.
Best to decide, as accurately as possible, what we really need to be concerned about (for example: is it AI, or is it specifically continuous learning algorithms?) and use the terms consistently.
This is an extremely helpful description of the current state with regard to how the term can mean very different things to different audiences or individuals. You also outline a pathway for us to drive towards improved clarity.
This is an extremely important point, given the lack of standard terms in digital health. The term telehealth is a perfect example: federal agencies and states define the term differently, with implications for varied regulation, and it also impacts payment policies, as federal health programs and commercial payers lack a shared definition. This drives confusion and misunderstanding for all stakeholders. Having shared definitions and a process to iterate on them is important.
It's all about the algorithms that are used by the AI engine. These could be rule-based or adaptive. When it comes to what counts as AI, machine learning and deep learning clearly do. Is logistic regression AI? Yes, it is, since it helps with classification, which is key to ML. So should we then just bucket all the stuff from statistics into AI? We definitely need to understand and collectively define what AI is and what constitutes AI. This will help us build a baseline on which we can build out the right policies, regulations, payment models, clinical/evidence generation, etc.
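As a concrete (and purely hypothetical) illustration of why logistic regression ends up under the AI umbrella, here is a minimal sketch: a classical statistical method that nonetheless learns a decision boundary from data rather than being hand-coded.

```python
# Minimal, illustrative-only sketch: logistic regression learning a
# classification boundary from synthetic data (no real clinical data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))                          # synthetic predictors
y = (X @ np.array([1.0, -0.5, 0.25]) > 0).astype(int)  # synthetic outcome

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3]))  # learned probabilities, not hand-written rules
```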
Many thanks to the expert panelists for participating in this discussion; we value your insight and look forward to the ongoing conversation. Building on the importance of having a shared understanding of terminology in the AI space: can you provide examples of applications of AI systems, including the types of AI systems being used, in health care research, health care administration/business operations, population health, and patient and clinical support (screening, diagnosis, and therapeutics)?
Thanks for this question! Happy to share a few "real world" examples that I've seen, and welcome others to chime in with more:
AI is helping us diagnose—and therefore treat—disease earlier and more accurately. AI is being used to map and decode the immune system, similar to the way the human genome has been decoded, to reveal what diseases the body currently is fighting or has ever fought and enable earlier and more accurate diagnosis of disease and a better understanding of overall human health. Peter Lee, Microsoft and Adaptive Biotechnologies Announce Partnership Using AI to Decode Immune System; Diagnose, Treat Disease, Microsoft (Jan. 4, 2018), blogs.microsoft.com/blog/2018/...
AI is allowing us to deliver personalized care specific to a population group, targeting interventions based on the individual patient and disease subgroups. news.microsoft.com/transform/v...
AI is reducing costs by answering questions like “who might get sick?”, “how sick will they get?”, and “how can we optimize care for better outcomes and cost efficacy?” AI-driven prediction platform helps health systems identify population health risk, optimize clinical outcomes and operational efficiency across the care continuum. customers.microsoft.com/en-us/...
AI is empowering clinicians to intervene with patients proactively and, in one case, successfully reduced adverse events outside a hospital system’s ICU by 44%. azure.microsoft.com/en-us/reso....
AI-driven operational analytics helped reduce the incidence of hospital-induced infections by 20%. enterprise.microsoft.com/en-us...
Several dozen applications have already been FDA-cleared. In fact, the FDA has solicited comment on Software as a Medical Device (SaMD; fda.gov/media/122535/download). The initial use cases for AI were primarily in radiology, where computer-aided diagnosis has been occurring for decades. New segmentation models, based on deep learning topologies, are quite good at labeling regions of interest in a scan. For example, there are models that label the four chambers of the heart in an echocardiogram in order to estimate the ejection fraction. Other models can be used for routine screening of macular degeneration to determine which patients need more aggressive care (twitter.com/erictopol/status/1...). More recently, natural language processing models are being developed to monitor the patient-doctor visit and provide a clinical note synopsis based on the discussion, freeing the physician to spend more time with the patient and less with the keyboard. Personally, I see the technology as “putting the doctor closer to the patient” rather than “replacing the doctor”.
Many of the applications of AI in healthcare that are currently at the deployment stage can be implemented without much regulatory concern, as they are tied to workflow and predictive assessments. In short, they assist in analyzing data and provide a physician or healthcare provider with various courses of action to treat the patient.
As more complex algorithms are introduced into the clinical setting, it may be increasingly difficult for physicians to comply with core regulatory and ethical requirements established by state regulations. Additional systemic factors may influence the creation of a treatment plan, and I can envision that as AI becomes deployed in a more clinical context there may be increasing tension between the technology and control over the overall diagnosis and treatment plan.
The ability to understand how the algorithms are developed, used, and overseen will become a critical question for regulators. Regulators must be able to assess the impact AI systems had on diagnosis and treatment and ensure that the health professional is properly trained and is the one making the final decision when AI systems are used during diagnosis and treatment, and that patients are properly informed of treatment decisions.
The impact AI will have on the regulation of the practice of medicine should not be overlooked. There is great potential to use artificial intelligence as a response to many of the critiques of the regulatory process and to improve the licensure and disciplinary process. For example, AI will allow state medical boards to review and analyze key regulatory data and expand their understanding of their rules and regulations, as well as construct proactive strategies for mitigating risk by identifying licensees who are at risk. AI may also help to facilitate and expand collaboration between state medical boards and UME and GME programs, leveraging the combination of technology and collaboration to develop innovative strategies for protecting the public.
In a recent opinion piece at the Regulatory Review (theregreview.org/2019/08/12/co...), UPenn Professor Cary Coglianese cites many uses of AI in government and argues that AI can be accommodated into administrative practice under existing legal doctrine in such a way that “in the not-so-distant future, certain government benefits or licensing determinations could be made using artificial intelligence.” I would be interested in hearing from others on what areas of regulation they feel could be improved by AI and their thoughts on Professor Coglianese’s conclusion.
Expanding on Nathan's and others' examples, could some of the folks who design, develop, and validate systems elaborate on an example and walk us through: (1) What AI systems are being used? (2) What are the specific application(s)? (3) Are these assistive or autonomous systems? (4) What validation was done to ensure safety, efficacy, and equity? (5) What are the conditions of deployment? And can this system be easily transferred elsewhere, or what requirements/standards/procedures would be needed?
Eric, what are your views on the AI systems that are under FDA regulatory authority, as well as those that are being deployed by health systems? We hear a great deal in popular media about AI system clinical applications, but you have identified areas where such systems may be far more common. Are these also potentially risky if not assessed for bias, fairness, and equity?
Regulation in the interest of public safety requires the ability to understand the algorithms and how they operate, at both the product level and the practice level. Some of the issues of shared concern, not only for medical applications but for any public-facing application, relate to transparency, privacy, and bias. These are not new concerns in the regulatory sphere, but we may need to rethink how to effectively identify and address them.
The FDA has an important role in regulating some aspects of medical AI, especially if it is integrated into software used in a clinical setting. Regulation of the practice of medicine is an issue outside the FDA’s normal domain or experience and is the domain of state medical boards. I cannot comment on specific products currently under review by the FDA, but it is important to underscore the point that coordination between federal and state agencies, as well as industry, on the foundational principles being discussed in this type of forum will be increasingly important in the near-term.
As the FDA looks at ways to revise its paradigms of regulation, state regulators could play an important advisory role and provide meaningful comments during FDA review processes, identifying early in a product's life-cycle the concerns that may develop as the product is deployed into a clinical setting. Regulation in the public interest must be proactive if it is going to meet the challenges of rapidly evolving technologies.
Eric, an opportune time to ask this question. It turns out that the U.S. Administrative Conference's Chairman "is exploring the growing role that AI, such as machine learning and related techniques, is playing in federal agency adjudication, rulemaking, and other regulatory activities. A major component of this initiative will consist of a report that a team of researchers at Stanford University Law School and New York University (NYU) School of Law will deliver to the Office of the Chairman. This study will consist of multiple parts. The first part will map how federal agencies are currently using AI to make and support decisions. A second, related part will extend this map by using a sophisticated grasp of AI techniques to highlight promising potential uses of AI in federal agencies. The final part will address how these uses of AI implicate core administrative law doctrines, such as the nondelegation doctrine, arbitrary-and-capricious review, due process, and rules governing reliance on subordinates for decisions." The hope would be that everyone is using standard nomenclature and a defined set of terms, and that they would reference the work that exists and is ongoing by standards-setting bodies like IEEE, CTA, AAMI-BSI, ISO, and others, so that we are having an apples-to-apples conversation.
Thank you for the question, Sylvia. As described, AI is very broad; our organizations use data mining to inform calculators that help suggest care to providers. Most of our integration and use is focused within our EHR system, but it sometimes includes integration through other technology partnerships, like FHIR.
The Consumer Technology Association (CTA) has been working on AI standards for non-medical and medical device applications. cta.tech/News/Press-Releases/2...
The International Organization for Standardization ("ISO") is working with the International Electrotechnical Commission ("IEC") on developing more than a dozen broad AI standards across all industries (it's interesting to have a healthcare point of view in the meetings while talking to someone working on self-driving cars...) iso.org/committee/6794475.html
ISO/IEC are also starting to look at what would be needed for medical-device specific standards, and will be discussing that in the next few months.
IEEE is also working on medical-device AI standards as well. Their P2801 project is looking at quality management for datasets, and P2802 is about terminology. sagroups.ieee.org/aimdwg/
Recognizing the transformative impact AI may have on healthcare, the FSMB has taken a proactive approach to this issue.
In November 2018, the FSMB and the law firm of McDermott Will and Emery sponsored a day-long conference on artificial intelligence in health care in Washington, DC. The conference focused specifically on the current role of state medical boards, discussed how artificial intelligence can impact the expectations of patients and how physicians may use artificial intelligence in a clinical setting, and began the conversation about what future regulations may be needed to address systemic changes throughout healthcare.
Following up on the success of this program, FSMB Chair Scott Steingard, DO, created the AI Taskforce, which will lead the FSMB’s investigation of the host of complex challenges that the integration of artificial intelligence into health care presents. Unlike other committees of the FSMB, given the sheer variety of issues presented, this group will not have a specific set of policy recommendations for consideration at the 2020 FSMB House of Delegates. Instead, the charge of this taskforce is intended to be holistic: to create a forum for conversations and discoveries that will result in educational resources for state medical boards and the public in general.
Specifically, this taskforce will:
• Evaluate the intersection of care delivery models which utilize artificial intelligence with existing FSMB policies, and provide guidance to the Board of Directors on areas where modification or revision of policies may be necessary
• Create a public-facing platform to provide educational resources to state boards and the public, focusing on emerging technologies that may impact the practice of medicine and safe delivery of care
• Identify opportunities to collaborate with interested domestic and international stakeholders to develop policies and standards reflective of regulatory best practices
In Spring 2019, the OECD released its Principles on Artificial Intelligence. Boiled down, the OECD principles are (1) People first; (2) Respect of rule of law; (3) Transparency; (4) Vigilant Lifecycle Assessment; and (5) Developer and Deployer Responsibility.
Earlier this year, the European Union’s High Level Expert Group (HLEG) issued general guidelines as well. These guidelines focused on the idea of a human guarantee in a three-tiered system of oversight:
1. Human-In-the-Loop (HIL): human intervention in every decision cycle of the AI system
2. Human-On-the-Loop (HOL): human intervention during the design of the AI cycle
3. Human-In-Command (HIC): capability to oversee the overall activity of the AI system
At a recent conference where I was a panelist, French colleague Cécile Théard-Jallu posited that the objectives of the EU guidelines could be met in the following fashion.
(1) Assess the level of involvement of AI systems in diagnosis and treatment and ensure that (i) patients are informed in advance and (ii) the health professional is properly trained and is the one making the final decision
(2) Foster the exercise of a second human medical opinion at the request of a patient or a health professional (possibly through telemedicine)
(3) Establish targeted and random verification procedures for anticipating, managing and mastering options offered by AI tools by creating independent internal and external supervisory bodies
It is important to note that these guidelines are non-binding and lack the force of law. However, they are important reference points for where others are and how we may be able to approach AI related issues within the United States.
Thanks, Pat! In February, the Consumer Technology Association (CTA)® launched our Artificial Intelligence (AI) in Health Care Working Group. At its inception, the group had some 30 participants. Since launching, it has grown to include 46 organizations. We meet regularly to advance our work on definitions and characteristics of AI in health care. In 2020, we anticipate the Working Group will shift its focus to address the topic of trustworthiness.
We believe standards are critically important since they allow industry to collaborate to address challenges in a specific subject area while also promoting innovation. In some instances, standards may also help to avoid regulation by providing industry-led solutions to problems. The lack of standards could slow the adoption of AI in health care by creating barriers to adoption.
AI regulations and rules are being defined in many countries and organizations. The majority of them are not focused on healthcare per se, but they help us define a baseline from which we can derive healthcare-centric principles.
Here are some examples -
GDPR - Data sanitization, Right to explanation
G7
National Science and Technology Council (US)
Royal Statistical Society (UK)
Association of Computing Machinery
OECD
AMA
International Telecommunication Union/World Health Organization
The American College of Radiology has a program called Certify-AI which is designed to be a neutral assessment program for AI models in medical imaging (acrdsi.org/DSI-Services/Certif...). They also have similar programs, such as Assess-AI (acrdsi.org/DSI-Services/Assess...) which provides longitudinal evaluation of AI algorithms that have been already deployed.