Augmented Intelligence. Machine Learning. Artificial Intelligence. Deep Learning. How do we define these terms? What are their differences? Can AI gain scale in health care without a common terminology? Explore these questions and more during this AI standards discussion!
How should the AMA engage with standards-setting bodies?
What are you/your organization and other entities doing to promote broad engagement in the standards development process by individuals from varied backgrounds and disciplines?
At the Consumer Technology Association (CTA)®, we promote engagement both within our standards program and with the standards themselves through a number of different platforms. We provide opportunities for engagement during our semi-annual Technology & Standards Forum, where we bring together industry leaders to discuss the direction of the industry and actively work on standards development. We use communication channels such as the CTA Blog (including a feature on our AI Standards Committee Chair) and our flagship magazine i3 (including an article on our AI standards work) to share information about our ongoing work with the public. Lastly, at CES® – the world's largest and most influential tech event – we bring together stakeholders from across industries to discuss and contribute ideas as they relate to standards work. I encourage any industry professional to join us at CES 2020 this January!
cta.tech/Events/Event-List/Tec...
Should state regulators use the same standards as those developed at the federal level or are there additional/different standards they need to rely on?
How do US regulators, such as the FDA for medical devices or the Federal Trade Commission, utilize AI-related standards? Are health systems utilizing standards as they set up new AI initiatives to design, develop, validate, and deploy AI systems?
Are there existing standards to support transparency? Are there existing standards to address the varied forms of bias and potential for inequity? Are they sufficient?
Addressing the topic of transparency will be a critical issue for the growth of applications of artificial intelligence in health care. To the best of my knowledge, there are no standards addressing these topics today, but in the initial discussions of the Consumer Technology Association's (CTA)® Artificial Intelligence (AI) in Health Care Working Group, we have added all of these concepts to our consideration of the trustworthiness of AI in health care. Specifically, the Working Group has noted that the needs and requirements around transparency will be closely tied to the use case and end user of the AI in health care technology. I anticipate that discussion on this topic, including bias and inequity, will be critical as our Working Group continues to move forward in the standardization process.
Are there existing standards or needed standards related to AI systems and data, privacy, and cybersecurity? How about user-centered design? How does standard setting impact policy decisions with these issues?
We actually have done some research on using a technique called "Federated Learning" to train AI models in situations where the data is protected or sensitive in nature (such as healthcare). The basic idea is that you can train an algorithm on data across many institutions, but the data itself never leaves the individual institutions -- only the algorithm leaves. This potentially could unlock large datasets that are currently inaccessible due to legal, privacy, or technical constraints. Google uses this approach to develop the algorithms that autocomplete the text messages on your cellphone. We've also been working on methods involving homomorphic encryption, which allows us to do calculations where everything stays under the veil of encryption -- both the data and the computation remain encrypted. To get a better sense of these technologies you can see our blogs:
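To make the federated idea concrete, here is a minimal, self-contained sketch of FedAvg-style training. It uses a toy logistic regression on synthetic data -- an illustrative assumption, not actual research code. Each "institution" trains locally, and only model weights ever travel to the aggregator:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution's local training step (logistic regression via
    gradient descent). The raw data (X, y) never leaves this function."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))          # sigmoid
        grad = X.T @ (preds - y) / len(y)          # logistic-loss gradient
        w -= lr * grad
    return w

# Simulated private datasets at three institutions (never pooled).
institutions = [
    (rng.normal(size=(100, 4)), rng.integers(0, 2, 100).astype(float))
    for _ in range(3)
]

global_w = np.zeros(4)
for round_ in range(10):                           # communication rounds
    # Each site returns only its updated weights, not its data.
    local_ws = [local_update(global_w, X, y) for X, y in institutions]
    global_w = np.mean(local_ws, axis=0)           # server averages the models

print("aggregated model weights:", global_w)
```

The key property is in local_update: the raw (X, y) arrays are consumed locally, and the only thing the aggregator ever sees is the weight vector.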
Today, my organization, the Consumer Technology Association (CTA)®, announced the release of our Guiding Principles for the Privacy of Personal Health and Wellness Information. While these voluntary principles address how companies should treat all consumers' personal health and wellness information, they are particularly relevant to AI systems that will run on such data. The principles were developed by industry consensus of CTA members, including IBM, Doctor On Demand, Validic, and Humetrix, and are intended to retain flexibility in how they are implemented for differing technologies, products, and services. This flexibility will be critical when considering the growing applications of artificial intelligence in health care.
cta.tech/cta/media/Membership/...
Steering Clear of the Cliff: How Innovation Can Help Heal Our Health Care System
Morning Consult / September 12, 2019
From CTA President and CEO Gary Shapiro: Health care data is sensitive. Our nation needs a pre-emptive, technology-neutral federal privacy law for health tech to ensure consistent protections for consumers. Such a framework should be risk-based and flexible and rely on time-tested principles of transparency, consumer choice, security and heightened protections for sensitive data. Consumers will benefit from innovation only when they trust companies with their data — and tech companies will succeed only if they earn and maintain that trust.
morningconsult.com/opinions/st...
Kerry's comment about the Guiding Principles for the Privacy of Personal Health and Wellness Information reminded me: one thing I'd like to add here is that we're crossing into a world where standards built for one purpose may touch areas they never would have in the past. Privacy and cybersecurity are a good example -- there are standards that exist now for the secure development of software, the secure deployment of technology, and the security processes of an organization overall. These, of course, apply to AI as well -- but the real questions are where these standards may fall short on AI, or even whether they fall short at all!
Why are standards so important, particularly when talking about AI in health care?
What I like about standards development is that it is a collection of key stakeholders brainstorming and setting the bar for "what good looks like." Standards developers are often very engaged on key topics, because, after all, all of us are consumers of healthcare and have an interest in quality care.
A few years ago, I remember visiting my mother in the hospital, and I saw she was on one of my devices; just last year, I was in the hospital and was on a competitor's device -- both experiences gave me flashbacks to standards development meetings for those devices, and it was comforting to know that worldwide experts helped establish the quality bar for those devices.
I really like Pat's answer. One thing to add from my perspective is that standards are really our own objective way to scrutinize not just others but also ourselves. Although regulations may be frustrating at times, I've always felt that they actually give the developer more reassurance that every reasonable precaution or test has been done prior to something going to the patient bedside. Good process and good standards lead to great results.
Ketan, I am tagging Dr. Mark Sendak who is engaged in interesting work along the lines that you have raised.
There's a really interesting program developed at the Carle Illinois School of Medicine (medicine.illinois.edu/) which combines engineering classes/projects with the traditional medical school curriculum. It almost makes me want to go back to medical school! (Almost.) The nice thing is that in addition to AI training, they have a broader focus on training physicians that are technically savvy across the engineering spectrum.
What is the anticipated impact if standards vary among standard-setting bodies? Are there efforts to harmonize or organize standards development across health care and cross-sectorally? How can the recent request for information issued by the U.S. National Institute of Standards and Technology be utilized to advance harmonization of standards where appropriate, and to preserve differences in standards where those are warranted?
Hopefully the standards don't vary too much -- I often find that people can (mostly) agree on a set of good practices -- people have similar views of "what good looks like" and differences are due to unique personal experiences.
Having said that, I also notice that different standards are used for different types of products or different markets -- there are sometimes different expectations for a product in China or India than for the US or Canada. Note that I said "sometimes," as I find that even with vastly different markets, most of the content and intention is the same.
As a general principle, most standards-setting bodies strive to avoid duplication. However, I think that when we consider the applicability of standards for artificial intelligence (AI) in health care, there are so many different angles and perspectives that there is likely room for multiple standards. My organization, the Consumer Technology Association (CTA)®, strives to communicate with other standards-setting bodies and similar organizations in formal and informal ways (such as this discussion!) to identify needs and synergies that help drive our standards work. We also welcome participation from other allied associations in our standards program, so if you would like to learn more about how to get involved, please reach out (kharesign@cta.tech)!
Standards are absolutely necessary to support the variety in practice, and they will also help with alignment. It will be important to ensure the standards reflect the patient/provider mix and include smaller clinical practice types, rural practices, and practices that vary by payer. As we continue to see more focus on social determinants of health data and its impact on health, we will need to find a way to incorporate it into standards safely, to avoid bias, and to be inclusive of the different patient mixes that are affected by chronic disease complexity.
Are there existing standards that apply to AI systems? Are these sufficient? If not, in what areas should new standards be developed and why?
I am unaware of any existing standards, specific to AI, that are being utilized by state medical boards regarding the implementation and utilization of AI within healthcare. Generally, state boards have taken the position that technology is just a modality to be employed in the practice of medicine, not something that needs to be regulated differently. However, there is a recognition that, given the potential for abuse or for circumventing current regulations through the use of new technology, it may be necessary to update or modernize existing regulations.
In our early work at the FSMB, some within the medical board community have argued that the regulation of AI in healthcare is significantly behind, finding that the existing regulatory frameworks for medications and medical devices are insufficient to regulate the use of AI in healthcare.
Here are some of the areas that the FSMB AI taskforce may be studying in the coming months:
• Defining the standard of care when implementing AI
• Use, storage and sharing of medical data
• Data and algorithmic transparency
• Regulation of the software vs device
• How will adverse outcomes be measured and responsibility assigned (data source, software, device, institutional responsibility vs provider responsibility) in cases where AI is utilized?
• How will medical ethics integrate with AI? As one member of the FSMB taskforce recently commented, “Because machines are not moral agents, as these algorithms are developed and overseen, who will be responsible for the outcome of the decision-making process?”
Even outside of healthcare, there is no consensus on whether there needs to be a separate and distinct regulatory framework for artificial intelligence, or whether we should proceed with a series of use-case-specific rules that deviate from general law and policy. Martin Ford’s recent book ‘Architects of Intelligence’ explores this issue in interviews conducted with leading AI minds, and I suggest those interested check it out. book.mfordfuture.com/
Absolutely there are. Any software standard that exists applies to AI -- AI is first and foremost software. Where you might see deficiencies is in specific algorithm families.
Standards broadly do two things: they are a signal to the marketplace that a thing meets certain performance or safety requirements (“That meets our standards.”), and they allow for interoperability between devices or systems (“It uses a standard connection.”) Both of these are obviously important. In today’s digital healthcare environment, you need both to provide high quality care to patients—manufacturers use standards to signal to their clients that their equipment will help them do so.
With this in mind, I would suggest that the interoperability standards are mostly taken care of for us thanks to the underlying frameworks that exist for software and software delivery (quick shout-out to DICOM for its current work on standardizing how AI results are displayed on a workstation!). That leaves standards that signal performance and safety levels -- which will need to be tied to what the specific algorithm does, how it does it, and who uses it.
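As a small illustration of what that interoperability layer buys us, here is a sketch using the pydicom library: because the DICOM standard defines the tag names, any vendor's software can read them the same way. (The file path here is hypothetical, purely for illustration.)

```python
import pydicom

# "ct_slice.dcm" is a hypothetical file path used for illustration.
ds = pydicom.dcmread("ct_slice.dcm")

# These attribute names come from the DICOM standard, not from any one
# manufacturer -- that shared vocabulary is the interoperability guarantee.
print("Modality:    ", ds.Modality)
print("Manufacturer:", ds.Manufacturer)
print("Image size:  ", ds.pixel_array.shape)
```

An AI result written as a standard DICOM structured report can likewise be displayed by any workstation that speaks the standard, regardless of which algorithm produced it.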
The responses of Eric and Zack are fascinating and remind me of a common observation made by Pat Baird: people use the same word to mean something different. In this case, Eric, it seems like you all will explore clinical standards of care but have added in a discussion of technical standards. Whereas Zack, you are pointing out, quite rightly, that there are many technical standards that already govern software generally and software as a medical device as defined by the Food and Drug Administration. And this is what is really important for all stakeholders to understand about technical standards: they should be readily available and clear. And where certain AI systems (for example, machine learning systems) require additional standards, we need to surface what is different about software developed using ML. Did I read this right? Thoughts?
Yes I think you read it correctly. I also think that one of the challenges will be that although we know of many good practices in the development and use of ML, there will be failure modes that we didn't think of.
For example, we know how to write good software. Good isn't perfect, so we also need to think about how the software can fail. But ML can fail in unusual ways that we might not have thought about before.
One example that I read about (afcea.org/content/ai-please-ex...) was of someone who wanted an ML application that could tell the difference between a photograph of a husky and a photo of a wolf. The software seemed to be performing pretty well, but as it turns out, all of the wolf photos were on a background of snow, so the software didn't learn the difference between a husky and a wolf; it learned the difference between snow and not-snow. This wasn't due to a software bug; this wasn't due to running the software on the wrong operating system; this was due to a pattern in the training data that the software developers didn't notice...
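One simple diagnostic for this kind of shortcut learning is an occlusion test: mask one region of the image at a time and watch how the model's score moves. Here is a minimal, self-contained sketch; the "classifier" is a toy stand-in that scores image brightness, mimicking the snow shortcut -- an assumption for illustration, not the model from the article:

```python
import numpy as np

def wolf_score(image):
    """Stand-in for a trained classifier; it 'cheats' by scoring overall
    brightness (snowy backgrounds are bright), mimicking the failure."""
    return image.mean()

def occlusion_map(image, patch=8):
    """Gray out one patch at a time; a large score change means that
    region is driving the prediction."""
    h, w = image.shape
    base = wolf_score(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.5   # gray out one patch
            heat[i // patch, j // patch] = abs(base - wolf_score(occluded))
    return heat

# Toy "wolf on snow": bright background, darker animal in the center.
img = np.full((32, 32), 0.9)
img[12:20, 12:20] = 0.2

heat = occlusion_map(img)
print("importance of animal region:    ", heat[1:3, 1:3].mean().round(4))
print("importance of background region:", heat[0, :].mean().round(4))
# The background patches move the score at least as much as the animal --
# a red flag that the model is keying on snow rather than the wolf itself.
```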
As mentioned earlier in the discussion, within the Consumer Technology Association (CTA)®, we have work underway to develop horizontal definitions and characteristics for artificial intelligence (AI), as well as to address specific definitions and characteristics of AI in health care. We also have work planned in 2020 to begin to explore the impact of trustworthiness of AI in health care. Given the scope and impact of the topic of AI, there are likely going to be many additional areas of standardization that need to be explored. We also anticipate there will be a need to continually review previously published standards to ensure their continued applicability as AI grows and matures as a technology.
I think one critical engagement is not only with the standards-setting bodies, but also with the device manufacturers. In my experience, I've seen device manufacturers truly leading the way in best practices for AI in medical imaging. Not only do these companies have the expertise in AI, but they also have tried and true methods of quality system management (fda.gov/medical-devices/postma...). Getting the AMA, standards-setting bodies, and the device manufacturers in sync is essential.
As a participant in the Consumer Technology Association's (CTA)® Artificial Intelligence (AI) in Health Care Working Group, we encourage the AMA's continued active engagement in meetings and standards development. We hope this participation will bring the perspective of physicians into current work and help to identify additional opportunities and needs for standardization that will advance the application of AI in health care. We strongly believe that bringing together diverse groups, such as CTA and AMA, will help to promote innovation and the use of AI in health care.