Augmented intelligence (AI) in health care is complex, and this space is evolving rapidly. This discussion is one of the initial steps in the AMA’s efforts toward educating physicians about the changes AI systems and methods will likely bring to clinical practice and how they can leverage them to support and achieve the quadruple aim of health care. Experts in the field will cover the basics of AI including terminology and trends, share use cases, and address key public policy issues.
Are there AI systems that are evidence-based and currently being used in practice? (Feel free to address a range of examples from different sectors and applications, including research, health administration, business operations, population health, clinical use cases, or other areas.)
What tools or information should patients and consumers have so they are able to rely on the safety, effectiveness and equity of AI systems, particularly those with clinical applications?
Before patients can trust the results, we need government agencies to help verify these systems, but we also need medical specialty societies to evaluate the evidence and publish those assessments. Those assessments will help payors determine coverage. Patients will initially rely upon what their own physician says based on their experience. My fear is that we need to disseminate this knowledge of validity to clinicians faster than has been done with the results of clinical trials in the past.
Agree with Dr. Repka's assessment. A platform could possibly present the strength of evidence in a way that is easily understandable for patients (and physicians). AI applications would be registered to that platform, and the evidence would be evaluated independently by a panel of experts, along with risks and benefits, on an ongoing basis rather than monthly or quarterly. There are pros and cons to that approach, but the alternatives may not be better and would be time-consuming (e.g., a PubMed search by physicians to independently review the evidence).
What roles do government regulators and standards setting bodies have in supporting AI use in health care? What are your thoughts on the role of the Food and Drug Administration (FDA), the Federal Trade Commission (FTC), and the National Institute of Standards and Technology (NIST)? And, in the standards arena: AAMI, ISO, IEEE, and CTA, for example?
To avoid a backlash and losing all the benefits of AI, we need to get its introduction and deployment right. That means being maximally transparent about the benefits and risks of AI.
This means that there is agreement on words and terms, and on processes to measure and validate safety, efficacy and equity of AI.
A wide spectrum of healthcare AI exists, from research AI with minimal risk of patient harm to autonomous critical care AI with high risk of patient harm. Transparency and accountability are essential, and regulation, standards, and the involvement of specific regulatory bodies should be commensurate with risk.
But it is paramount that there are standards for words, terms, and processes for validation as soon as possible.
1) I agree with Dr. Abramoff's point here of supporting the setting up of a common language to evaluate the benefits and risks of AI in an effort to optimize safety.
2) Support growing collaborations between the health care sectors (clinicians, nurses, other health care providers) and AI/tech through grants and multi-disciplinary workshops, so that the discussion is real and relevant. This could also take the form of programs at schools and universities, etc. Ensure access to those opportunities is equitably distributed to maximize access for people (and hospital systems) in different geographical areas and from different socio-economic backgrounds (see the equity/fairness discussion previously).
3) Think ahead of time about the impact of AI systems on job loss and creation. Work with institutions to create plans for re-education and retraining where the loss of jobs is expected. We need to lead this AI/data revolution in a way that is socially responsible.
The question about regulatory bodies is quite important. To inform the discussion, it is relevant to ponder the FDA’s evolving role in AI algorithm oversight.
Medical devices have been regulated by the FDA since 1976, which includes certain types of software referred to as software as a medical device (SaMD). Basically, SaMD is software used for medical purposes without being part of the hardware itself. This differs from software in a medical device (SiMD), where the software is integral to the hardware itself. The FDA has long provided oversight to SaMD informed by guidance from the International Medical Device Regulators Forum (IMDRF). AI/ML algorithms are different from more traditional CDS or diagnostic support software, a circumstance which was partially addressed in the 21st Century Cures Act.
The 21st Century Cures Act, passed in late 2016, removed certain types of clinical decision support (CDS) software from FDA oversight. Did the exclusion criteria exclude AI-enabled CDS software from FDA oversight? To be excluded from FDA oversight, software which supports or provides recommendations must enable “health professionals to independently review the basis for such recommendations that such software presents”. This independent review provision is critical in answering the question of exclusion of AI-enabled SaMD. Since AI/ML algorithms, by their nature, inform decisions based on complex algorithms (e.g., convolutional neural networks), their “black box” nature precludes the “independent review” described in the 21st Century Cures Act, which keeps them under the purview of the FDA. How that oversight will occur, especially when considering “locked” versus “continuously learning” algorithms, remains an ongoing question for which the public, developers, and physicians are seeking answers. I am running out of space to comment here, but trust that the AMA, including the DMPAG, is evaluating this evolving FDA regulatory guidance carefully.
The future of AI innovation depends on continuous learning and the availability of large and diverse data sets (a requirement to avoid bias). Currently, the necessary data is not readily available for continuous learning. The concerns about data security, liability and accountability are very real, but they need to be solved through collaboration. Can data be secured at its source while learning still happens across different data sources? It is potentially possible, and companies are looking at privacy-preserving machine learning methods. fortune.com/2018/12/27/ai-priv...
The FDA's evolving SaMD proposal for AI could help address some of this. NIST could also provide guidance to standardize secure sharing and federated learning.
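As an illustration of the privacy-preserving direction mentioned above, here is a minimal sketch of federated averaging for a simple logistic regression model, with hypothetical hospital sites and synthetic data; it shows the general idea (raw records stay local, only model weights are shared), not any specific vendor's or regulator's method:

```python
# Minimal sketch of federated averaging: each site trains on its own data locally,
# and only model weights (never patient records) are sent to a coordinating server.
# All site data below is synthetic and the setup is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few gradient steps of logistic regression on one site's private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid
        grad = X.T @ (preds - y) / len(y)           # logistic-loss gradient
        w -= lr * grad
    return w

# Three hypothetical hospital sites, each holding its own (private) dataset.
sites = [(rng.normal(size=(200, 5)), rng.integers(0, 2, 200)) for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):                                 # communication rounds
    local_weights = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_weights, axis=0)       # server averages weights only

print("Federated model weights:", np.round(global_w, 3))
```

Real deployments add protections on top of this (secure aggregation, differential privacy), but the core design choice, moving the model to the data rather than the data to the model, is what makes learning across institutions compatible with keeping data at its source.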
Incredibly important discussion. Today, the AMA submitted comments to a request for information issued by the National Institute of Standards and Technology related to health care AI standards. And, on Monday the AMA will submit comments on the FDA's Discussion Paper concerning AI/ML in the context of Software as a Medical Device and the proposed Precertification Program. There remain many important questions and decisions that need to be made to ensure the standards and regulations address key questions related to safety, efficacy, and equity. We do need to begin with consensus on essential terms like machine learning, discontinuous learning, locked models, automated or autonomous, and assistive, for example. Shared understanding of nomenclature is an important start, and cross-sectoral agreement would be ideal. Time to roll up sleeves in earnest and build a shared foundation on key policies and infrastructure needs.
federalregister.gov/documents/...
AMA policy provides that we will promote development of thoughtfully designed, high-quality, clinically validated health care AI that is designed and evaluated in keeping with best practices in user-centered design, particularly for physicians and other members of the health care team. How should user-centered design be championed, particularly in the context of IoT?
Supporting here the idea that human-centered design is key to the development of AI solutions in health care. Two of the main benefits would be:
1) optimization of engagement;
2) optimization of outcomes we intend to impact.
How do we get there? I would say from the physician's adoption standpoint:
1) Keep goals as relevant as possible to challenges currently experienced in health care. For example, a solution that would decrease the administrative burden and after-hours work physicians do on charting. Hence, trying to solve for problems that are meaningfully relevant to our current challenges will probably increase our ability to engage and focus our time and energy.
2) Go from trust to building more trust. Trust will be important for buy-in on the part of physicians (and customers too obviously). For example, when partnerships are contemplated, build relationships with partners that have better brand trust.
3) Increase transparency about flows of data for both physicians and patients.
4) Continue to increase the level of collaboration between industry and physicians so that it becomes part of the culture. For example, UCSF has developed a collaborative space between companies, investors and physicians (healthhubsf.org). The effort facilitates a much-needed discussion to support solutions that are relevant to current health care problems (in our context, more user-centered).
Lastly, I would keep in mind that any change in our pretty traditional system will require some thinking. Doing it in a systematic way, e.g., using Kotter’s 8-step change model, can probably increase the engagement and ultimate outcome.
I call it "patient design" which is very much different from patient centricity. Patient design means that whatever a company or an organization develops for patients (from processes to technologies and treatments), they should involve patients on the highest level of decision-making. This is why certain pharma companies and regulatory agencies like the FDA now have patient engagement advisory boards on the C-level of the organization.
This method ensures that the end product (even a deep learning algorithm) meets real-life patient needs and every step in the design process is also based on how patients want to use that technology.
User-centered design considers all the end-users and conditions of deployment. Addressing patient experience and patient desired outcomes is an important discussion, but it should not be conflated with broader discussion of user centered design. This is very important in the US where electronic health record systems and interfaces have high friction and have contributed to physician burn-out. The experience of those who use the technology directly and the ease or difficulty of integration have consequences for safety and efficacy as well so the FDA considers this broadly, but not all AI systems are FDA regulated. Dr. Mesko, however, you raise an issue that is important and warrants discussion. And, there are important initiatives to address this issue and these underscore the need to address patient experience and outcomes — which are part of the quadruple aim.
Increasingly, medicine is being delivered in multi-disciplinary teams both in the inpatient and outpatient settings. With physicians at the center of user-centered design, we should also strive to engage nurses, therapists, technicians, EMS providers and others along the healthcare continuum in the integration of AI systems.
The question of user-centered design is important to consider, and the points made in this discussion thread are excellent. One of the challenges faced by physicians is this: the algorithms being created are generally based on use cases, data and outcomes from other places and different populations. This is nobody’s fault; it is just the circumstance.
What if there were a way to engage the users in every step of the AI development life cycle from the beginning? That is, create tools so physicians could create algorithms based on their own data, motivated by their own clinical needs, and behind their own institutional firewalls, so their data is protected. This would require education on the nature of AI, which would involve learning by doing, and interactive tools to allow active engagement across the AI development life cycle. Done right, this becomes a true democratization of AI.
The American College of Radiology’s Data Science Institute recently announced just such an effort, the AI-LAB. It is worth checking out as a potential model for wide-spread development and sharing of new AI applications: acrdsi.org/Get-Involved/AI-LAB
Hi all, my first time in this forum. There is no AI without data, and all the clinical data is with the health systems. Do the health systems have a reason to build their own clinical practice tools? If not, who should take on this challenge, and who would pay for this work?
The incentive for a health system should be better care, fewer errors and reduced costs, hopefully improving their overall financial and quality pictures. However, I assume many entities, especially in image analysis, will rely on new models of reimbursement that recognize they are providing an improved service. A problem that is evident to all is that if you detect more diabetic retinopathy or other eye diseases at earlier time points, yet in many more people, there will be increased costs in the earlier years and savings into the future. The health care system needs to be able to translate that backwards to pay the innovator for their efforts toward improving care. Another problem one can anticipate is that there will be many strategies tried, and not all will accomplish the goals of better quality and reduced costs, so it is not a simple thing to reward AI just because it is AI.
One technical (non-clinical) leader of a prominent health system I spoke with described the increasing war for talent. How do health systems attract computer science and engineering talent capable of developing and deploying increasingly sophisticated applications, especially ones that may employ AI methods? Like startups and big tech, health systems may get caught up in a race to have a prominent "VP, Engineering" with enough name recognition to attract younger developers and engineers into this atypical setting and career track.
Patient data is the much-needed fuel for healthcare AI training datasets. How do patient rights need to evolve and adapt? Issues include, but are not limited to, updated consent, “Digital HeLa,” privacy, accessibility, confidentiality, re-identification of de-identified datasets by linking to publicly available or for-sale datasets, and more. What guidance should clinicians give their patients?
This is an amazing question raising many important issues that are not only related to augmented intelligence but to the cultural transformation of healthcare we call "digital health".
As patients bring their own data and the parameters they measure with their technologies to the medical practice, the first issue is how to protect that data and how to integrate it into medical records in a way that patient rights remain intact. I've seen a good example of how Estonia has been dealing with this.
The second issue is what happens when we start using A.I. on the data patients bring to the table and it results in advanced analytics and clinical outcomes even when the patient is not present. Who has the right to decide which parts of their data can be further analyzed without their consent?
The third issue is patients having access to augmented intelligence themselves without any medical supervision. When I can upload my genome sequencing data, along with some lifestyle parameters, to an online repository where I receive advanced analyses, what happens if I bring that to my medical professional? Who has the responsibility when making decisions using such data?
E-Patient Dave deBronkart started a movement about a decade ago under the name "Give me my damn data" so that patients would own the data generated by and about them. boston.com/news/nation/washing...
That could be a good first step towards creating new patient rights in the age of augmented intelligence.
This is such an important question and Dr. Mesko’s points are excellent. Rock Health, in a 2017 national survey of 4,000 adults, found that nearly a quarter use some form of wearable device (rockhealth.com/reports/healthc...). At the same time, insurance companies are increasingly offering rewards based on achieving fitness/health goals documented by these devices (jamanetwork.com/journals/jama/...).
Granted, these examples are only a subset of the growing patient data available. But, these trends highlight a simple fact: the volume and potential uses of this data show no sign of slowing. Which highlights the importance of enabling protections for those who provide the data, while not stifling the innovation and improved patient care which this data may enable.
In a separate discussion thread, I mention the role of HIPAA in this space, and the potential need to revisit how those rules may or may not apply in the era of big data. As we speak, such updates are being discussed by the Senate HELP committee as part of a broader healthcare bill focused on transparency (help.senate.gov/imo/media/doc/...). Specifically, Section 503 of the bill would call for a “GAO study on the privacy and security risks of electronic transmission of individually identifiable health information to and from entities not covered by HIPAA”. The draft even refers to recent developments in the use of APIs to access personal, identifiable patient data.
If this legislation advances, the physician community will be an important voice to inform the GAO study and provide meaningful guidance in response.
There are several key elements that need to be thought of when discussing the increasing involvement of patients in the use of their data.
1) Security and privacy: Risk of data breaches and data re-identification;
2) Trust (or mistrust) of the entity collecting and storing the data;
3) Right to ownership vs. access. This becomes even more complicated when "raw" data becomes curated through an EHR, radiological testing or complex bloodwork (raw data ==> quality data). One can also consider that the patient has paid a cost to the organization for the service provided, with one of the end results (data) being a product of this service paid for by the patient.
4) Laws are changing as previously discussed. Europe's GDPR, California Consumer Privacy Act- this is an ongoing conversation.
5) Data has an economic value and can be monetized.
With that in mind, Dr. Mesko mentioned some trends towards "Give me my damn data". In fact, some companies are trying to address this already:
--CoverUs (coverus.health) brings the conversation back to the patient with regards to their health data. See video: youtube.com/watch?v=d3IjAkl1G7...
--Nebula Genomics (nebula.org/#/): genome analysis WHILE keeping consumers owners of their data, with the ability to rent to pharma.
I expect more will come in this space.
The AMA House of Delegates will consider a report that, among other essential principles, highlights the importance of evidence of safety, efficacy, and equity including addressing bias in healthcare AI. What steps are needed to ensure equity in healthcare AI?
Start with a Healthcare Equity Index to measure the access, journey and outcomes for the most privileged and the least privileged to unpack the inequity. Segment populations by sex, race, age, SES, urban/rural residency, hierarchical-mindset, et al. to understand the causes (unrepresentative training data sets, developer biases, etc.) and address them. This will take pre-emptive, thoughtful planning and engagement of a wide range of stakeholders in the ecosystem to define and deliver. And the proverbial ounce of prevention will be less than the pound of cure (or unintended consequences and their costs).
Adding here to Sonoo's great response, I would also think about educating the workforce (physicians, leaders, coders, analysts) on issues of equity and diversity. The processes may subsequently be less biased themselves. At UCSF, for example, we've had more opportunities for training related to diversity, equity, and inclusion (differencesmatter.ucsf.edu/div...). These trainings increase awareness about the importance of such issues in health care. With a collective social understanding of the importance of equity, diversity and inclusion in healthcare, that understanding may be more likely to make it down to the machine learning algorithms.
The concept of Equity for AI is linked to Safety and Efficacy, and all three (Safety, Efficacy and Equity) need accountability and transparency. The concept of AI equity emphasizes that the AI needs to be Safe and Efficacious for the vast majority of patients, not just a specific subset of patients, including across races, ethnicities, sexes, and ages.
And equity, as well as safety and efficacy, needs to be safeguarded during the design, development, validation, and clinical use of AI.
Let me explain with an example of a diagnostic AI: safety can be measured with 'sensitivity', how many patients with disease it diagnoses correctly; efficacy with 'specificity', how many patients without disease ('normals') it diagnoses correctly; and equity with 'diagnosability', how many patients it gives a valid diagnostic result - rather than 'don't know' - stratified by race, ethnicity, sex, age and any other relevant group characteristic.
If you want a diagnostic AI that is 100% sensitive, just have it always output a 'diseased' diagnosis for any patient: the AI is 100% sensitive but obviously useless (specificity 0%). Similarly, if you want a diagnostic AI that is 100% specific, just have it always output a 'normal' diagnosis for any patient. The challenge has always been to have an AI with both high sensitivity and high specificity.
Equity adds a third requirement to the balance: not only do we want an AI with high sensitivity and specificity, we also want it to work on the vast majority of patients.
There are many ways to address equity for AI, but requiring the three principles of safety, efficacy and equity ensures it is safeguarded.
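To make these three measures concrete, here is a minimal sketch (hypothetical labels and group names, not any particular product's code) of how sensitivity, specificity, and per-group diagnosability could be computed from a diagnostic AI's outputs, where an output of None stands for "don't know":

```python
# Minimal sketch of the three metrics described above, on hypothetical data.
# AI output: 1 = disease, 0 = no disease, None = "don't know" (no valid result).
from collections import defaultdict

def evaluate(records):
    """records: list of (truth, ai_output, group) tuples; all names are hypothetical."""
    tp = fn = tn = fp = 0
    diag = defaultdict(lambda: [0, 0])            # group -> [diagnosable, total]
    for truth, out, group in records:
        diag[group][1] += 1
        if out is None:                            # AI declined to give a result
            continue
        diag[group][0] += 1
        if truth == 1:
            if out == 1:
                tp += 1
            else:
                fn += 1
        else:
            if out == 0:
                tn += 1
            else:
                fp += 1
    sensitivity = tp / (tp + fn)                   # safety: diseased patients caught
    specificity = tn / (tn + fp)                   # efficacy: normals called normal
    diagnosability = {g: d / n for g, (d, n) in diag.items()}  # equity, per group
    return sensitivity, specificity, diagnosability

example = [(1, 1, "A"), (1, 0, "A"), (0, 0, "A"), (0, None, "B"), (1, 1, "B"), (0, 0, "B")]
print(evaluate(example))   # approx. (0.67 sensitivity, 1.0 specificity, {'A': 1.0, 'B': 0.67})
```

A degenerate AI that always outputs 'diseased' would score sensitivity 1.0 but specificity 0.0 here, exactly the trade-off described above, and the per-group diagnosability exposes whether a system quietly refuses to give results for certain populations.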
Are there AI systems and/or methods that are evidence-based and currently being used in practice?
Hi Ashley. I believe the question can be answered by looking at the type of services provided by the company or AI system. We can categorize different services such as care management, population health, chronic disease management, lifestyle modification tool (e.g. smoking cessation), specific treatment-related, wearables and IoT, etc. There are players within each of these categories that use AI or are looking to use AI as part of their solutions. When it comes to a specific role in disease management and lifestyle modification tools, for example, we are seeing more and more peer-reviewed publications. Hence, players recognize that scientific evidence will be key to getting adoption by physicians. A report published by IQVIA a few years ago discussed the state of evidence of several digital health companies (p.31-32 more specifically). We may expect similar trends with AI although some of it will obviously remain proprietary. Then it comes back to the question on the previous discussions- what becomes the evidence or source of truth we should go by? Will new health care players adapt to our system of peer-reviewed publications and clinical guidelines or will we also have to adapt our thinking around these solutions?
I am most excited by autonomous AI because of its potential for increasing access, lowering cost, and improving quality - as per the AMA's quadruple aim.
There is currently one autonomous AI that was authorized by the FDA after a preregistered clinical trial of its safety, efficacy and equity, and it is used in clinical practice around the US. NPR recently had exciting coverage of a go-live of the AI system in New Orleans, LA, covering how patients, providers, and even the FDA experienced autonomous AI.
Transcript and coverage can be found on their website:
npr.org/sections/health-shots/...
There are many papers describing how effective machine and deep learning algorithms can be in diagnosing certain conditions. The examples with the highest number of evidence-based papers are in radiology, pathology, oncology and ophthalmology.
Eric Topol recently published a good review of some of the established methods and applications in Nature Medicine: nature.com/articles/s41591-018...
That gives a good overview.
In summary, it seems that most papers use machine learning algorithms and usually data that can be obtained easily, can be annotated properly by medical professionals and can be generalized. The best examples therefore include X-ray and CT images, tissue slides and retina scans.
In addition to the other responses, one that is worth mentioning is something that Google published last year called "Scalable and accurate deep learning with electronic health records". Here is the link: nature.com/articles/s41746-018...
They came up with a novel approach to use a combination of natural language processing (NLP) and deep learning techniques on EHR data and achieved an AUROC of 95% for inpatient mortality prediction, 86% for length-of-stay prediction, 90% for discharge diagnosis, and 77% for 30-day readmission. While these results are great, this is a retrospective analysis using historic data, and as far as we know, it is not yet used in clinical practice. If this is clinically adopted, it can open up opportunities to predict other clinically significant events that can improve the quality of care at a health system.
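For readers less familiar with the metric, here is a minimal sketch (hypothetical predictions, not the paper's data or code) of how an AUROC figure like those above is computed, assuming scikit-learn is available:

```python
# Minimal sketch of how an AUROC figure like "95% for inpatient mortality" is computed.
# Hypothetical predicted risks and outcomes -- not data or code from the cited paper.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 1, 0, 1]   # 1 = patient died during the admission
y_score = [0.05, 0.20, 0.80, 0.15, 0.65, 0.90, 0.70, 0.60]   # model's predicted risk

# AUROC is the probability that a randomly chosen positive case receives a higher
# risk score than a randomly chosen negative case; 0.5 is chance, 1.0 is perfect.
print(f"AUROC: {roc_auc_score(y_true, y_score):.2f}")   # -> 0.88 for this toy example
```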
How do we address the various stages where bias is introduced into AI systems (ideation, design, development, validation, and deployment), and how do we specifically advance equity and fairness? Training data, validation data, model structure, and conditions of deployment are just a few areas where bias occurs. What role can standards play?
Carrot approach: understanding inclusivity means we target AI solutions to all segments, and thus the largest possible market share - including all sexes, genders, races, ages, physical and emotional/mental abilities, et al. Stick approach: we deliver on the report cards (with consequences) we are measured on. Thus, specifically measuring Equity and Inclusion, vs. accepting it as an understood goal, should be a priority. Moving from the Quadruple Aim to a Quintuple Aim (adding Equity and Inclusion), and asking how AI tools can ensure they meet ADA (Americans with Disabilities Act) requirements, are two examples. Finally, the potential risk management/liability of discrimination is a real business concern.
Some forms of AI, like autonomous AI, are regulated by FDA, and with FDA, we developed the principles of safety, efficacy and equity, which are required during the design, development, validation, clinical trial and deployment stages.
Crucial here are common terms and nomenclature that are well understood by all physicians. Rather than technical terms, transparency and accountability for patient safety are best served by well-understood terminology. The AMA and other stakeholders are hard at work developing guidelines, labelling, taxonomy, and nomenclature that will increase transparency and accountability.
Yesterday, a New York Times opinion piece authored by a data scientist stated: “For a Longer, Healthier Life, Share Your Data,” adding: “Privacy protections are standing in the way of artificial-intelligence programs that could diagnose cancers and screen for genetic disorders.” Many in the comments section didn’t agree with the opinion. In healthcare research, the Henrietta Lacks HeLa cells story is well known, and today we have 20th century ethical guidance for human subjects research. Patients give us permission to use their EHR data for providing care, for billing and for research in academic medical centers. When EHR data is given for free or sold/leased for monetizable AI, do we run the risk of a “Digital HeLa?” What guidance should physicians give their patients when they request assistance in making this decision? More data = more diversity = less probability of a “weapon of math destruction.” But, if data is the AI fuel, should patients (or populations or public health agencies) be indirectly compensated for it?
This excellent question and comments get to the heart of the ethics of AI, or more specifically the ethics of big data.
Mittelstadt and Floridi, in their paper The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts (ncbi.nlm.nih.gov/pubmed/260024...), identify five key areas of concern: “(1) informed consent, (2) privacy (including anonymization and data protection), (3) ownership, (4) epistemology and objectivity, and (5) ‘Big Data Divides’ created between those who have or lack the necessary resources to analyze increasingly large datasets.” They further describe other areas, including the need to differentiate between “academic” and “commercial” practices, and ownership of intellectual property.
Tools exist to help, such as data use agreements, institutional review board requirements and release of information forms. But these tools were generally created before data found use in the continuously learning systems emerging today. The same is true for HIPAA, and the NYT piece referenced in the opening question states “the costs associated with sharing data for research purposes in a HIPAA-compliant way are beyond what many hospitals can justify…The fines associated with a potential data breach are also a deterrent.” The question of re-visiting HIPAA in the AI environment is a challenging one, and there will be different opinions on both sides.
Which gets us back to the physician, who will be the primary communicator with the patients affected. Documents, regulations, policies, and forms help. But patients and families will look to another human to answer their deepest questions and concerns in terms they can understand. Physicians will be the ones to fill that role. Their having the knowledge and tools to speak with empathy, compassion and consideration will be paramount to ensuring patient data leads to the benefits we all seek.
I would contemplate this issue from a short and long term perspective.
In the short term, people may look to physicians to share data information, but like other processes, that may bring a certain amount of "clinical variability", e.g., individual physicians less familiar with data practices within their organization. As pointed out by Dr. Silva, several of these tools antedate the big data revolution, leaving current systems challenged by some of the data dialogues going on elsewhere. In health care, we should have a heightened responsibility towards data transparency, particularly given the ethical implications related to health data.
For physicians to communicate organizational practices related to data, at a minimum, there should be:
1) application of best standards related to data governance;
2) clear institutional policies and rules related to data;
3) transparency towards patients, physicians and members of the organization with regards to uses and flows of data;
4) as the legal framework continues to evolve- making reference here to GDPR (eugdpr.org) and California’s Consumer Privacy Act (en.wikipedia.org/wiki/Californ...)- perhaps we should contemplate the ability of patients to opt in or opt out of the use of their data beyond pure EHR documentation. This becomes even more critical with the knowledge that de-identified data can be re-identified (jamanetwork.com/journals/jaman...). Some level of protection can obviously take place through data sharing and use agreements but concerns may remain.
This is a controversial and evolving area. Involving patients in the conversation related to the end use of their data may be essential. In the long term, the complexities of creating rules and policies at the state, national and international level in a globalized world may also call for individuals to control their own data via a personal API (application programming interface).
It's incredibly exciting even just to choose the best of the use cases that are already in practice. Here are a few:
1) The algorithm spotting DNA mutations in tumors
As genome sequencing costs significantly dropped, the genetic analysis of tumors became possible, and recently, human experts with the support of computational tools started to analyze the data to figure out what kinds of genetic changes, or mutations, occur.
To make such existing tools more precise, Personal Genome Diagnostics in Baltimore developed a new method involving machine learning that automates the tumor DNA diagnostic process and improves the accuracy of identifying mutations in cancerous tissues. Bearing that result in mind, the doctor can choose the specific targeted treatment for the patient.
2) Heart attack predicting algorithm
Researchers at the University of Nottingham in the UK created a system that scanned patients’ routine medical data and predicted which of them would have heart attacks or strokes within 10 years. When compared to the standard method of prediction based on well-established risk factors such as high blood pressure, cholesterol, age, smoking, and diabetes, the A.I. system correctly predicted the fates of 355 more patients.
3) A.I. predicting death risk among inpatients
Researchers at Stanford University trained an A.I. system to increase the number of inpatients who receive end-of-life care exactly when needed. The algorithm was trained to analyze diagnoses, prescriptions, demographics, and other factors within electronic health records during the 3-to-12-month period before a patient passed away. Once trained, the algorithm was able to flag patients in a hospital’s system who might be appropriate candidates for palliative care. When Stanford Hospital’s palliative care team assessed 50 randomly chosen patients that the algorithm had flagged as being at very high risk, the team found that all of them were appropriate to be referred.
Also, I just published this collection of use cases a few hours before you posted the question.
medicalfuturist.com/what-has-a...