Patient-generated health data (PGHD) is health data collected from a patient’s mobile apps and associated devices. As smartphones have become ubiquitous, health apps and wearable health devices have become increasingly popular. These technologies make it possible to measure health data continually – from heart rate measurements to data on the inhaler usage of asthma patients – in contrast to the current paradigm of discrete data collection at intermittent clinical visits. With analytical tools, PGHD can be used to determine the effectiveness of treatments, the variables affecting treatment optimality, the variables exacerbating patients’ chronic disease, and more. Further, PGHD can be monitored for measurements warranting immediate attention, alerting a patient’s healthcare team to provide potentially life-saving intervention. The benefits of incorporating and increasing the use of PGHD undoubtedly exist, but the switch toward this new paradigm poses non-trivial challenges and barriers. These include: 1) mobile phone and wearable device availability and compatibility, 2) secure integration of direct device measurements and patient-reported health data with mobile health apps, 3) secure integration of PGHD from patient devices into electronic health systems, 4) incorporation of the new PGHD into current workflows and the associated disruptions to those workflows, and 5) patient adherence. Although non-trivial, each of these barriers can be overcome, and with the significant improvements to patient health that PGHD integration promises, there is more than enough interest, need, and motivation to overcome each barrier.
Thank you everyone for a fantastic discussion thus far. We have been discussing PGHD as framed by clinical workflows, types of data, and usefulness. This is important as we work toward understanding the ways in which PGHD can be integrated into care, but how can we ensure that the data we receive are meaningful? How can we separate meaningful PGHD from error and mismeasurement?
With the growing collection of devices available, used in both clinical and consumer health settings, I am wondering what data would be most important to collect. It seems like Withings was on the right track with systemic arterial stiffness, and we certainly need to continuously monitor blood pressure; but what else would become best practice to collect?
That is a good question! Do we just focus on what is opportunistically available from things like activity trackers, or do we take a more systematic and risk-based approach to address things like hypertension, as you note? My sense is that this decision should be guided by a burden-of-suffering approach, where we focus on highly prevalent conditions like HBP, diabetes, and mild-to-moderate mood disorders, and then find ways to improve how we capture and use PGHD to manage them.
Agree with Kevin Patrick. Given that the subject is non-emergent monitoring (i.e., chronic disease monitoring of a non-emergent nature), a risk-based, burden-of-suffering approach might make sense.
Blood pressure is an important metric, as is glucose in the diabetic patient, for identifying possible sources and causes of other conditions, including altered mental status. Patient weight changes are important to understand, especially in patients with congestive heart failure.
Yet, the burden of obtaining measurements may be excessive unless there is a health coach or home care aid to assist.
Plus, monitoring is not limited to objective measurements such as blood pressure. Mood changes; pain severity, quality, and provocation; and reports of lethargy can be just as important as early indicators of onset.
It's a good time for medical device interoperability standards to be defined so that the data to be collected are mapped out for each device.
Data standards for medical device interoperability have been defined and have been available for years. The IHE PCD profiles based on HL7 are a good example of standards that have been maintained and use-case tested over many years through yearly Connectathons, run through IHE as well as HIMSS. The problem is not that standards do not exist. The problem is with adoption and the motivation of some medical device manufacturers to employ the existing standards, plus the necessary education of both health care systems and vendors in these standards.
Aside from the source identified above, non-governmental organizations such as AAMI and ECRI proffer these standards and best practices.
Information modeling to harmonize on the semantics presents a different problem to interoperability and is one that still exists, although IHE has made a bit of progress there, too.
Medical device parameter requirements vary by use case and, on occasion, by patient. The mechanism for retrieving these data can also vary by vendor and vendor device. Yet the outbound data from these devices, once processed, almost uniformly conform to a flavor of the HL7 standard, and certain electronic health record vendors are requiring adherence to the aforementioned IHE PCD profiles for medical device communication.
It is true that data exchange standards have been available. The old adage is that if you've seen one HL7 message, you've seen one HL7 message. Even with MU, MIPS, MACRA, ELR, and HAI reporting requirements, there is still variability in messaging, coding, etc., whereby we don't have interoperability, much less semantic interoperability, yet. In a number of cases it's not the device manufacturer's fault but that of the EHR, LIS, and other information systems to which the data are interfaced. Many devices don't have the capability for standardized coding so that data can be computer-processable.
I've been working on the SHIELD initiative with FDA, ONC, CDC, NLM, and CMS to address these laboratory data interoperability issues. The Laboratory In Vitro Diagnostics (LIVD) standard, recently developed in HL7 FHIR, is one of the resources for standardized encoding of laboratory results generated by IVD device vendors with LOINC.
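To make the encoding target concrete, here is a minimal, hypothetical sketch (in Python) of what a LOINC-coded, patient-generated glucose result could look like as an HL7 FHIR Observation. The specific code, values, and identifiers are illustrative assumptions, not an excerpt from the LIVD mapping itself.

```python
import json

# Illustrative FHIR R4 Observation for a patient-generated blood glucose
# result encoded with LOINC. All codes, values, and references here are
# example assumptions, not an official LIVD mapping.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "2339-0",  # Glucose [Mass/volume] in Blood (illustrative choice)
            "display": "Glucose [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/example"},       # hypothetical patient
    "effectiveDateTime": "2019-01-15T08:30:00Z",
    "valueQuantity": {
        "value": 104,
        "unit": "mg/dL",
        "system": "http://unitsofmeasure.org",
        "code": "mg/dL",
    },
    "device": {"display": "Home glucose meter (patient-generated)"},
}

print(json.dumps(observation, indent=2))
```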
Concur on the information modeling challenges. One of the largest issues I've seen (also contributing to clinician burden, which is another issue) is that most major EHRs have "generic" names for laboratory testing, and clinicians rarely or never, depending on their build, see the actual laboratory's name for a test order or result. The builds and mappings are manual and unable to handle laboratory compendium updates dynamically. If EHRs are challenged by "routine" testing and workflows, how will they correctly accommodate PG lab data without leading to patient safety issues? Design needs to be optimized for clinical decision making and should not burden providers any more.
The reasons you cite are why the IHE PCD profiles were generated: to promote a tight and well-defined mechanism for medical device data communication.
There are enough standards, and well-defined, too. The real issue when it comes to medical devices is (still) the fact that (1) many medical devices do not conform to ANY standards in terms of data communication; and (2) some medical device manufacturers do not liberate or otherwise make for easy access to data from repositories they maintain as part of their medical device offerings.
The first fact means that medical device intermediaries are required to transform data from the proprietary communications media (logical, physical, semantic) to a "standard" - let's say the IHE PCD transactions. This translates into a higher cost burden to the health system and a more complicated "food chain" for realizing the data.
The second fact means that there are limitations in terms of frequency and type of data that can be collected from certain medical devices without paying additional fees.
I think that the distinctions we've discussed already--between "monitoring" data collected for use in research repositories and individual data used for direct patient care--need to be retained. Clinicians and patients together might opt to integrate a given monitoring technology into care, at which point an allied health professional (health coach) might assist with implementation rather than expecting the clinician to supervise or expecting the patient to be self-sufficient in learning to use the technology and incorporating it into their daily routine. In terms of expanding participation in large-scale data collection efforts by patients and clinicians, I'm not quite sure how to address that, except to say that automation will be key.
Aligned with our earlier discussions, the value of PGHD will likely be maximized if the insights gained from it are then factored into a "need to know/need to use" basis for each member of the healthcare team. Physicians might base some things that they do in a face-to-face visit on some of it, but we have been talking here a lot about health coaches/promotoras and others who could also leverage it to help patients and their families.
One important thing, however, is the role that busy clinicians can play in endorsing these kinds of efforts that other members of the team provide. Evidence from the fields of tobacco cessation and physical activity promotion indicates that a very brief "dose" of counseling on the part of an MD can make a real difference on uptake of improved behaviors.
Using AI to separate normal from abnormal values/data would be very helpful to minimize the burden on clinicians. For example, if we are able to develop an algorithm with very high sensitivity and specificity, such that abnormal values will not be missed while false positives are minimized, clinicians will only have to review the abnormal data. This approach would maximize the efficiency of the system.
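As a toy sketch of that triage idea (thresholds invented for illustration, not clinical rules), the logic could be as simple as auto-clearing only the values inside a deliberately wide "clearly normal" band, so that anything borderline or unknown still reaches a clinician:

```python
# Toy triage sketch: route only potentially abnormal readings to clinician
# review. Thresholds here are invented for illustration, not clinical rules.

NORMAL_BANDS = {
    "heart_rate_bpm": (50, 110),       # assumed "clearly normal" band
    "systolic_bp_mmHg": (95, 140),
    "glucose_mg_dl": (70, 180),
}

def needs_review(measure: str, value: float) -> bool:
    """Return True if the value falls outside the clearly-normal band
    (or the measure is unknown), erring on the side of review."""
    band = NORMAL_BANDS.get(measure)
    if band is None:
        return True                     # unknown measure: do not auto-clear
    low, high = band
    return not (low <= value <= high)

readings = [
    ("heart_rate_bpm", 72),
    ("glucose_mg_dl", 260),
    ("systolic_bp_mmHg", 182),
]
flagged = [(m, v) for m, v in readings if needs_review(m, v)]
print(flagged)   # only the out-of-band readings reach the clinician queue
```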
Patrick makes excellent points. We have, in fact, been focused on the development of web-based tools that facilitate the integration of primary care practices with behavioral health providers. This takes advantage of CMS's new Behavioral Health Integration (BHI) initiative, which (a) incorporates Health Risk Assessments done around Annual Wellness Visits, which could be completed on the Web before seeing the physician; (b) describes a process by which referrals to behavioral health practitioners are made; and (c) makes payments to the primary care practice for managing this process. In addition, the BHI model is likely to improve overall outcomes by addressing the psychological (emotional, behavioral, and cognitive) and SDH issues that interfere with patient engagement and activation.
A challenge remains, though, in effectively separating true signals from false signals in the patient who is not observed. This remains a challenge for both the inpatient and ambulatory spaces. Artifact is such a large driver, particularly with conscious patients. Even telemetry units experience large amounts of non-actionable alarms. I recall one recent example with over 1000 crisis alarms in a 30-day window on a unit having 18 beds. All but 5 of those were false.
We need to combine multiple sources of information, including observations, to help reduce false alarms; the sensitivity and specificity without them are simply not there. There are too few discriminators without more context.
Furthermore, in patients receiving supplemental oxygen via NC or PRB or CPAP and for whom respiration and etCO2 are monitored, the potential for artifact is enormous as even minor movements and adjustments in tubing can result in noise or signals that can mimic actual patient behavior but are not clinically actionable.
Hi Stephen, Your comment made me think about various approaches to supporting self-management for patients with chronic conditions--like the Stanford peer group approach and the Flinders model. I don't really have a question, but it seems that some of these approaches may be made more accessible through technology, particularly if the tools (like the Flinders scales) can be incorporated into a patient's EHR and monitored by various members of the health care team.
Here's a link to an article that reviews a range of self-management models, concluding that multiple approaches are complementary, should be guided by patient preference, and require the support of service structures such as the BHI you describe, I assume.
Agree the idea is a good one, but it is highly dependent in how EHRs and rules are built. I've seen EHRs with different test results (different methods, different specimens with different reference ranges and LOINCs, etc.) all listed as the same test result in the EHR even though they are described and coded (LOINC) from the performing laboratory in the messages sent to the EHR. This can be a patient safety issue too depending on how the results are utilized in the EHR.
That said, if results are set up separately in the first place, then they should have separate reference ranges, etc. Reference ranges are required under CLIA law to be sent by the performing laboratory in the US as part of the data elements required in lab results in the lab report of record.
The question becomes: if that is all occurring, what's happening in the EHR to cause the issues where AI is needed as suggested? Also, if patients have normal results but a change from their overall "norm" as an individual (an N of 1), how would the approach alert physicians to changes in lab results that may be indicative of a pending problem? (Delta checks, as we call them in the laboratory.) Examples may include a falling potassium, a rise in creatinine, or a drop in hemoglobin indicating a bleed, even though values have not yet reached highs, lows, or critical values.
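A minimal sketch of a delta check, with invented per-analyte limits, might look like this: compare the new result to the patient's own prior result and flag a change that exceeds the limit even when both values sit inside the population reference range.

```python
# Illustrative delta check: flag a clinically suspicious change from the
# patient's own prior result even when both values are "normal".
# Delta limits below are invented for the example, not laboratory policy.

DELTA_LIMITS = {
    "potassium_mmol_l": 0.8,
    "creatinine_mg_dl": 0.3,
    "hemoglobin_g_dl": 2.0,
}

def delta_check(analyte: str, previous: float, current: float) -> bool:
    """Return True when the change from the prior result exceeds the
    configured delta limit for this analyte."""
    limit = DELTA_LIMITS.get(analyte)
    if limit is None:
        return False                    # no delta rule configured
    return abs(current - previous) > limit

# A hemoglobin drop from 14.1 to 11.6 g/dL may still look "normal" in
# isolation, but the delta check surfaces it as a possible bleed.
print(delta_check("hemoglobin_g_dl", previous=14.1, current=11.6))  # True
```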
Concur, balance is needed to avoid provider alert fatigue.
Thanks for the link, Elissa. In addition to helping determine whether a patient is likely to benefit from behavioral health intervention (psychotherapy and health coaching), we want to assist the practitioners in (a) using PGHD to understand--as quickly and efficiently as possible--the core psychological and related factors that influence patients' ability and willingness to self-manage their health and (b) promoting patient engagement with their primary care providers, so the patient will take advantage of their knowledge and guidance for better self-care. I contend that the best way to accomplish this is via BHI.
We in informatics try to follow principles that were outlined by Cimino and others decades ago to help improve data quality and simultaneously enable synthesis by the computer rather than the clinician. Unfortunately, these principles rely on robust knowledge representation within electronic platforms - something that has to be planned a priori and is difficult after the fact. These principles apply to information from patients just as they apply to clinicians and all who need to interact with such data. The key is finding vocabulary that is structured and standardized and that everyone can understand. In our research, patients have been happy with the terminology we developed based on the Omaha System, a rigorous standard used in community care and other settings. See the example in the web-based, mobile-enhanced app MyStrengthsMyHealth.com (TM UMN). As you complete the assessment (choose whatever applies) and submit, you'll receive a synthesized report that organizes the information for you. Suppose you would like to add heart rates to such a system. You can see how that measure would fit with the Circulation concept and could be shown in a 'measures' column. We're working on developing such functionality. I offer this as an example of integrating patient-generated health data from an ontological perspective, because we have good tools based on decades of research that can help solve today's pressing questions.
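As a purely hypothetical illustration of that ontological framing (not the MyStrengthsMyHealth implementation itself), a device-generated measure could be attached to a standardized concept such as Circulation so the computer, rather than the clinician, organizes it for the synthesized report:

```python
# Hypothetical sketch of attaching a patient-generated measure to a
# standardized concept (e.g., the Omaha System's "Circulation"), so the
# report can be synthesized by concept rather than by device.
from dataclasses import dataclass, field

@dataclass
class ConceptRecord:
    concept: str                                     # standardized concept name
    strengths: list = field(default_factory=list)    # patient-reported strengths
    challenges: list = field(default_factory=list)   # patient-reported challenges
    measures: dict = field(default_factory=dict)     # device or self-measured values

record = ConceptRecord(concept="Circulation")
record.strengths.append("Walks 30 minutes most days")
record.measures["heart_rate_bpm"] = 68               # e.g., from a wearable

print(record)
```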
Thank you Andrea for your comment. I think the approach needs to be individualized depending on the variable we are studying. For some variables, delta change may be more informative than the absolute value. Obviously, this is an idea that needs to be developed and optimized through an iterative process.
What a helpful and, for me, exciting response! I am a communication scholar, so the idea of building an "information bridge" of sorts between patients and clinicians by starting from a shared language/taxonomy makes perfect sense. I have only the vaguest notion of what this looks like from the technology side, but I recognize the tremendous overlap between the strategy you are describing and the dialogic qualities of relationship-centered care in the practitioner-patient encounter.
It will also depend on the type of data used for clinical care. For example, would you expect patients to provide enough information for encoding pharmacy or laboratory data with their respective standard code systems? With patient self-reported herbs and OTC medications, does your system probe the patient for enough information to encode the data accordingly?
Agree that a standardized code system/ontology approach is best, applied at the point of origin of the data, as opposed to someone downstream trying to encode it with possibly insufficient information to do so.
Thanks for your enthusiasm, Eileen - I agree. It would be great if you could log in to our MSMH app (url mystrengthsmyhealth.com) and explore. We have found that our consumers (aka patients) want to be able to share this information with their practitioners, and we have also found that practitioners can understand the consumer facing language. This to me is the fundamental question - can we all speak the same language?
Great themes here, and I'm especially interested/involved in the language aspects from the clinical/patient equivalence side of the equation. Q6FSA has developed a universal faceted classification protocol that is technology agnostic, language neutral, and domain independent. What that means to me, as a layman, is that I can confidently use a term like 'elbow' and point to something physical that a radiologist might refer to as the 'olecranon process'. Because the approach is based on patterns of language - and what we want to do with them - it has the ability to translate inside and outside disciplines to everyone's benefit.
Another area that may benefit from some disambiguation is the reference to pain severity. This carries both a subjective and objective index, with different people reflecting personally on how they experience pain and its intensity relative to their prior experience and what they can tolerate.
It is extremely important to identify pain onset, provocation / alleviation, quality, radiation, severity, and duration as a clinical sign. Alleviation or provocation of pain relative to a prior assessment also indicates changes that are important to monitor. The assessment on a 1-10 scale for adults, or the faces scale for children, has been used effectively but is not a universal measure of pain across patients.
PGHD that would inform clinical decision would be most valuable and this approach may lead to a more personalized therapy. For example, adjusting antiarrhythmic medication dose by monitoring patients' smartphone-based ECG, or adjusting insulin dose by monitoring PG-glucose levels, or even adjusting diuretics by monitoring some ECG or other biomarkers.
I am giving a lecture to 1st-year medical students next week. What articles using activity trackers and health/wellness do you suggest I highlight?
Adam,
One of the gold standards for the potential of PGHD is some of the work that Propeller Health has done with the city of Louisville. This is a bit more in depth than activity trackers, but I think the way that they proceeded to setup this study is the gold standard for how digital therapeutics can shape both individual patient interventions and local public policy: healthaffairs.org/doi/abs/10.1...
Here's a journal article on PGHD and Cancer Care: ascopubs.org/doi/full/10.1200/...
Thank you, greatly appreciated
Adam - Here are a couple that you might use. (Just screen grabs of title, journal, etc.; the papers can easily be found via googling.) The Bravata et al one is over 10 years old, but has been cited almost 2000 times and is an excellent early review of the utility of simple pedometers. The more recent one is a small but representative study that we were involved in looking at the validity of Fitbits for measuring activity and sleep in adolescents. (We have a larger study in children that is being submitted as I write this. Same overall strong results.)
Super helpful, thank you!
You're welcome, Adam. It occurs to me that given your area of work you might be interested in this June, 2018 ASCO abstract:
meetinglibrary.asco.org/record...
... It presents some of the results of a project we are working on with colleagues at MD Anderson Cancer Center on using home-monitored weight, BP and PROs to help assess symptoms in cancer Rx.
Hi Adam, I don't have an article for you, but as an icebreaker for your lecture, you may want to ask your group (think-pair-share) what activity trackers, apps, and e-health platforms they use for themselves, and what they perceive to be the pros and cons of the technology. The mental activity may help them to consider both their perspective as future clinicians and the patient perspective as partners in PGHD.
Good idea; as part of the course we give them a Fitbit Charge 3.
Adam
Welcome, and thank you for joining IHMI's discussion, "Monitoring Together: Integrating Patient-Generated Health Data." Here we will be discussing the ways in which patient-generated health data can be incorporated into health care, and what challenges and barriers could be encountered as we move toward a new paradigm. If we could begin with the panel providing the group with some background and experience with patient-generated health data.
With the recent move to more integrated health data being gathered by companies like Nike, Apple, Withings et al. we are able to gather a more complete picture of an individual than ever before. The central issue is the algorithms used and the loss of accuracy for the gain of a more complete picture. The move to more ubiquitous technology is already here with routers collecting heart rate data on individuals that enter rooms. This technology will help physicians gather more data than ever and provide great depth of analytics to mine to better integrate preventive and treatment related practices.
We've been using wearables and data entered passively and actively via mobile phones for many years in our research. Data captured this way can provide unprecedented understanding of important health behaviors, patterns of sleep and mood, and - in the course of medical treatment - early understanding of symptoms that might compromise a full course of treatment. Importantly, there is growing acceptance of this on the part of both patients and their caregivers.
I've noticed a conundrum in the world of mobile health apps, big data, and clinical decision-making that, in my discipline of health communication, mirrors what is called the "knowledge gap hypothesis." In media theory, the knowledge gap hypothesis posits that those of higher socioeconomic status with access to more and higher-quality information tend to also be targeted with more and higher-quality information. Similarly, in the health data world, although there is high and growing market demand and adoption of health-related apps, they tend to be targeted at and consumed by those who are already relatively healthy, health-conscious, and well-resourced.
When we envision patient engagement with patient-generated health data through mobile or wearable devices, the greatest potential lies with engaging patients who need support in managing chronic conditions, many of whom are already stretched thin in terms of coordinating their own care and competing life stressors (many patients with chronic illness compare it to having a part-time or even full-time job). Passively collected data adds little burden to the patient, whereas data that depends on patients to actively monitor and record symptoms may present an intractable obstacle to participation, unless there is immediate and tangible benefit provided.
I would be interested to learn from others in this discussion whether there is a role for personalized patient coaching in assisting patients with the adoption of data collection technologies--similar to the diabetes educator/promotora. Also, I wonder how these technologies may be designed to meet immediate needs of patients as well as the clinical data needs of healthcare practitioners.
Excellent questions. Personalized coaching is possible if the patient is willing to share relevant information on their condition and social determinants. Some Digital Diabetes Prevention programs have done this. Also, when there is passive data collection, such as, through a smart phone or wearable, the only requirement of the patient is to put on the device or carry their phone. Ideally, the patient would also be encouraged to engage with their personal health dashboard so that they understand their own data and what it means.
Smartphone-based ECG rhythm monitoring is definitely a valuable tool for diagnosing arrhythmias and can certainly guide management. We have used the Kardia mobile ECG app both clinically for monitoring of arrhythmias (this approach is ideal for infrequent symptoms, which cannot be captured on a 14- or 30-day event monitor) and in research protocols to guide anticoagulation in patients with low AF burden and low risk for stroke. It has been very well accepted by patients. The main issue for patient-generated (ECG) data is the magnitude of this potentially limitless source of data. Developing strategies to appropriately manage this wealth of information will be very helpful in order for this model to be successfully adopted in clinical practice.
I agree with the vision behind the use of patient-generated health data. Having worked with integrated inpatient data for well over 20 years, written three books on the subject from practical experience, and worked in the field from an EMS perspective, I see the potential (I am also a Type II diabetic, so I perform my own at-home monitoring).
My main concern is with compliance & artifact generation associated with unvalidated data. There are ways to address these issues to some degree. But, outside of controlled settings it can be a challenge sifting the "wheat from the chaff" of signal versus noise. Secondarily, patient context and observations for the at-home patient are difficult to discern at time of measurements (perhaps a concept involving video is an appropriate adjunct?)
But, at least from what I have seen observationally of patients in the home, compliance and measurement integrity are the issue. I would like to see more discussed on that topic. Perhaps not appropriate for this forum (?), but it remains a real problem.
You raise some very important issues, Elissa. The needs of the users of these devices and apps need to be front and center in their design. We can't count on the manufacturers to do all of the work necessary to ensure that this happens. Moreover, because of the rapidly changing landscape in which these might be used, ongoing monitoring of patient/user experiences will be critical. One of our current projects is focusing on how to do this with low-income individuals in Eastern Kentucky, and we are as interested in how our methods are patient/community-centered as we are in how clinical outcomes can be improved.
As to your question about new roles for health coaches/promotoras I'm absolutely convinced that as evidence grows about the value of these new technologies we will also likely see that indeed, there is a new set of skills that we need our allied health providers to possess: similar to what we expect when we visit the Genius Bar at the Apple store!
It's nice to meet everyone. I'll be moderating our conversations around PGHD this week.
In my role leading the Data Integration business at Datica, I've worked with hundreds of vendors and healthcare organizations on integrating applications and devices with EHRs and clinician workflows. I've been working in the space for a long time, beginning with installing EHRs at Epic alongside MyChart (which could push PGHD into the EHR through questionnaires).
I hope to guide conversation around balancing patient desires for tracking data with clinician responsibilities around workflow, liability, and productivity of the data. Like the proverbial tree in the woods, PGHD is not useful unless it's specifically set up to be actionable by clinicians in the field.
Looking forward to discussing this topic with everyone this week!
Hi Kevin, so nice to learn about your work with low-income patients and supportive health technologies. Your patient and community-centered approach is essential to ensuring that technological advancements benefit those most in need. And, relatedly, that the associated patient data reflects the particular qualities/challenges/needs of this population.
I foresee the emergence of a range of new health professions designed to facilitate the complex work of care coordination, wellness coaching, and community engagement. I wholeheartedly support the current shift in focus towards population health and continuous-comprehensive rather than episodic-individual care; however, I also see physicians being asked to oversee and integrate more and more fields of expertise (data analysis, quality improvement, population health, health promotion, behavioral interventions) beyond their home discipline of medicine. Although some physicians are drawn to these related activities and expand their scope of practice into areas like public health because that is their passion, I worry about the full-time, working, primary care doctor who is being expected to have a more and more diverse set of skills, some of which might be more efficiently and effectively handled by a "non-physician" on the team.
Hi John, Although my field of expertise is in interpersonal clinical interaction, as a researcher I wholeheartedly agree with your concern regarding the integrity of data, particularly if it is going to be used as a foundation for clinical decision-making and models for disease management. Rather than a compliance framework, however, I tend to view patient behavior through a lens of engagement and activation. Just as there are many reasons for patients not "complying" with treatment plans, I'm sure there are many reasons for not engaging in useful ways with monitoring technology.
Perhaps it is useful to think about drawing distinctions between passively collected patient data that might be assumed to be more valid and reliable in terms of clinical use, and patient-entered data that requires more support in terms of enlisting them and ensuring consistent participation in recording data. I have a corollary question: as these technologies are developed to collect clinically-relevant data, is a distinction made between data that goes into a patient's EHR and data that will be included in a larger repository for research purposes? Or is most of this technology assumed to generate data that is used for both individual patient care and population-based research queries?
The challenge of compliance and measurement integrity is a large one. It's a challenge even within the walls of a hospital, where an endocrinologist might not trust or want to use the vitals measured outside their department due to perceived inaccuracies.
I thought that Epic's approach to this in theory was always relatively pragmatic where there were three tiers of data and ingestion patterns:
1) Approved devices: approved devices (typically prescribed from the EHR) and interfaces from other known gateways send data directly into the EHR, where it is filed alongside other similar vitals. We've directly inserted data from approved blood glucose monitors and smart asthma inhalers into EHRs this way.
2) Trusted, but lower-fidelity sources: these kick off a workflow in Epic where a clinician is asked to review the data before insertion. It takes time and clicks, but if the alternative is receiving bad data, it is perhaps worth the time. This is a good future use for ML/AI to check for bad outliers.
3) Not trusted/accounted for: the data don't go in the EHR, or are filed as some kind of note that can be reviewed and transcribed later. This is bad if you want to promote untrusted data to valid data forms.
This, of course, doesn't account for the "Dog wearing a fitbit" of lore (which is why aligned incentives are also important for PGHD utilization).
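A rough sketch of that three-tier routing logic, with the tier assignments, source names, and destinations invented purely for illustration, might look like the following:

```python
# Rough sketch of three-tier PGHD ingestion routing, loosely following the
# tiers described above. Tier assignments and field names are illustrative.
from enum import Enum, auto

class Tier(Enum):
    APPROVED = auto()       # file directly into the chart
    TRUSTED = auto()        # queue for clinician review before filing
    UNTRUSTED = auto()      # keep out of discrete data; store as a note

DEVICE_TIERS = {
    "prescribed_glucose_meter": Tier.APPROVED,
    "smart_inhaler": Tier.APPROVED,
    "consumer_fitness_tracker": Tier.TRUSTED,
}

def route(reading: dict) -> str:
    """Decide where a patient-generated reading goes based on its source."""
    tier = DEVICE_TIERS.get(reading.get("source"), Tier.UNTRUSTED)
    if tier is Tier.APPROVED:
        return "file_to_ehr"
    if tier is Tier.TRUSTED:
        return "clinician_review_queue"
    return "store_as_note"

print(route({"source": "smart_inhaler", "value": 1}))                 # file_to_ehr
print(route({"source": "consumer_fitness_tracker", "value": 9500}))   # clinician_review_queue
print(route({"source": "unknown_app", "value": 120}))                 # store_as_note
```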
What a helpful reply, Mark! Thanks so much.
Follow-up question, or two. How does a device get to the level of "Approved"? And can you envision any scenario where a wearable device could collect patient data outside a clinical environment and be "Approved"? Or is it only likely that these kinds of devices would be "Trusted" and still need verification before being added to the EHR for research purposes?
Elissa & Mark -
Validation of patient data is indeed necessary for trust, in my opinion. Validation may not be possible outside of the hospital as readily as one might expect inside the walls of the hospital. Patient engagement is also key, in my opinion, as motivated patients are more likely to take the time and make the effort to be diligent, particularly with instruction on use of monitoring equipment and techniques.
Reaching out to the patient is an important adjunct to ensuring “good data”. Some communities are rolling out special programs such as Community Paramedics which have as a goal visiting patients on a regular basis and assisting in essentials, such as monitoring, verifying patients are taking medications, answering questions, and lending an ear to the lonely and shut-in.
Many of the patients I see in the home have myriad issues that span far and wide beyond home monitoring, and compliance is the last thing on their minds. Targeted assistance and action is needed for such individuals.
But, problems are not limited to the home: assisted living and skilled nursing facilities have their share of these issues, too. Oftentimes the integrity of the information comes down to the individual nurse taking care of the patient and how skilled, motivated and caring they are. In such environments, good nursing can make all the difference in the world.
Yes, I like the categories that Mark outlines as well. But we need to be careful not to "over-medicalize" some of these data and the devices that capture them. Clearly those that are in the critical path of diagnosis and treatment of established diseases need to reach the top tier - for example, continuous glucose monitoring. But other devices, even though they may not be "clinical grade" or "research grade", that measure important behaviors "in the wild" might provide, overall, a much clearer picture of what is happening in the lives of our patients than current self-report measures.
I like the notion of new data approaches like machine learning and/or artificial intelligence helping with this. But I also think we are seeing consumer wearables getting better and better in terms of their validity and related constructs. If this continues, the standard might not be related to whether they produce "in-EMR" vs. "out of-EMR" data. Rather, can we find a way to ensure that the *meaning* of these data are accessible and useful to people, patients and busy clinicians?
Well-stated, John! As anyone who has experienced a severe illness or cared for a loved one who has one, the "last mile" for these devices can sometimes depend upon whether or not they support the human sensitivity and touch that is a hallmark of successful medicine and health enhancement. The good news here is that there is a whole field of computer science and engineering that is trying to advance these issues of human-computer interaction. While we are certainly in the early stages of this, those of us who can remember punch cards, floppy discs, waiting in line to use time-shared computers and the like know that we have made a lot of progress since then!
Most of this is individually up to organizations. You're seeing groups, like the AMA with the IHMI, trying to standardize this. We're still in the early stages on this industry-wise, though.
At least in the United States, the billing requirements for remote patient monitoring involve getting the patient's consent (and documenting it) before beginning treatment. So, even if you ingested historical data, the order of operations involves getting approval first.
Dr. Kevin -
I recall being alerted to an assisted living facility in which the staff reported a patient who was "hypoxic with shortness of breath, and an oxygen saturation of 70%".
Upon arrival, I observed the elderly resident sitting in a chair and alert. Upon inquiring as to how the resident was feeling, the response was "fine". Upon examination, I noted no shortness of breath, no indication of cyanosis. The patient did have extremely cold hands and slow capillary refill.
Upon warming the patient's hands with a blanket and checking oxygen saturation, > 94% was observed on room air.
This is but one example of many anecdotes, yet an important observation relative to machine measurements. I have been working with and using various physiologic monitoring, mechanical ventilation, anesthesia, glucometry, spirometry and other devices for decades now. But, I always verify measurements used in clinical treatment via observation, palpation, and make ample use of my trusty stethoscope.
Machinery lacks contextual understanding. This is an essential element when asserting whether measurements are valid or are simply artifact-laden noise. This is true regardless of which information model is selected or proffered.
Potential validation capabilities exist through the integration of passively collected objective data from devices with subjective information collected manually via questionnaires. Examples include measurements of stress, mood, pain, and behaviors whereby biometric correlates of one's emotions, physical symptoms, and actions could be used to verify or invalidate the reported data. Enabling these capabilities ought not be overlooked. ncbi.nlm.nih.gov/pmc/articles/...
projects.ict.usc.edu/nld/cs599...
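One minimal, purely illustrative way to sketch that cross-check is to correlate a self-reported series with a presumed biometric correlate; the values, field names, and the rule that a weak correlation should simply prompt a closer look are all assumptions for the example.

```python
# Illustrative cross-check of self-reported stress (1-10) against resting
# heart rate from a wearable. Values and the interpretation rule are invented.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Plain Pearson correlation, computed directly to avoid dependencies."""
    mx, my = mean(xs), mean(ys)
    cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (pstdev(xs) * pstdev(ys))

reported_stress = [3, 4, 8, 7, 2, 9, 5]          # daily questionnaire
resting_hr =      [62, 64, 78, 75, 60, 81, 68]   # daily wearable reading

r = pearson(reported_stress, resting_hr)
# A reasonably strong positive correlation lends some face validity to the
# self-report; a near-zero or negative value would prompt a closer look.
print(f"correlation between self-report and biometric correlate: {r:.2f}")
```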
Hi John,
This is likely not going to advance the conversation about PGHD too much, but your narrative about the nursing home patient reminded me of ongoing conversations in medical education about the over-reliance on tests and technology over physical exam skills. In this case, it also seems that just talking with the patient would have precluded the assumed need for an EMS visit.
I also see your example pointing to why teamwork will be essential in integrating more technology and sources of data into patient care. I am a firm believer in interprofessional health education and teamwork as a foundation for quality care, with the caveat that team communication must include engagement and shared responsibility. Unfortunately, a perennial danger of teams (particularly teams whose members do not communicate directly and regularly) is diffusion of responsibility. It becomes too easy to pass responsibility to other members of the team or other parts of the system. In this case, it seemed it was easier (and sadly far more expensive) to pass responsibility for investigating the patient's apparent hypoxia to emergency services.
If, as members of this forum have suggested, patient data technology coaches are to become effective members of a health care team, their participation on the team needs to be meaningful and respected. They cannot be treated as functionaries or, I believe, outsourced.
Hi Kevin, Your comment made me recall this popular press article about the use of avatars to care for the elderly at home. My students had somewhat mixed feelings about it, but they all agreed that 3D, interactive technology is a way forward that preserves something of the human touch as well as offering choice to folks who desire to stay in their homes.
wired.com/story/digital-puppy-...
I also came across this article in PubMed: ncbi.nlm.nih.gov/pmc/articles/...
Hello, Elissa -
Exactly. I was "911" in the aforementioned situation, so it was my responsibility to assess and determine treatment. The scenario described, unfortunately, is not an isolated instance.
This is what I meant in my first communique, in which I wrote:
"Oftentimes the integrity of the information comes down to the individual nurse taking care of the patient and how skilled, motivated and caring they are. In such environments, good nursing can make all the difference in the world."
It really depends on individual "heroism" to an alarming degree. Of course, there are processes and protocols in place that all are duty-bound to follow. Yet, the wiggle room inside of these protocols can permit one to follow the letter of the law and still claim compliance with it while doing the bare minimum for the patient. Then, of course, there are those true "heroes" who are real advocates for their patients.
I fear that my contribution to this thread may have caused us to "leave the tracks" on the subject discussion. But, I see interactions among many aspects of healthcare that are all inexorably linked together, even though on the surface they seem to be rather disparate.
Thanks for engaging in this discussion.
Joining a bit late to the thread. Have folks reviewed ONC's website with reports and resources on PGHD and the government's plans for the future? healthit.gov/topic/scientific-...
My interest is from a laboratory perspective, in PG laboratory data. There is a trust issue, as folks have noted. How do you know the patient hasn't dropped the home pregnancy test in the toilet, invalidating the results? How do you know the patient has followed the instructions from the manufacturer? Even "outside" lab results reported by patients may or may not be trusted by physicians. How does that impact their clinical decision making when they "trust" these results from outside their health system?
Another aspect is that PG lab results should not be commingled with a facility's contracted lab results, as they will likely have different sensitivities, specificities, reference ranges (even qualitative), methods, etc. They should be kept distinctly different to avoid any confusion for clinicians. How are folks handling this in your EHRs, LISs, and facilities?
Thanks for those who have shared about the options available in your EHRs, or policies within your facilities. Suspect there will be differences amongst clinicians, facilities, IT teams with regard to policy as well.
And thank *you* for engaging, John. I am also prone to seeing connections across aspects of healthcare that are typically considered as distinct and unrelated phenomena. I cannot help but see the inter-relatedness of changes within the system--with both positive potential and negative consequences. For example, I understand the need to assign more routine tasks to personnel with a lower level of education/certification in order to reduce costs; however, when we don't support, respect, and incorporate those members of the team in meaningful ways, we lose valuable engagement and can put patients at risk. As you point out, we shouldn't be relying on "heroism" for health personnel to be doing the right thing for patients; it should be an ingrained part of the culture in every healthcare organization--and recognized and valued as such.
This is why I agree with you that, as technology plays a bigger role in clinical care, and as we expect patients to generate and share more of their personal health data, the allied professionals who might assist in implementing and supporting those processes need to be real and valued members of a collective.
Well, my first reaction to this question is that we would treat these data as we do any other data we receive about our patients. Always with an eye on whether it fits with the other things we know about the patient. Errors and mismeasurement are going to happen and we always need to expect that. As I mentioned in an earlier post, the validity and reliability of many of these consumer-facing technologies is getting better and better, but as with all things, a healthy skepticism always comes in handy!
Importantly, the meaning from a lot of these data comes as much from trends over time as it does from any single- or short-term measurement. So this also helps with handling outliers.
Finally, I think one of our roles here is to help our patients understand these data - for better or worse. To the extent that we can promote and enhance health literacy and numeracy re: these data, we may be able to strengthen their abilities and self-efficacy with respect to managing their health and well being.
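On the trends-over-time point above, a toy sketch (values and window size invented) shows how a rolling median of home blood pressure readings lets the trend, rather than a single spurious value, carry the meaning:

```python
# Toy sketch: a rolling median of home systolic BP readings dampens a
# single spurious value so that the trend, not the outlier, drives meaning.
from statistics import median

def rolling_median(values, window=5):
    """Median over a trailing window; early points use what is available."""
    return [median(values[max(0, i - window + 1):i + 1])
            for i in range(len(values))]

systolic = [132, 135, 131, 210, 134, 136, 133]   # 210 is likely artifact
print(rolling_median(systolic))
# The smoothed series stays in the low 130s; the isolated 210 barely moves it.
```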
Good question. How would a clinician know whether results are aberrant or not? It depends on the type of PGHD collected. If the data are subjective, such as how a patient feels today, it depends on the patient, and the clinician would need to assess them much as in an in-person visit.
For quantitative results, are they digitally acquired, such as from a scale for weight or a glucose meter, or manually acquired and transcribed? For digital results, is there a way vendors can allow patients to upload data online, either directly or indirectly linked to the EHR? It would be similar to how point-of-care testing (POCT) results may be acquired from devices in some institutions.
However, if quantitative results are transcribed manually, what assurance is there that results are transcribed correctly? Perhaps a screenshot showing the results can be uploaded with the data as a quality check? For manually recorded results, such as the number of times X event happened in a time period, clinicians will need to trust patients, similar to in-person visits.
What about qualitative results like home pregnancy testing? Capturing the results from the device with a screen shot would allow a clinician to see what the patient sees and confirm interpretations are correct and "catch" any errors. However, it's still up to the patient to follow instructions and perform any testing correctly to provide quality results.
The sensitivity and specificity of patient performed testing may also vary from traditional in lab testing. This information needs to be available to the clinician as it may impact clinical decision making.
Curious as to which results physicians find least trustworthy and would currently confirm with additional testing. It may be that certain results are more prone to error, and the medical-legal liability/trust factor would be too great, necessitating confirmation by more trustworthy sources such as their own facility.
For example, how many clinicians trust patient-reported data such as home pregnancy testing and missed periods to identify pregnancy? Is it practice to always confirm patient-reported pregnancy? Why aren't patient-reported results trusted? We may need to classify PGHD by a risk assessment of how likely an incorrect result is to cause patient harm if a clinician acts upon it.
One of my biggest concerns when I am seeing a patient is the purpose of the data. Is it to screen or to diagnose? If it is to screen, similar to the home pregnancy test, then I will need confirmatory studies backed by highly reliable and validated methods. Yet that screening test is pretty darn good and should not be taken with a grain of salt.
Similarly, with patient generated data we need to be assured that when deployed in clinical workflow that patient completed surveys or data collected from patient devices in the public domain do meet some minimum criteria for validity on the population they are studying.
Lastly, who is at the heart of collecting the data? The physician in clinic capturing data using a validated system, or the patient at home with a medical device they have a personal interest in? I would argue both are good and both can yield meaningful results.
Spot on to both Kevin and Jason.
1) Test of consistency and rationality: do the data agree with what I know about the patient? For instance, if a pulse oximeter reads 70% but I see a patient who is alert, not cyanotic, not hyperventilating or presenting with agonal respiration, then I would bet my measurement is inaccurate. Message: corroborating signs and symptoms.
2) Test of continuity: what is the longer-term picture for this patient? Have I seen a continuous trend or is the observation or finding sudden onset? For example, have I seen signs of compensated shock evolving into hypoperfusion? Have I seen a sudden increase or decrease in a measurement that is correlated with an observed behavior? For example, is altered mental status coupled with hypoglycemia? Do I see an increase in heart rate and diaphoresis preceding altered mental status? Message: have I seen this coming or is it spontaneous and sudden, and are there signs to support that finding?
The list can go on...
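For illustration only, the two tests above could be combined into a simple plausibility check for a pulse-oximetry reading; every threshold and field here is an invented example, not a clinical rule.

```python
# Crude illustration of the consistency and continuity tests above for a
# pulse-oximetry reading. All thresholds and fields are invented examples.

def plausible_spo2(reading: int, alert: bool, cyanotic: bool, recent: list) -> bool:
    """Consistency: does a low SpO2 agree with observed signs?
    Continuity: did the value trend down, or appear out of nowhere?"""
    if reading >= 92:
        return True                              # not alarming to begin with
    consistent = cyanotic or not alert           # corroborating signs present?
    trended = bool(recent) and min(recent) < 90  # was a decline already seen?
    return consistent or trended                 # otherwise suspect artifact

# A 70% reading in an alert, non-cyanotic patient with a clean recent trend
# is flagged as probable artifact (e.g., cold hands, poor perfusion at the probe).
print(plausible_spo2(70, alert=True, cyanotic=False, recent=[96, 95, 97]))  # False
```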