It’s been 10 years since the passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act, which led to near-universal adoption and meaningful use of electronic health records (EHRs). While EHRs have clear benefits, there are still usability and safety challenges that need to be addressed. Join the conversation to learn more about this issue, share the usability challenges you’ve experienced, and discuss how we can work together to advance this technology to its potential. To learn more, visit https://ehrseewhatwemean.org/!
Engaging end users early in the software development life cycle is essential. Note that I say "end users" and not specifically physicians. Ordering providers - MD, DO, NP, PA - should be consulted during requirements gathering and throughout the iterative design process, including concept testing. But there are many other roles that are also stakeholders, whether the EHR serves ambulatory settings, hospitals, other settings, or all of them. That includes nurses, therapists, nursing assistants, medical assistants, front desk staff, schedulers, and billing staff.
I would not expect our teams to consult a physician to design a hospital nurse intake assessment solution, any more than I'd have a nurse advise on a pharmacy formulary management design without including a pharmacist.
Clearly, an EHR development process draws on a broad polyphony of user input. The EHR should be not only a repository of information but also an integrated matrix of algorithms that guides the subsequent therapeutic planning process, especially as it contributes to an overall care plan. To understand the initial diagnostic process, I offer a discussion of this process at the following URL:
nationalhealthusa.net/humanita...
The overriding problem, at least for Epic and Cerner, is the lack of any recognition of how an EHR connects in a positive way with the underlying decision rules used during the health care process. And finally, each edition of an EHR's evolution should be tested rigorously by a user group in terms of efficiency and effectiveness. The industry would do well to arrive at a consensus for standardizing user group testing processes.
How can vendors decrease alert fatigue?
Only high-level alerts should be displayed by default, unless a clinician chooses to have less severe alerts or notifications made available. The negative effect of seeing too many alerts is that truly significant alerts are ignored and not acted upon. This happens frequently with drug interactions, for example, with medication refills for drugs that patients have been taking in combination for an extended time. It can also happen when a short-term medication is added to chronic medications and there is a potential low-level interaction. Since the short-term medication will be discontinued soon, the true risk may be much lower than the system's alert suggests. Depending on the type of practice and the volume of services provided, individual physicians might want to be able to set the alerts at different degrees of notification. Truly high-risk alerts might include a hard stop, where the physician must confirm they are aware of the potential effect.
Alert fatigue is the desensitization that occurs when clinicians are subjected to a high volume of warnings in the clinical setting. They hear so many insignificant alerts - a patient brushing their teeth inadvertently triggers their cardiac monitor to sound the alarm for ventricular fibrillation. Ultimately the clinician is so desensitized that the next time they hear that warning, they walk instead of running - and now the patient really is in VF. Think of the cacophony of sounds in a hospital room: infusion pumps, vital sign monitors, ventilators.
In clinical documentation systems, it can be a stack of drug-drug, drug-allergy, drug-condition, dosage range violation, or other flagged warnings that can fire during the medication ordering process.
It makes no sense to me that a provider seeing an elderly patient has to see "use precaution for patients 65 years of age and older" for every single medication.
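The tiered approach described above - hard stops for truly high-risk alerts, a clinician-configurable threshold for everything else, and demotion of warnings about long-tolerated combinations - can be sketched in code. This is an illustrative sketch only: the `Severity` levels, the `long_standing_combo` flag, and the default threshold are assumptions for the example, not any vendor's actual alerting API.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    MINOR = 1
    MODERATE = 2
    SERIOUS = 3
    CRITICAL = 4

@dataclass
class DrugAlert:
    message: str
    severity: Severity
    long_standing_combo: bool = False  # patient has tolerated this combination for some time

def alerts_to_display(alerts, clinician_threshold=Severity.SERIOUS):
    """Split alerts into (interruptive, passive) lists.

    Critical alerts always interrupt (a hard stop the clinician must
    acknowledge). Alerts below the clinician's chosen threshold, or for
    long-standing medication combinations, are demoted to a passive list
    the clinician can review on demand.
    """
    interruptive, passive = [], []
    for alert in alerts:
        if alert.severity == Severity.CRITICAL:
            interruptive.append(alert)      # hard stop: must be acknowledged
        elif alert.severity >= clinician_threshold and not alert.long_standing_combo:
            interruptive.append(alert)      # meets the clinician's threshold
        else:
            passive.append(alert)           # available on demand, not intrusive
    return interruptive, passive
```

The key design choice is that demotion is reversible: nothing is suppressed outright, so a clinician who wants to see moderate and minor alerts can still lower the threshold.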
This is one of my all-time favorite topics probably because it's a fascinating but difficult problem to solve. In my role as a patient safety specialist at an EHR vendor, I've presented several education sessions on alert fatigue over the years, including case studies, overviews of how clinicians think about warnings, clinician adherence to warnings, and best practice design approaches.
For one session, I randomly selected a hospital patient who happened to be on 19 active medications, 16 of which had severe or serious drug alerts. If I clicked the view to show all alerts (including moderate and minor), the total was 46.
We can't solve this without mutually agreed-upon solutions. Vendors, providers and the owners of alerting (e.g. third party content vendors) need a voice at that table.
As a vendor, we aim to ensure that training and resources are readily available, along with ongoing support, so that facilities can return to us with any questions, comments, or concerns. I'd love to hear feedback on preferred methods of delivery for training/support/resources.
EMR solutions have become extremely complicated, enterprise-grade technologies that require world-class project management and technical knowledge and skills to implement successfully. (The team may be more important than the tools.) However, true success after implementation begins with the intertwined concepts of engagement and governance. All stakeholders must be at the table and have skin in the game, from RFP to ongoing enhancement and advancement. No investment in training, support, or enhancement after implementation will be effective if engagement and clear, representative governance (deciding and delivering) weren’t established well before implementation. From that foundation, make sure you haven’t sunk all your resources on go-live. Track and act on utilization data, train leaders and hold them accountable for the success of their clinics and departments, provide well trained at-elbow support/coaching, enable IT staff to make adjustments through a well-structured and efficient change management process in response to end-user needs. Piece o’ cake ;)
Only through real-world testing by multiple types of clinicians can vendors be sure that there are no (or minimal) unintended consequences which could negatively affect patient care. While the data collected seem to grow more and more complex, the time needed to document appropriately could actually limit the amount of useful documentation, due to time constraints or ease of data entry. While a vendor may want to "measure" the effect of an upgrade or altered process, sometimes these things may not lend themselves to strict measurements which can translate into usable data. Therefore, both measurable and subjective responses to upgrades should be considered, with the physicians/clinicians helping direct which priorities should be addressed sooner.
We can create beautifully intuitive and efficient EHR designs. We can collaborate with our client for a successful implementation. But we cannot know for certain that the software is used as designed unless we revisit it in the healthcare setting and assure end users are adhering to recommended workflows. These optimization visits, performed at intervals after implementation, can identify and correct misuse before it becomes a widespread pattern. It's also a great opportunity to find out that while the design tested well, the workflow wasn't adopted because it didn't align with real-world usage. This also applies to enhancements of existing software. Essentials during feature development include: collaboration with clinicians during requirements gathering, iterative testing with end users across roles and practice sizes, and careful monitoring of beta testing feedback. But after GA release, only monitoring of adoption and observation of usage will definitively validate that the design was as beautifully intuitive and efficient as it was intended to be.
Completely agree here! Would you also recommend assigning "super users" who can champion the technology at the facility? Would you imagine these people being leaders at the optimization visits? Do you believe these optimization visits must be on-site, or could they be conducted remotely with success?
Works as intended? Ensures patient safety?
Great question. Health system leadership sign the contract and bear responsibility for setting and managing expectations through policy and procedure development and adherence monitoring. But the single most important thing they should do to assure a smooth implementation is to take action before purchase by engaging and listening to what end users think about products in consideration. They're making a significant investment, but it will be painful and maybe unsuccessful if it simply doesn't work for the staff who live in and rely on the application every day.
In my experience, the smaller the health care system, the more connected leadership are with the actual work performed. I've met hospital chief officers - nursing and medical - who are the key decision makers in EHR purchasing, and who also manage schedules and provide direct patient care. Very different from talking with large-system chief officers, who are focused on administrative concerns. Success for the latter means they have to assure they're being advised by practicing staff, whether it's billing office coders, providers, nurses, or techs. And that they're listening to their feedback and concerns.
Equally important is to have a clear grasp on what implementation means: There will be a learning curve, and staff should be supported throughout. Extra staffing, reduced scheduling, and the development of superusers can ease implementation burden.
Is this happening in organizations now? Is the variety of approaches across physicians necessary for achieving outcomes? We should not require humans to adapt, but if there are 'most efficient/safest' workflows, should they be the focus of HIT?
Pen and paper pretty much worked the same way wherever you went. Digital systems on the other hand can have a myriad of ways of entering information and doing work as well as multiple ways to do the same thing in a single workflow; some intuitive, some not. Physicians frequently ask for the single best way to do something in the EHR, so there is some desire to standardize workflows for efficiency. Standardized workflows can also gently guide physicians to best practice if they make it easier to do the right thing than doing otherwise. Gaining consensus and agreement amongst clinicians can be difficult though especially when best practice, policy, or guidelines are part of that workflow rather than just pure efficiency gains. It all kind of goes back to the old line, “everybody wants change but no one wants to change”. I think a good question to ask also is how to standardize workflows without clinicians feeling loss of autonomy in their practice; that they are not just practicing cookie cutter medicine.
Standardizing to improve efficiency: I love the concept, but too much standardization can create rigid expectations that don't match user preferences and behaviors across individuals, specialties and roles.
I was involved in a meeting with about 20 physicians, PAs, and NPs from a multispecialty practice that included primary care, cardiology and orthopedic clinicians. We discussed a potential solution for medication reconciliation, and you can probably imagine how that went.
Everyone in the room agreed that validating the accuracy of the med list was important, but few wanted to take responsibility for documenting. One of the surgeons said "If they've stopped taking their blood pressure medication, I don't want to own that or spend a minute on it."
On the surface, if you're a vendor with no clinical experience, med rec design looks straightforward. We know that it isn't. That's why field research by UX professionals is needed to assure design success. Iterative testing across roles, practice sizes, and specialties is a significant investment, but it's the only way to come up with a mutually agreeable integration of technology and workflow.
Equally important is post-release monitoring to assure successful use of standardized solutions and processes. The design may appear intuitive, sailing through alpha and beta testing with a selected group of users, but falls flat when it comes to adoption rates or being used as designed post-release. Only then, when workflows have solidified, can the vendor and the healthcare system know that the technology is safely used or that deviance has normalized. Again, a significant but vital resource investment.
Identified misuse of the application requires attention and correction by strong leadership from the healthcare system. Widespread misuse means the vendor has to reassess the functionality to figure out where the disconnect is.
I think it would be helpful to describe what workflows are being discussed.
Do they include, for example, clinical pathway workflows in which diagnostic assessments, related recommended treatment procedures, and other care process are presented along a timeline, which includes associated data inputs and feedback?
What about workflows for annual physicals (or wellness visits), prescriptions, and referrals? Or for transition of care?
Clinical workflow is already standardized in many respects. Patients check in at the front desk; the MA (if available) takes vitals, documents allergies, and verifies medications; and the doctor then writes notes in a standardized SOAP format. Something similar happens in the inpatient arena, the OR, etc. These are all legacies of paper charts, variably implemented by various EHRs. I feel it is the software developers who need standardization; the formula for providers is already quite standardized. The electronic transformation of the paper legacy is often substandard, all across the board.
What exactly do you mean by standardization from clinicians?
Clinical workflow in a primary care clinic is a dance. Not a party... a dance. There may be agreed upon roles and moves, but the order, inclusion, timing and initiator can vary significantly from one visit to another. We would do well to focus on the guard rails rather than dictate an explicit path - for patients and providers. In those rare instances where strong evidence exists, the guard rails will be pretty close together. The rest of the time we need space to explore, and learn, and grow and be held accountable for our behaviors and their outcomes.
There are two sides to this question: vendor and client. The EMR cannot just be about function as determined by engineers. Form, function, workflow, experience, interaction, and content as defined by users are all critically important.

From a vendor perspective, rigorous human factors research and testing with real users is necessary before releasing new or updated solutions. I think many EMR vendors realize this now and are working proactively. What we haven’t been so good about is doing that same research, design, and testing iteratively. Testing a new solution or design early is important, but that same testing needs to be repeated at intervals after release (3, 6, 9 months, etc.). Questions need to be asked: Is it as usable as we thought it was? Is it still usable, or have new frustrations appeared as users become accustomed to the new workflow? Does the experience support new users and experienced users the same way, and should it? Could users adjust to a more complex or evolving experience as they shift from novice to expert over time? Is the experience consistent across the solution? These are all best-practice questions that vendors should be asking to constantly improve safety and usability.

From the client side, all relevant stakeholders need to be in the room when critical implementation questions are being asked. Pharmacy, physicians, nursing, and therapies should be part of that process equally and always represented. Governance is key. Questions to be asked include: Does this workflow have negative side effects downstream for someone else, and is that acceptable? Who actually is the proper or best person to be doing this part of the workflow? When should users be alerted meaningfully? What is policy? What is best practice? What is habit?

There are many decisions to be made when implementing the EMR, and regardless of the base usability of the software, poor or one-sided implementations can have significant impact on even the most usable solutions.
User testing is the key, as is putting a clinically sensible first-pass design in front of users to begin with. Have clinicians involved in the early stages - prioritization of enhancements and new functionality, requirements gathering, and initial design concept development based on current standards (e.g., the Partnership for HIT Patient Safety and SAFER Guides recommendations). That should be followed by a significant investment in iterative testing of those concepts by usability professionals engaging a wide range of end users - not just the biggest, loudest, or most strategically important clients.
I very much agree that user testing throughout design and development is required for this. We've found that we catch more potential patient safety issues during these activities than any other part of a user-centered design process. Something else I would add into the discussion is the use of measurable usability goals. Defined early based on an understanding of what we're trying to improve and what users need/expect and then during formative testing evaluating the product/module against those measures to find out if we're on track. These should be objective as well as subjective measures to paint a full picture of the experience. I'd also say that the broader that we can make the use case/scenarios in testing, the more likely we are to find issues. Instead of just testing how easy it is to order a specific medication, it should be done as part of a richer scenario that brings several modules/sections together to form a real task.
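The idea of measurable usability goals above can be made concrete with a small sketch. Everything here is hypothetical for illustration - the `UsabilityGoal` structure, the goal names, and the targets are not a standard instrument (though the System Usability Scale, SUS, is a real subjective measure): each goal pairs a metric with a target and a direction, and formative-test results are checked against them to see whether the design is on track.

```python
from dataclasses import dataclass

@dataclass
class UsabilityGoal:
    name: str
    target: float
    higher_is_better: bool  # direction of improvement for this metric

def evaluate(goals, observed):
    """Compare observed formative-test measures against defined goals.

    Returns {goal name: (observed value, target, met?)} so teams can see
    at a glance which objective and subjective goals the design meets.
    """
    results = {}
    for goal in goals:
        value = observed[goal.name]
        met = value >= goal.target if goal.higher_is_better else value <= goal.target
        results[goal.name] = (value, goal.target, met)
    return results

# Hypothetical goals mixing objective and subjective measures
goals = [
    UsabilityGoal("task_completion_rate", 0.95, True),       # objective
    UsabilityGoal("median_time_to_order_sec", 45.0, False),  # objective
    UsabilityGoal("sus_score", 80.0, True),                  # subjective (SUS)
]

# Hypothetical results from one round of formative testing
observed = {
    "task_completion_rate": 0.91,
    "median_time_to_order_sec": 52.0,
    "sus_score": 83.5,
}
```

In this made-up round, the subjective score meets its goal but both objective measures miss theirs - exactly the kind of "full picture" split the comment above argues for, since satisfaction alone would have painted a rosier picture than performance warrants.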
All users and roles can and should play a role in improving EHR usability and safety. It is incumbent upon the vendors to ensure that clinicians are involved in the design and testing of their solutions in concert with experts in Human factors and UI/X design. But it cannot stop there, “bench” research needs to be performed not only by vendors but also by the general medical community. Users should feel empowered to become involved in these efforts and to report safety and usability issues to their community as well as to the creators of the EHRs. Vendors need to be open and receptive to user feedback in an honest manner, to research concerns and complaints, and to address them in a timely fashion. Medical professional organizations, such as the AMA, ACOG, AAFP, etc should be integral partners with clinicians, vendors, and the regulatory community in working to together to provide a safe, efficient and pleasing experience.
I fully agree with Jeff's comments above. Clinicians - preferably experienced in patient safety - should be embedded in all phases of the software development life cycle, as should User Experience professionals. Vendors should have a specific process for the identification, verification, communication, mitigation and remediation of potential safety hazards, a process led by patient safety professionals. It should include client awareness of not only how to escalate safety hazard concerns, but also how to use the application safely.
I agree with previous comments, and I think most people would say that "everyone" should be involved, but I feel that to this point it's a lot of head nodding and less hard action. HIT fits the definition of a complex socio-technical system well, and is a great illustration of what happens when we optimize for some stakeholders: those who are not optimized for are beyond inconvenienced. They are in "meltdown" mode. I don't think we've fully explored what it would mean to not optimize for payment and regulatory stakeholders (to name two). The system has relied on clinicians to adapt and maintain safety and outcomes, and unfortunately it's taken reaching the saturation point for us to address how the system can be better designed. User-centered design, usability testing, frequent clinician input, and design standards are necessary but not sufficient if we don't address this complex system and how it's optimized for seemingly everyone but clinicians.
Absolutely true, Ross. Regulatory, accreditation, and state-specific requirements increasingly rely on the clinical documentation system to capture data, which means it's on the vendor to engineer solutions that minimally impact clinicians in the moment of care. As vendors we can come up with back-end mappings and other solutions that reduce clinician work to satisfy regulatory measures, but it requires significant resources to do that, and the results aren't necessarily satisfactory to end users. The hospital nurse and the ambulatory PCP aren't worried about their state's vaccination registry - they just want to easily document that the patient received a tetanus shot.
One clarification I'd request - how are you defining "exposing" usability issues? From a vendor point of view, we want to identify issues prior to making functionality generally available, and we rely on user experience/human factors engineers, usability specialists, and designers to uncover potential problems early, then use those findings to inform iterative design and testing with end users. The vendor's patient safety professionals should be avid consumers of user research, including helping the development teams understand the potential safety hazards related to usability. This also includes identifying, mitigating, communicating, and remediating production environment issues by influencing development teams to prioritize solutions for usability-related hazards in the production environment. There's plenty of opportunity for us - vendors and healthcare systems as partners in safety - to improve sharing of common usability issues across vendors and products in a way that avoids exposing design details.
Thank you Trisha. I was referring to the use of vendor specific screen images to highlight usability concerns that contribute to patient safety concerns. I would refer you to
ehrseewhatwemean.org
Shared responsibility principles could help but have been harder to operationalize. Would love to hear experiences of others and see what has changed since then. Here's what we wrote earlier:
"Additional points of leverage include addressing the lack of shared responsibility in the current EHR software license agreements that typically favor developers with respect to non-disclosure provisions and intellectual property (IP) protections, performance warranties, and indemnity and limitation of liability provisions. These clauses are widespread. Non-disclosure and IP provisions need not be broader than reasonably necessary to protect developer's IP interests. Health systems could negotiate stronger exceptions to strict IP provisions to ensure sharing of information such as screen-shots or voluntary reporting of EHR-related adverse events to balance the needs of both parties. Moreover, agreements must recognize that health systems and EHR developers play complementary, but not necessarily equal roles in ensuring safety. The agreements should follow basic principles of tort law which provide that each party is responsible for its own acts and omissions, rather than including indemnification and limitation of liability provisions which typically favor the developer."
Reference - sciencedirect.com/science/arti...
I totally agree, this flows down to the fact that often the patients can't share their information easily due to these (sometimes overreaching) IP protections.
The top usability issue is not knowing what usability issues are out there because of the obstacles to exposing and sharing them.
Scores of studies exist describing common EHR usability issues - a (super quick) PubMed search returned 163 articles in the past 5 years. The most common themes are data entry, display and availability, alerting, and EHR design and workflow mismatch. Here's an AMA article with a high-level overview from last fall: ama-assn.org/practice-manageme...
Step back and consider the ‘why’ - as a physician, can you find what you want and do you believe what you see? Usability begins (but doesn’t end) there.
I found this JAMA article from 2/4/19 that discusses how usability challenges in the last decade have had unintended consequences and how poor EHR usability contributes to errors that are associated with patient harm. Check out the recommendations made to solve these problems: jamanetwork.com/journals/jama/...
From a big picture view, the biggest usability issue is that too often clinicians feel like they are working for the HIT and not the other way around.
Many EHRs allow the "owning" institution to modify the design of their EHR, i.e., they can manipulate the source code. When this occurs (e.g., Epic), it allows the owning institution to redesign the EHR to meet the needs of its users. In this situation, the EHR's "usability" would vastly improve if the person who has the ultimate authority over all EHR design/implementation decisions:
1) is required to use the EHR on a daily basis
2) has deep knowledge about clinical medicine
3) has some experience in computer programming
4) has an in-depth understanding about information technology and
5) is committed to evidence based medicine while acknowledging its limitations.
All great comments! I think most could be put under the category User Experience (UX) issues/dissatisfaction.
I'd like to suggest one more: As healthcare moves increasingly toward value ("performance" for the "cost"), EHRs are increasingly expected to do more than they were designed to do. That is, usability is being associated with usefulness. Usefulness in this regard is being defined as the ability to help clinicians and their patients make valid and reliable decisions and take actions that consistently result in great outcomes at minimal expense. And these exceptional outcomes must be "whole-person" focused, i.e., not only must there be improved physical wellness, but also better emotional/psychological well-being and quality of life, including dealing with SDOH factors.
It seems to me that this requires a great deal more than today's EHRs can possibly do...Maybe never do!
I think it's best to limit the expectations of EHRs and the scope of their capabilities to what are their core competencies. Then add other types of HIT tools that can easily and securely access EHR data, combine those data with data from other sources collected by other tools, and build next-generation composite HIT systems accordingly. That's where things should be headed, imo.
Several things need to change:
1. Global perspective: A number of EHRs were designed with good intentions, though with the meaningful use revolution most were sucked into merely satisfying the requirements. The primary goal of an EHR is a friendly interface that coordinates with other systems (labs, pharmacy, etc.) and replaces the need for a paper chart. Unfortunately, the majority of EHRs are failing at this primary goal.
2. EHR usability: Compare EHRs with software like Microsoft Office or Gmail. Those products are well designed and not clunky. The typical EHR is slow, clunky, unnecessarily restrictive, and takes the position that it has to police its users. It is a bit like when Microsoft's Internet browser was so clunky and took forever to load that people abandoned it for newer alternatives. For EHRs, unfortunately, this is not so easy. Once in a contract, a hospital or practice is stuck; one cannot simply decide one day that it's time to move on to a new EHR. This monopoly needs to be broken.
3. Innovation: EHR vendors seem so bogged down with catching up that little to no real innovation is coming out. Despite billions of dollars of healthcare money being funneled to them, the improvements over the paper chart can be counted on one's fingers.
4. Lean down: EHRs are progressively getting more expensive, even for smaller groups and smaller hospital systems. An increase in a hospital's EHR budget means fewer resources somewhere else, or a compromise on patient safety somewhere. The industry as a whole has to commit to cutting down the costs.
5. Patients: EHRs are silos. One of my patients mentioned that she goes to six specialists: three use different versions of an EHR, she goes to LabCorp, which uses its own, the other three docs use other EHR providers, and the radiology hospital uses yet another. Each time she goes to a healthcare provider she fills out lengthy forms. EHR financial motivations, combined with a bloated HIPAA scare, put patients at the very bottom.
@stephenbeller - Usefulness and UX/usability are often confounded and it is extremely helpful to acknowledge the differences and apply the appropriate mitigations. We expect a lot and will continue to expect more from our HIT investments. Tooling is essential, but will always play second fiddle to clinical workflow (though that can often benefit from focused optimization).
Jason, I understand that disambiguating the usefulness and usability terms constrains focus on EHR workflow competence. In fact, I believe that all clinical HIT should aim to avoid workflow disruption and, better yet, streamline workflows to lessen the burden on providers.
My previous comment was meant to make the case that increasing usability without bolstering usefulness is far from an optimal path forward. We should focus on both; I don't think we disagree.
That's a reason why I like the term "EHR system," in which an EHR is augmented with other tools that enhance its clinical usability and, ideally, do so in a workflow-positive manner.
While many definitions of usability include some form of "usefulness" in them (e.g., usable, useful, and satisfying), I have found that more often than not, we have to talk about usability and usefulness as separate to get people to understand and focus. I'm sure my team isn't the first to term it like this but we've gotten a lot of traction talking about usability (and all the design aspects that go into that to reduce cognitive burden) and "helpfulness" (how the product assists the user). We start any design effort with asking ourselves "what would make this MOST/MORE helpful for the user?" and then innovate around that. EHRs need a lot of work on the design aspects, but we won't make significant strides until our end users are describing their EHRs as "helpful".
Forty features that would improve EHR usability…
Following up on Ross's comment, I ask in what ways might EHRs be helpful to clinicians? One way is to comply with gov't regulations to avoid penalties and reap incentive payments. Another way is to streamline payment submissions and maximize remuneration. And to track certain process compliance and outcomes for additional financial incentives. Also, to help write orders, make referrals and share data, remember patient details, receive alerts, and compile data for population health analysis. These are all important things that can help improve care.
My question is: What else is needed to increase care value? For example, would it be helpful for EHRs to foster patient engagement; understand how a patient's mind and body interact to affect wellness and well-being, and what can be done to address biopsychosocial problems; enable the secure and fluid flow of data between disparate systems and apps; and support knowledge feedback loops in which outcomes data are submitted to research registries, where they are used to develop guidelines that EHRs present, and then track and submit the results of guideline implementation for the ongoing evaluation and evolution of each guideline's efficacy for CQI?
Am I missing or overstating anything? Are any of these things too much to ask for EHRs now or in the future?
In response to Dr. Beller's comments "What else is needed to increase care value? "
I would caution that while anything can be tried, we need objective data that any added feature has real value before we begin "mandating," through meaningful use or other mandates, that the new feature be utilized by everybody.
The biggest failure in health information technology has been the assumption that more technology is better. Technologists and physicians have forgotten that we should "prove" our newest tools work before foisting them on the entire healthcare profession. In fact most new medicines/technologies either turn out to be of no utility, marginal utility or dangerous.
Scientifically rigorous, placebo-controlled, randomized clinical trials need to be conducted before we "mandate" a new technology is "required" for the entire healthcare industry.
I whole-heartedly agree with Dr. Zwerling's comment about not being overly prescriptive with mandates and the need for rigorous science.
As I wrote in 2006, the kind of HIT we need should support "practitioner-researcher collaborative networks that facilitate the development and evolution of evidence-based guidelines by, for example, including patient data and lessons learned from everyday practice, and by having clinicians offer ideas for research."
In 2011, I wrote that HIT should “Promote a strong and productive link between scientific research and clinical practice ('bench to bedside') by (a) delivering de-identified patient data from everyday clinical practice to central repositories where researchers use them in developing evolving evidence-based personalized guidelines and (b) propagating those guidelines—using clinical decision support functionality—without fostering 'cookbook' medicine or stifling innovation.”
We had previously applied these concepts in the late ‘90s as we developed a clinical pathways app for Merck UK. It enabled physicians to write orders that differed (were "at variance") from the preferred guidelines, along with their reasons for doing so. This information focused on evolving the guidelines based on clinical knowledge from the field while avoiding rigid mandates.
A few years later we built a knowledge management system for the oil and gas industry that enabled engineers to submit lessons learned from the field (along with supporting data). This information was systematically reviewed and discussed by SMEs, who determined whether it should be made into new best practices or should modify existing ones.
Dr. Agrawal's reference is another indicator of usefulness issues, not usability issues. An error-prone EHR can provide excellent workflow capabilities, but if the data are unreliable, those data are useless (even dangerous). Thus, it's unwise to be content with addressing usability without simultaneously addressing usefulness; they are "two sides of the same coin!"