Health quality advocates have championed the idea of a Learning Health System for decades as the theoretical key to translating clinical research into clinical practice guidelines for use at the point of care.
A Learning Health System is a closed loop that melds current evidence and clinical experience to promote continuous improvement. It supports clinical decision making, assesses outcomes, and delivers new guidelines into clinical practice. As the Agency for Healthcare Research and Quality puts it, “As a result [of the Learning Health System], patients get higher quality, safer, more efficient care, and health care delivery organizations become better places to work.”
This new clinical decision support model maintains traditional clinical rigor but also takes advantage of data science methods, artificial intelligence, and analytics tools to accelerate the clinical knowledge lifecycle. The Learning Health System’s efficiency shortens guideline development and integration into clinical practice from decades—“17 years” is conversational shorthand among clinicians to describe how long it takes to translate research to practice—to weeks.
The prospect of meshing expert knowledge with emerging data to create clinical guidelines was, until recently, not much more than an interesting theoretical conversation-starter for policy makers, care quality advocates, and academics. The ability to aggregate and process the volume of complex data just wasn’t there.
That has changed at last. Current technology has finally caught up to the idea of the Learning Health System. The theoretical recently became technically possible, thanks to the rise of cloud computing coupled with federal investment in electronic health records (EHRs) and the concurrent rise of the Fast Healthcare Interoperability Resources (FHIR) standard.
Then came a catalyzing event, the ultimate use case to test the concept of a Learning Health System: a novel, fast-spreading coronavirus that erupted into an international pandemic within a span of weeks. No one had seen it before, and yet every clinician soon had to confront the challenge of how to effectively treat this emerging infectious disease.
In March 2020, an influx of emergency department patients, social distancing mandates, and pandemic response measures severed the usual health care processes and procedures. In no time, COVID-19 patients overwhelmed many US hospitals for the remainder of the year. By February 2021, more than 10 million US and UK patients awaited rescheduled surgeries or experienced delays in cancer treatment as COVID-19 disruptions continued.
There wasn’t the luxury of 17 years to discover and translate research to determine when to admit emergency department patients into the hospital. There also wasn’t a playbook to treat COVID-19, just some early working knowledge from Asian and European outbreaks that began a few weeks ahead of the United States, and prior understanding of related viruses that caused SARS and MERS.
We, and other representatives from the American College of Emergency Physicians (ACEP), EvidenceCare, Apervita, the COVID-19 Healthcare Coalition, the University of Minnesota, and many other organizations joined the COVID-19 Digital Guideline Working Group, an offshoot of the COVID-19 Healthcare Coalition (led by Mayo and MITRE). Together, we developed a clinical guideline for COVID-19 severity classification within weeks, instead of the years guidelines typically take to move from evidence to adoption.
It took a cast of dozens of clinicians, clinical informaticists, and technologists, some from competing entities that set aside business rivalries to work together, to build this COVID-19 clinical practice guideline. A severity classification for COVID-19 would give clinicians a rubric for determining more efficiently which patients should be admitted to the hospital: an aid for decision making when time was short and when, in some areas, hospitals overrun with COVID-19 patients needed every square inch and every spare minute to focus on those patients. A guideline approved by top emergency physicians meant that each facility treating COVID-19 patients did not have to start from scratch, learning how to triage for this emerging infection through empirical observation; instead, the data and knowledge from tens of thousands of COVID-19 cases were brought to bear in a guideline that clinicians could trust.
Because there were few studies to work from, the workgroup performed rapid reviews of low-level evidence to create, and agree upon, an initial framework. This was based on a “first best guess” from Italian and other European outbreak data and resulting studies on how clinicians quantified COVID-19 severity for triage purposes.
From this initial framework, the workgroup drafted digital representations of clinical guidance. Both the guideline and its digital representation sharpened as more evidence emerged. Health care organizations added to the “first best guess” guidelines as real-world patient data poured in, combined with front-line clinical knowledge from treating physicians. Tasks such as clinical concept identification, value set specification, test case creation, and key inference and decision logic were defined in parallel with the narrative—rather than the traditional sequential approach. Once assembled, a national group of ACEP clinical experts reviewed and approved these best practices.
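Value set specification, one of the tasks named above, can be sketched in a few lines. The idea is that a "value set" maps the many EHR-specific codes for one clinical concept onto a single concept the guideline logic can reason about, so the same computable guideline runs against disparate EHRs. This is a minimal illustration; the codes below are assumed LOINC identifiers for a D-dimer result, not an official value set.

```python
# Hypothetical sketch: resolve a guideline concept (D-dimer) against
# observations from different EHRs via a shared value set.
# The LOINC codes here are illustrative assumptions, not an official value set.

D_DIMER_VALUE_SET = {"48065-7", "48066-5", "71427-9"}  # assumed D-dimer codes

def extract_concept(observations, value_set):
    """Return the first observation whose code belongs to the value set."""
    for obs in observations:
        if obs["code"] in value_set:
            return obs
    return None

# Two EHRs report the same concept under different codes:
ehr_a = [{"code": "48066-5", "value": 0.7, "unit": "mg/L FEU"}]
ehr_b = [{"code": "71427-9", "value": 0.9, "unit": "mg/L FEU"}]

for feed in (ehr_a, ehr_b):
    match = extract_concept(feed, D_DIMER_VALUE_SET)
    print(match["value"] if match else "no D-dimer found")
```

Defining these mappings alongside the narrative guideline, rather than after it, is what allows the digital representation to sharpen in step with the evidence.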
This new process—agile knowledge engineering—weaves emerging evidence and data with clinical knowledge to form faithful, computable expressions of best-practice recommendations.
As ACEP experts agreed on what data points were relevant to measure COVID-19 severity, technology vendors joined in to help assemble the recommendations in digital form. They engineered shareable, interoperable, computable practice guidelines (CPGs) deployed in a manner that could be rapidly integrated into clinician workflow.
The COVID-19 severity classification guideline and corresponding digital care guidance dashboard were developed concurrently. Data and inferences (for example, risk scores, severity scores) were implemented in the clinical workflow of providers. Recommendations and patient-specific suggestions were made available to provide cognitive support for clinicians.
The resulting care guidance dashboard shows recommended treatment pathways based on 120 data points for an individual patient’s demographics, risk factors, and test results. On the technology side, it required pairing the EHRs’ data and clinical workflow languages with the clinical concepts contained in the guideline to assure coherence between guideline logic and EHR data—across disparate EHRs.
The Veterans Affairs (VA) Health System is adopting the seven-step COVID-19 severity classification guideline, which will eventually be used in 171 VA medical centers. The Fairview Health System is adopting it for use across 11 hospitals. A free version of the ACEP COVID-19 severity classification guideline is available on the web. This site presents the text version with recommendations for COVID-19 care, including anticoagulation. It is not in computable form, however, so it is not integrated into a user’s EHR workflow. A second guideline, which helps physicians determine whether anticoagulation is indicated as part of diagnosis and treatment for SARS-CoV-2-positive patients, is nearing completion.
For the anticoagulation guideline, hematology, critical care, and hospital-based physician experts worked out a tiered approach to treating patients based on early evidence, much like the severity classification guideline. That served to inform a wireframe to build a guideline based on patient factors such as age, COVID-19 severity, D-Dimer lab values, body mass index, and renal function to determine if medication therapy was indicated; which combination of medications to use; at what dose and frequency; and other determinants.
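The tiered approach described above can be pictured as a small decision function. The following sketch is purely illustrative: the tier names, thresholds, and cutoffs are invented for this example and are not the ACEP guideline's actual clinical criteria.

```python
# Hypothetical sketch of tiered anticoagulation decision logic.
# All thresholds and tier names are invented for illustration only;
# they are NOT clinical guidance.

def anticoagulation_tier(age, severity, d_dimer, bmi, egfr):
    """Return an illustrative treatment tier from patient factors.

    severity: guideline severity class, e.g. "mild" through "critical"
    d_dimer:  lab value in mg/L FEU
    egfr:     renal function in mL/min/1.73 m^2
    """
    if egfr < 30:                      # impaired renal function: adjusted pathway
        return "tier-renal-adjusted"
    if severity == "critical" or d_dimer >= 3.0:
        return "tier-3-therapeutic"
    if severity == "severe" or (d_dimer >= 1.0 and (age >= 65 or bmi >= 35)):
        return "tier-2-intermediate"
    return "tier-1-prophylactic"

# Example patient: moderate severity, elevated D-dimer, age over 65.
print(anticoagulation_tier(age=70, severity="moderate", d_dimer=1.4, bmi=29, egfr=80))
```

Encoding the logic this way, as explicit branches over named patient factors, is what makes the guideline testable against case libraries and updatable as thresholds change with new evidence.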
The 12-hospital M Health Fairview system in Minnesota currently is using a version of this anticoagulation guideline.
Unlike a static PDF flowchart guideline that gets published and then may possibly be revisited years later, the COVID-19 severity tool was updated four times in its first nine months. For these computable guidelines, researchers collect data from patients treated under the initial recommendations and update those recommendations as more evidence becomes known. Guideline developers such as the ACEP continue to update the guidelines to make them more usable, too. The version released on August 15, 2021, added Smart Phrases, blocks of text that clinicians copy and paste into hospital EHR systems to automatically create portions of discharge summaries. The risk-severity matrix used to inform disposition from the emergency department for COVID-19 patients was further updated based on more recent studies and thoroughly vetted evidence.
These interoperable, computable guidelines represent a new wave of team-based, technology-enabled clinical guideline development. Necessity is the mother of invention; it took a fast-evolving public health disaster on the scale of COVID-19 to inspire disparate groups of health care professionals to band together and forge the processes that would operationalize the Learning Health System idea into a workgroup that would convert distributed clinical knowledge into self-improving clinical practice guidelines.
The agile knowledge engineering techniques from which the COVID-19 guidelines were built borrow a few concepts from agile software development. Historically, software engineering proceeded lockstep through a series of activities: requirements assessment, workflow assessment, design, quality assurance, user-testing, and so forth. This approach reflected “waterfall” software development, in which one phase must be completed before the next begins. Today, that method has been replaced by Agile: With cloud software and application delivery through websites and app stores, most smartphone, tablet, laptop, and desktop computer users are accustomed to installing light, continual updates to the many apps they use—monthly, or even weekly.
These are tiny, incremental updates compared to the long diskette updates of the 1990s. Updates developed with this agile approach add a new feature or two, or patch security to protect our data as new risks emerge.
Clinical guidelines should work in the same way, refreshed often as medicine discovers new understandings of a disease. Treatment, diagnosis, and management evidence should modify or update care management recommendations as it becomes known, not years later. But the health care system still operates on a fragmented, siloed, and diverse set of information technologies that is just starting to catch up to the more modern information technology architecture that consumers enjoy with their many personal devices, apps, and services.
The COVID-19 guidelines prove that it is possible to create point-of-care decision support based on the latest clinical best-practice guidance in an agile fashion, which initiates a true learning feedback loop. They also prove that guidelines can now exist in more sophisticated, machine-interpretable versions that can be distributed quickly to multiple care sites simultaneously, even sites using disparate EHRs. They can be delivered as an embedded iframe in the EHR, as data enrichments and web services feeding clinical information systems such as EHRs, or as a full Substitutable Medical Apps, Reusable Technologies (SMART) on Fast Healthcare Interoperability Resources (FHIR) app. “SMART on FHIR” apps use an industry-standard app framework (SMART) and data transport over a standard API (FHIR).
Instead of the old guidance flowcharts and PDFs, CPGs can act like little applications, interacting with the clinician to render problem- or procedure-specific recommendations and guidance for the individual patient in front of them based on individual patient data (vitals, lab results, and so forth). Care guidance dashboards and apps in EHRs can reduce physician cognitive burden by readily summarizing all relevant patient data and recommending appropriate actions for a clinical scenario. These CPGs concurrently capture patient- and practice-specific data on guideline use for quality assessment and reporting as well as reporting for clinical and outcomes research (registries) to inform the next rapid iteration of best-practice guidance, thus enabling a truly Learning Health System.
CPGs are built within the expert trust framework physicians rely upon, providing simple cognitive support and convenient opportunities to take action on suggested best practices and next steps.
When institutional knowledge from a trusted source such as the ACEP leads the process and the technology plays a secondary role supporting the experts, it provides a foundation that’s stronger than simply asking physicians to trust technology. Instead, they’re contributing their practice experience to the learning system and trusting their expert peers, who, using the agile knowledge engineering approach, can rapidly create best-in-class guidelines that use their own patients’ data and are delivered in their clinical workflows.
As physicians learn how to treat patients for a condition, technology is there to record it. In other words, when human experts validate clinical knowledge as it emerges, together with technology, they advance the Learning Health System cycle, as illustrated in exhibit 1.
Source: Kaley Simon and Laura Passero
In general, the pool of evidence from which physicians can draw grows every day. Yet physicians abide by known best-practice guidelines only a little more than half the time. Agile knowledge engineering could help improve the quality of care by getting current evidence off dated PDFs and flowcharts and onto screens and devices via the EHR and elsewhere, where and when physicians need it most.
Our COVID-19 computable guideline process also demonstrates the potential for creating adjacent guidelines, perhaps for treating new COVID-19 strains as information comes in. Vaccine triage—with booster shots for new variants in the pipeline and a rapid-review process already in place for them—and related care are also good candidates for new guidelines. In this case, guidance creation becomes simpler as reusable parts of CPGs are created and reused in updates or new CPGs.
The National Institutes of Health recently announced up to $1.5 billion in research funding to investigate the long-term recovery some COVID-19 patients experience. Part of it will fund the use of real-world data to develop clinical decision support technologies for effective treatment for COVID-19 “long-haulers” using this same agile knowledge engineering approach.
The COVID-19 rapid-cycle CPG process shows how other common conditions such as heart disease, stroke, and diabetes and their therapies (for example, procedures and pharmaceuticals) can also potentially be addressed with the Learning Health System. New CPGs could keep current evidence in front of physicians at the point of care, where they need it, in detailed yet at-a-glance, decision-driven dashboards.
CPGs are one step toward accomplishing the Quadruple Aim of improved patient experience, improved quality of care, lower cost, and improved clinician experience.
The approach used to create the COVID-19 CPGs can apply to many of the knowledge management and implementation challenges that have bedeviled the standardization of clinical processes and the reduction of unwarranted variation in practice. Such digital guidelines can also support real-world, pragmatic clinical trials in a standardized way across care settings and sites, and they can help to accelerate and streamline the delivery of clinical data to public health agencies and the provision of community-based (and social determinants of health-based) clinical guidance.
Because they are digital in their nature, these guidelines, paired with the data in the clinical record, can show researchers how variance in clinical practice for a particular guideline affects patient outcomes.
The Learning Health System—fully digitized and benefiting from data liquidity and knowledge interoperability—can get US health care to the Quadruple Aim’s goals faster than the traditional routes of guideline development and dissemination.