
Red Door Telehealth Triage: Building Evidence-Based Protocols for High-Acuity Home Encounters

This comprehensive guide, prepared by the editorial team for this publication, provides an advanced framework for building evidence-based telehealth triage protocols specifically designed for high-acuity home encounters. It moves beyond generic telemedicine advice to address the unique challenges of assessing patients with potentially unstable conditions in home settings, including sepsis, stroke, acute cardiac events, and respiratory distress. The article defines core concepts like triage acuity, sensitivity, and specificity; compares three protocol design approaches; and walks through a step-by-step process for developing, piloting, and refining a protocol.

Introduction: The High-Stakes Reality of Red Door Triage

When a patient connects to a telehealth platform from home, reporting chest pain, shortness of breath, or altered mental status, the triage clinician faces a pressure-packed decision with limited information. Unlike an emergency department (ED) where vital signs, lab results, and physical exam findings are immediately available, the home environment introduces noise, variable technology, and the absence of hands-on assessment. Many teams find that their existing triage protocols, often borrowed from call centers or adapted from in-person settings, break down under these conditions. The core pain point is clear: how do you build a protocol that reliably identifies the truly sick patient who needs immediate transport, while avoiding unnecessary ED visits for the worried well? This guide directly addresses that challenge, offering a framework for constructing evidence-based triage protocols that are both sensitive enough to catch high-acuity conditions and specific enough to avoid overwhelming emergency services. We will dissect the mechanisms behind effective triage, compare design approaches, and walk through a practical development process.

This overview reflects widely shared professional practices as of May 2026. This article is for general informational purposes only and does not constitute medical or legal advice. Readers should consult qualified professionals for personal decisions.

Core Concepts: Understanding Triage Acuity, Sensitivity, and Specificity

Why Protocols Fail Without a Foundational Framework

Many telehealth programs start by borrowing a triage algorithm from a reputable source—perhaps a published decision tree for chest pain or shortness of breath—and assume it will work in their context. This often leads to poor performance because the algorithm was designed for a different population, setting, or technology infrastructure. A protocol that works well in a busy urban ED, where a blood draw is minutes away, may be dangerously inappropriate for a remote home visit where the nearest ambulance is forty minutes out. The foundational concept here is that triage is not merely a list of questions; it is a risk stratification system that must be calibrated to the specific operational reality of the telehealth service.

Defining Triage Acuity Levels for Telehealth

In traditional emergency triage, systems like the Emergency Severity Index (ESI) use a combination of patient acuity and resource needs to assign a level from 1 (most urgent) to 5 (least urgent). For telehealth, we need a modified system that accounts for the inability to physically examine the patient. A common adaptation is a three-tier system: High Acuity (immediate transport to ED, suspected life-threatening condition), Intermediate Acuity (urgent same-day in-person evaluation, but not immediately life-threatening), and Low Acuity (can be managed with home care or scheduled appointment). The challenge is defining the decision criteria that move a patient from one tier to another, and this is where sensitivity and specificity come into play.
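The three-tier system above can be sketched as a small data model. The tier names and the two decision inputs below are illustrative, not a standard:

```python
from enum import Enum

class Acuity(Enum):
    """Three-tier acuity scale adapted for telehealth (illustrative)."""
    HIGH = 1          # immediate ED transport, suspected life-threatening condition
    INTERMEDIATE = 2  # urgent same-day in-person evaluation
    LOW = 3           # home care or scheduled appointment

def assign_tier(red_flags_present: bool, urgent_but_stable: bool) -> Acuity:
    """Map the two decision criteria onto the three tiers.

    Any red flag wins; 'urgent but stable' captures the intermediate band.
    """
    if red_flags_present:
        return Acuity.HIGH
    if urgent_but_stable:
        return Acuity.INTERMEDIATE
    return Acuity.LOW
```

The hard part, as the text notes, is deciding what sets those two booleans, which is where sensitivity and specificity come in.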

Sensitivity vs. Specificity: The Trade-Off That Defines Your Protocol

Sensitivity is the ability of your protocol to correctly identify patients who truly have a high-acuity condition. A highly sensitive protocol catches almost all true positives but may generate many false alarms (low specificity). Specificity is the ability to correctly rule out the condition in patients who do not have it. A highly specific protocol avoids false alarms but may miss some true cases (low sensitivity). In high-acuity telehealth triage, the general consensus among practitioners is to prioritize sensitivity for conditions like stroke, sepsis, and acute myocardial infarction—it is better to send a patient with indigestion to the ED than to miss a heart attack at home. However, this trade-off must be managed carefully to avoid overwhelming the system with false positives. For example, one team I read about adjusted their chest pain protocol to require two out of three criteria (pressure-like pain, radiation to arm/jaw, associated diaphoresis) before triggering a high-acuity response, which improved specificity without sacrificing sensitivity in their population.
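The hypothetical two-out-of-three chest pain rule described above is easy to express as a predicate. The criterion names are ours, for illustration:

```python
def chest_pain_high_acuity(pressure_like: bool,
                           radiates_arm_or_jaw: bool,
                           diaphoresis: bool) -> bool:
    """Return True when at least two of the three criteria are met.

    Requiring 2-of-3 instead of any-1-of-3 trades a little sensitivity
    for better specificity, as described in the text.
    """
    return sum([pressure_like, radiates_arm_or_jaw, diaphoresis]) >= 2
```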

The Role of Red Flags and Modifiers

An evidence-based protocol is built on a set of 'red flags'—specific symptoms or findings that, when present, trigger an immediate high-acuity response. These red flags should be derived from established clinical guidelines (e.g., the Cincinnati Prehospital Stroke Scale for stroke, or the qSOFA criteria for sepsis) but adapted for the telehealth context. For instance, the inability to speak or follow commands is a clear red flag for stroke. However, the protocol must also include modifiers—factors that can change the interpretation of a red flag. A patient with a history of anxiety who reports chest tightness but has normal speech and no associated symptoms may be at lower risk than a similar patient with diabetes and hypertension. Modifiers include age (>65), comorbidities (diabetes, hypertension, immunosuppression), and vital sign trends (if available via remote monitoring).

In practice, we recommend that teams create a 'red flag matrix' for each high-acuity condition they plan to triage. This matrix lists the red flags, the evidence supporting them, and the modifiers that should raise or lower suspicion. This structured approach reduces variability between clinicians and provides a clear rationale for triage decisions.
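A red flag matrix can live in a plain data structure that clinicians and auditors can both read. The entries below are condensed from examples in this article (Cincinnati scale for stroke, qSOFA for sepsis); a real matrix would cite the team's own evidence review:

```python
# Illustrative red flag matrix; condition entries, flags, and modifiers
# are condensed from the examples in this article, not a complete set.
RED_FLAG_MATRIX = {
    "stroke": {
        "red_flags": ["facial droop", "arm drift", "abnormal speech"],
        "evidence": "Cincinnati Prehospital Stroke Scale",
        "raise_suspicion": ["age > 65", "hypertension", "diabetes"],
    },
    "sepsis": {
        "red_flags": ["altered mental status",
                      "respiratory rate >= 22",
                      "systolic BP <= 100"],
        "evidence": "qSOFA",
        "raise_suspicion": ["age > 65", "immunosuppression",
                            "recent urinary tract infection"],
    },
}

def flags_for(condition: str) -> list[str]:
    """Look up the red flags for a condition in the matrix."""
    return RED_FLAG_MATRIX[condition]["red_flags"]
```

Keeping the evidence citation next to each flag makes the rationale for every triage decision auditable, which is the point of the matrix.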

Comparing Protocol Design Approaches: Three Major Methods

Why One Size Does Not Fit All

Teams often ask which protocol design is 'best,' but the answer depends on the team's resources, patient population, and technology infrastructure. There is no universal winner. Below, we compare three distinct approaches: rigid decision trees, flexible clinical pathways, and AI-assisted risk stratification. Each has strengths and weaknesses, and the choice should be driven by the specific constraints of the telehealth program. The following table provides a high-level comparison before we dive into each method.

Approach | Strengths | Weaknesses | Best For
Rigid Decision Trees | High consistency, easy to train, simple to audit | Low flexibility, may miss atypical presentations, high false-positive rate | New programs, low-resource settings, high-volume call centers
Flexible Clinical Pathways | Better sensitivity, allows clinical judgment, adaptable | Higher variability, requires experienced clinicians, harder to audit | Mature programs, experienced triage staff, complex patient populations
AI-Assisted Risk Stratification | Can analyze complex data patterns, potential for high accuracy | Requires large datasets, risk of bias, 'black box' decisions, regulatory hurdles | Large programs with data infrastructure, research settings, future-forward teams

Rigid Decision Trees: The 'If-Then' Approach

Rigid decision trees are the most straightforward method. The clinician follows a predetermined sequence of questions, and each answer leads to a specific branch. For example, a chest pain tree might start with 'Is the pain crushing or pressure-like?' If yes, proceed to 'Does it radiate to the arm or jaw?' If yes, the protocol assigns high acuity. This approach ensures that every clinician, regardless of experience, follows the same steps. It is easy to train new staff and simple to audit for compliance. However, the rigidity is also its weakness. Patients often present with atypical symptoms—a diabetic patient with a silent heart attack may have only nausea and fatigue, which the tree might miss. One composite scenario I recall involved a 68-year-old woman with diabetes who reported only 'indigestion' and fatigue. A rigid tree focused on chest pain did not flag her as high risk, and she was advised to take an antacid. She was later found to have a myocardial infarction. This case illustrates the danger of over-reliance on rigid algorithms.
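A minimal sketch of the 'if-then' chest pain branch described above; the function name and tier labels are illustrative. Note how a presentation of nausea and fatigue alone never reaches the high-acuity branch, which is exactly the weakness the composite case illustrates:

```python
def chest_pain_tree(crushing_or_pressure: bool,
                    radiates_arm_or_jaw: bool) -> str:
    """Rigid decision tree: every clinician follows the same branches.

    Atypical presentations with no chest pain at all never enter
    this tree, so they can only be caught by a separate pathway.
    """
    if crushing_or_pressure:
        if radiates_arm_or_jaw:
            return "high"
        return "intermediate"
    return "low"
```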

Flexible Clinical Pathways: Guided Judgment

Flexible clinical pathways provide a structured framework but allow the clinician to adjust the triage level based on their overall assessment. For instance, the pathway might list red flags and modifiers, but the final decision includes a 'clinical override' option. This approach acknowledges that experienced clinicians can integrate subtle cues—the patient's tone of voice, breathing pattern visible on video, or a vague sense that something is wrong. One team I studied implemented a flexible pathway for respiratory distress that included a 'gestalt' override, allowing the nurse to upgrade the triage level if they felt the patient's distress was disproportionate to the objective findings. This improved sensitivity but increased variability between clinicians. The key to making this work is clear documentation of the rationale for the override, enabling later review and refinement of the protocol. Flexible pathways are best suited for programs with experienced triage staff who can be trusted to use their judgment appropriately.

AI-Assisted Risk Stratification: The Data-Driven Frontier

AI-assisted approaches use machine learning models trained on historical data to predict the probability of a high-acuity outcome. The model might consider dozens of variables—symptoms, vital signs, demographics, comorbidities, even the time of day or day of week—and output a risk score. The clinician then uses this score as a decision support tool. In theory, this can achieve higher accuracy than either rigid trees or flexible pathways by identifying subtle patterns humans might miss. In practice, several challenges exist. First, the model requires a large, high-quality dataset of past encounters with known outcomes. Second, the model may be biased if the training data reflects disparities in care. Third, clinicians often distrust a 'black box' that gives a score without clear reasoning. One program I read about piloted an AI model for sepsis triage but found that clinicians frequently overrode the model's recommendations, partly because they could not understand why the model gave a particular score. The current consensus is that AI-assisted triage is promising but not yet ready for widespread deployment without careful validation and clinician education.
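To make the 'risk score' idea concrete, here is a toy logistic score with hand-picked weights. Everything here — features, weights, bias — is invented for illustration; a real model would be fit and validated on the program's own outcome data:

```python
import math

# Hypothetical weights for illustration only. A production model would
# learn these from historical encounters with known outcomes.
WEIGHTS = {"resp_rate_high": 1.2, "confusion": 1.5, "age_over_65": 0.8}
BIAS = -3.0

def risk_score(features: dict[str, bool]) -> float:
    """Logistic risk score in (0, 1).

    A single opaque number like this is precisely the 'black box'
    problem: the score carries no visible clinical reasoning.
    """
    z = BIAS + sum(w for name, w in WEIGHTS.items() if features.get(name))
    return 1.0 / (1.0 + math.exp(-z))
```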

Choosing the right approach requires an honest assessment of your team's capabilities and constraints. A new program with a small staff might start with a rigid tree and gradually introduce flexibility as experience grows. A mature program with data infrastructure might explore AI-assisted tools as a supplement to clinical judgment.

Step-by-Step Guide to Building Your Protocol

Phase 1: Define Your Scope and Population

Before writing a single question, you must define what you are triaging. Start by listing the high-acuity conditions most relevant to your patient population. Common candidates include acute coronary syndrome, stroke, sepsis, severe respiratory distress (e.g., asthma exacerbation, COPD exacerbation, pulmonary embolism), and anaphylaxis. For each condition, identify the clinical guidelines that should inform your red flags. For stroke, use the Cincinnati Prehospital Stroke Scale or FAST criteria. For sepsis, use qSOFA (altered mental status, respiratory rate ≥22 breaths per minute, systolic blood pressure ≤100 mmHg), adapted for what can be assessed remotely.

Phase 2: Draft the Algorithm

For each condition, draft the decision algorithm: the red flags that trigger a high-acuity response and the modifiers that raise or lower suspicion, such as age >65 or a diabetes/hypertension history. Test the algorithm against a set of hypothetical cases to see if it produces reasonable decisions.
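A qSOFA-style sepsis screen adapted for telehealth might look like this; the thresholds are the standard qSOFA values, and two or more criteria is the conventional positive cutoff:

```python
def sepsis_screen_positive(altered_mental_status: bool,
                           resp_rate: int,
                           systolic_bp: int) -> bool:
    """qSOFA-derived screen: two or more criteria suggest high acuity.

    At home, respiratory rate and blood pressure may be unavailable;
    a missing measurement simply cannot contribute a criterion, so the
    screen errs toward relying on observable mental status.
    """
    criteria = [
        altered_mental_status,
        resp_rate >= 22,       # breaths per minute
        systolic_bp <= 100,    # mmHg
    ]
    return sum(criteria) >= 2
```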

Phase 3: Pilot and Collect Data

Run a pilot phase with a small group of clinicians, ideally over 2-4 weeks, and collect data on every triage decision. For each encounter, document the protocol's recommended acuity level, the clinician's final decision, and the patient's outcome (e.g., sent to ED, admitted, discharged home). This data will reveal where the protocol is performing poorly. Common issues include a high false-positive rate for chest pain (sending too many patients with anxiety to the ED) or missed cases of sepsis in elderly patients who present with 'weakness' rather than fever. One composite pilot I recall showed that the initial protocol missed three cases of sepsis in patients over 80 who presented only with confusion and a history of urinary tract infection. The team revised the algorithm to include a separate sepsis red flag for elderly patients with new confusion and no other clear cause.

Phase 4: Refine and Iterate

Based on pilot data, refine the algorithm. This may involve adjusting thresholds, adding new red flags, or introducing modifiers. For example, if the false-positive rate for chest pain is too high, you might add a modifier that accounts for the patient's age and comorbidities. A 30-year-old with no risk factors and typical anxiety symptoms might be downgraded to intermediate acuity, while a 65-year-old with diabetes and hypertension remains high. Each change should be documented with a rationale. After refinement, run a second pilot to validate the changes. This iterative process may take several cycles before the protocol achieves acceptable performance. Teams often find that the first version of their protocol is too conservative (high sensitivity, low specificity) and that gradual refinement brings it into balance.

Phase 5: Train, Deploy, and Monitor

Once the protocol is finalized, train all clinicians on its use. Training should include the rationale behind each red flag, common pitfalls, and the override process. Emphasize that the protocol is a decision support tool, not a replacement for clinical judgment. After deployment, continue to monitor performance by reviewing a sample of triage decisions weekly. Look for patterns of overrides (both upgrading and downgrading) and investigate the reasons. If clinicians frequently upgrade patients from intermediate to high acuity, the protocol may be too conservative and needs adjustment. If they frequently downgrade, the protocol may be too aggressive. Ongoing monitoring ensures the protocol remains effective as the patient population or clinical evidence evolves.
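The weekly override review can be summarized with a simple tally; a persistent skew toward upgrades or downgrades is the recalibration signal described above. The level names are illustrative:

```python
from collections import Counter

def override_summary(decisions):
    """Tally upgrades, downgrades, and agreements in a review sample.

    decisions: iterable of (protocol_level, final_level) pairs using the
    labels 'low', 'intermediate', 'high'.
    """
    order = {"low": 0, "intermediate": 1, "high": 2}
    counts = Counter()
    for protocol, final in decisions:
        if order[final] > order[protocol]:
            counts["upgrade"] += 1      # protocol may be too aggressive downward
        elif order[final] < order[protocol]:
            counts["downgrade"] += 1    # protocol may be too conservative
        else:
            counts["agree"] += 1
    return counts
```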

This step-by-step process is not quick—it can take several months to develop and refine a robust protocol—but it is far more reliable than adopting an off-the-shelf algorithm without validation.

Real-World Scenarios: Learning from Composite Cases

Scenario 1: The Missed Sepsis in a Homebound Elderly Patient

A telehealth program serving homebound elderly patients received a call from a 78-year-old woman reporting 'feeling weak and tired' for two days. She had a history of diabetes and hypertension. The initial protocol, which focused on fever and respiratory symptoms as red flags for sepsis, did not trigger a high-acuity response. The clinician advised rest and hydration. Twelve hours later, the patient's daughter found her confused and hypotensive. She was transported to the ED and diagnosed with urosepsis. The program's retrospective review revealed that the protocol lacked a specific red flag for sepsis in elderly patients presenting with nonspecific symptoms like weakness or confusion. The team revised the protocol to include a separate sepsis pathway for patients over 70 with new-onset confusion, weakness, or a history of recent urinary tract infection, regardless of fever. This case highlights the importance of tailoring red flags to the specific population being served. It also illustrates the danger of relying on classic presentations (fever, chills) that may be absent in older adults.

Scenario 2: The False-Positive Chest Pain and System Overload

Another program, serving a younger, generally healthy population, implemented a chest pain protocol with a low threshold for high-acuity assignment. The protocol triggered a high-acuity response for any patient reporting chest tightness with radiation to the arm, regardless of age or risk factors. Within two months, the program saw a 40% increase in ED transports, most of which were unnecessary. Patients were often found to have anxiety, musculoskeletal pain, or gastroesophageal reflux. The ED staff complained about the high volume of low-acuity cases, and some patients became frustrated with unnecessary trips. The program's leadership realized that the protocol lacked modifiers to differentiate between a 25-year-old with anxiety and a 60-year-old with cardiac risk factors. They revised the protocol to include age and comorbidity modifiers, requiring at least two risk factors (age >55, diabetes, hypertension, smoking) in addition to typical pain characteristics to trigger high acuity. This change reduced false positives by 30% while maintaining sensitivity for true cardiac events. This scenario demonstrates that overly conservative protocols can harm the system's efficiency and patient trust.

Both scenarios underscore a critical lesson: protocol development must be iterative and data-driven. Assumptions about red flags should be tested against real-world outcomes, and the protocol should evolve as patterns emerge.

Common Questions and FAQ

How do we handle patients who refuse to go to the ED after a high-acuity triage recommendation?

This is one of the most challenging situations in telehealth triage. The protocol should include a clear escalation pathway. First, the clinician should explain the rationale for the recommendation in plain language and address any concerns (e.g., fear of COVID-19 exposure, lack of transportation). If the patient still refuses, the clinician should document the discussion, the patient's decision, and any attempts to involve a family member or caregiver. In some programs, a physician or senior clinician may speak directly with the patient. If the patient remains adamant, the protocol may require a follow-up call within a specific timeframe (e.g., 2 hours) to reassess. Some programs have a policy to contact emergency medical services (EMS) for a welfare check if the patient's condition appears critical and they are refusing care. Legal counsel should be involved in drafting this policy to ensure compliance with local regulations. The key is to balance respect for patient autonomy with the ethical obligation to prevent harm.

What is the role of vital signs in telehealth triage?

Vital signs are extremely valuable but often unavailable in standard video visits. If the patient has a home blood pressure cuff, pulse oximeter, or thermometer, the clinician should request measurements. For patients without devices, some programs mail out basic monitoring kits to high-risk patients. However, the absence of vital signs should not delay a high-acuity triage decision if red flags are present. For example, a patient with acute onset of slurred speech and facial droop should be sent to the ED immediately, regardless of blood pressure. Vital signs become more important for conditions like sepsis or respiratory distress, where a low oxygen saturation or high respiratory rate can confirm the suspicion. When vital signs are available, they should be integrated into the protocol as modifiers or additional red flags. For instance, an oxygen saturation below 90% on room air should trigger a high-acuity response for any patient with respiratory symptoms.

How do we ensure protocol compliance among busy clinicians?

Compliance is a perennial challenge. Clinicians under time pressure may skip steps or rely on their intuition. To improve compliance, integrate the protocol into the electronic health record (EHR) or telemedicine platform as a structured form that guides the clinician through the questions. Make the red flags visible and the recommended acuity level clear at the end. Provide regular feedback to clinicians on their compliance rates and the outcomes of their triage decisions. Some programs use a peer review process where a sample of low-acuity decisions is reviewed by a senior clinician to identify potential misses. Finally, involve clinicians in the protocol development process—when they feel ownership, they are more likely to follow it. Compliance is not just about enforcing rules; it is about building a culture of safety and continuous improvement.

What about legal liability for triage decisions?

Legal liability is a significant concern, and this article cannot provide legal advice. In general, the standard of care for telehealth triage is evolving. Using an evidence-based protocol that is regularly updated and followed consistently can help demonstrate that the program acted reasonably. Documentation is critical: every triage decision, including the protocol's recommendation and the clinician's rationale for any override, should be recorded. Programs should also have a clear policy for when to activate EMS and for handling refusals. Many programs carry malpractice insurance specifically for telehealth services. Consulting with a healthcare attorney who specializes in telehealth is strongly recommended. The key takeaway is that a well-designed, consistently applied protocol is your best defense, but it is not a substitute for legal counsel.

Conclusion: Building a Safer Red Door

Building evidence-based triage protocols for high-acuity home encounters is not a one-time project but an ongoing commitment to safety and quality. The core lessons from this guide are clear: understand the trade-off between sensitivity and specificity, choose a protocol design that fits your team's capabilities, and iterate based on real-world data. Rigid decision trees offer consistency but may miss atypical presentations; flexible pathways allow clinical judgment but require experienced staff; AI-assisted tools are promising but not yet mature. The step-by-step development process—defining scope, drafting algorithms, piloting, refining, and monitoring—provides a structured path to a protocol that works for your specific population. The composite scenarios remind us that even well-intentioned protocols can fail if they are not tailored to the patient population and continuously improved. By treating your triage protocol as a living document, you can build a 'red door' that is both a gatekeeper and a safety net, ensuring that the sickest patients receive timely care while avoiding unnecessary system strain.

About the Author

This article was prepared by the editorial team for this publication. We focus on practical explanations and update articles when major practices change.

Last reviewed: May 2026
