In the hour of crisis, speed saves lives.
But haste without wisdom takes them.
O you who seek to guide the healers,
have you witnessed what happens when evidence moves too slowly?
And have you seen what happens when it moves too fast?
January 2020
A novel coronavirus emerges in Wuhan.
Patients are dying. There is no treatment.
The world demands answers.
How long does a systematic review take?
[Timeline comparison: a traditional systematic review takes 12-24 months; the pandemic demanded answers in weeks.]
Have you not seen how the world responded?
Speed created a flood. Who would separate truth from noise?
The Question That Haunts Us
In March 2020, France approved hydroxychloroquine for COVID-19 based on one small, flawed study. Millions took it before larger trials showed no benefit—and possible cardiac harm. Meanwhile, the UK's RECOVERY trial tested dexamethasone rapidly but rigorously. Within 3 months, they proved it reduced deaths by one-third in severe cases.
Both countries faced the same urgency. One acted on weak evidence, one generated strong evidence quickly.
Speed without rigor cost lives. Rapid rigor saved them.
You are a health minister in March 2020. A small study claims a common drug cures COVID-19. What do you do?
This Course Will Teach You
By the end, you will be able to:
- Conduct a rapid review in 40 hours, not 40 days
- Know when "good enough" is actually good enough
- Identify fraud and bias in fast-moving evidence
- Use decision trees for methodological trade-offs
- Explain how living systematic reviews work
- Draw lessons from the hydroxychloroquine and ivermectin disasters
Behold the Five Principles of Rapid Evidence:
1. Speed without rigor is recklessness.
2. Rigor without speed is abandonment.
3. A flawed study in a meta-analysis poisons all.
4. Transparency reveals; concealment deceives.
5. Living evidence serves the living.
The Surgisphere Catastrophe
In May 2020, The Lancet published a study of 96,000 patients from 671 hospitals across six continents. It showed hydroxychloroquine increased death rates. The WHO halted trials worldwide.
But the data came from Surgisphere—a company with 11 employees, including a science fiction author and an adult content model. The database did not exist.
Within 13 days, the paper was retracted. But trials had already been stopped. Time was lost. Trust was shattered.
How did two of medicine's most prestigious journals—
The Lancet and NEJM—publish fabricated data?
Speed pressure broke the gatekeepers.
Decision Tree: Should You Trust This Study?
Proceed with standard appraisal
Check for outcome switching
Request data before including
Module 0 Quiz
What should you advise?
Consider the Three Witnesses of 2020:
Nine months from hope to verdict. How many decisions were made in between?
The Chloroquine Poisoning
Within days of the hydroxychloroquine announcement, Nigerians began self-medicating with chloroquine—a related but more toxic drug.
Three deaths. Multiple hospitalizations for overdose. The Lagos state poison center was overwhelmed.
In Arizona, a man died after drinking fish tank cleaner containing chloroquine phosphate. His wife survived but was hospitalized.
Premature evidence claims killed people who never had COVID-19.
The Story of Thalidomide:
In 1957, thalidomide was marketed as a safe sedative for pregnant women. The manufacturer shared incomplete safety data—animal studies were limited, human trials brief. Doctors, trusting the company's reputation, prescribed it widely.
By 1961, over 10,000 children were born with severe limb malformations. The "map" of safety had a fatal gap no one disclosed.
Incomplete evidence, presented as complete, caused tragedy that reshaped drug regulation forever.
You are a drug regulator in 1957. A pharmaceutical company submits thalidomide for approval with limited safety data. What do you do?
Decision Tree: When to Share Preliminary Findings
Wait for confirmation
"Preliminary—may change"
Complete the analysis
In the hour of crisis, speed saves lives.
Module 1: The Trade-Off
Speed without rigor is recklessness.
Rigor without speed is abandonment.
Have you considered what we sacrifice for speed?
A systematic review is thorough because it is slow.
A rapid review is fast because it makes trade-offs.
What are these trade-offs?
Systematic vs. Rapid: The Differences
| Element | Systematic Review | Rapid Review |
|---|---|---|
| Databases searched | 5-10+ | 1-3 |
| Grey literature | Extensive | Limited or none |
| Screening | Dual independent | Single + verification |
| Quality assessment | Full tool | Abbreviated |
| Timeline | 12-24 months | 1-6 weeks |
The Cochrane Response
The guardians of rigor themselves said: "We must move faster."
Decision Tree: When to Use Rapid Review
40 hours
SR
Standard timeline
What risks do we accept?
Missing studies: Limited search may miss relevant evidence
Selection bias: Single screener may have systematic blind spots
Quality gaps: Abbreviated assessment may miss fatal flaws
Publication bias: Less grey literature means more bias
These risks must be explicitly stated in every rapid review.
And yet, consider this:
On June 4, 1944, Eisenhower faced an impossible decision. His meteorologist, Group Captain Stagg, couldn't guarantee good weather—only 80% confidence of a brief clearing on June 6. Waiting for certainty meant delaying the invasion by weeks, risking discovery.
Eisenhower decided: "OK, let's go." The 80% forecast was right. The invasion succeeded.
Stagg later said perfect forecasts were impossible—but actionable forecasts saved the war. Sometimes 80% certainty today beats 95% certainty too late.
You are General Eisenhower on June 4, 1944. Your meteorologist gives 80% confidence of a brief weather window on June 6. Waiting for certainty means delaying weeks and risking discovery of the invasion force.
The Tale of the Two Meta-Analyses
Two meta-analyses on the same question: "Do masks prevent COVID-19 transmission?"
The first pooled 14 observational studies: OR 0.35—masks reduce risk by 65%!
The second included only 3 RCTs: OR 0.91—no significant effect.
Same question. Same year. Opposite answers. Why?
Which studies you include determines what truth you find.
Observational Studies
Higher confounding risk
RCTs Only
Lower power, higher validity
In rapid reviews, your inclusion criteria ARE your conclusion.
Decision Tree: Observational vs. RCT Evidence
Higher certainty
State uncertainty clearly
But downgrade certainty
Module 1 Quiz
What is the most appropriate approach?
The Story of Two Cities in 1918:
In September 1918, Philadelphia and St. Louis both detected influenza cases. Philadelphia's health commissioner waited for more data before acting. On September 28, he allowed a massive parade—200,000 people. Within days, hospitals overflowed; 12,000 died in weeks.
St. Louis, with the same incomplete data, closed schools and banned gatherings immediately. Their death rate was half Philadelphia's.
The dim light of early action outperformed the bright light that came too late.
You are Philadelphia's health commissioner in September 1918. Cases of a deadly flu are appearing. A massive Liberty Loan parade is scheduled for September 28 with 200,000 expected attendees. What do you do?
The Tocilizumab Turnaround
February 2021: Multiple small trials of tocilizumab (an IL-6 inhibitor) showed no benefit. Headlines declared it useless. Some hospitals stopped using it.
June 2021: RECOVERY trial reports 4,116 patients. Mortality reduced by 14%. WHO immediately recommends it.
The small trials were underpowered. They found "no evidence of effect"—which is not "evidence of no effect." The large trial found the truth.
What is the difference between these two statements?
"No evidence of effect"
Cannot rule out benefit
"Evidence of no effect"
Confidently excludes benefit
Absence of evidence is not evidence of absence—unless your study was large enough to detect it.
Decision Tree: Is This Study Large Enough?
Trust negative results
"No effect" may be false negative
Calculate post-hoc power
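The distinction can be made concrete with the normal-approximation power calculation the decision tree alludes to. A minimal sketch (Python, standard library only; the event rates and sample sizes are hypothetical, chosen to mirror the small-trial-versus-RECOVERY contrast, not taken from the actual tocilizumab data):

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two proportions.

    p1, p2: assumed event rates in the two arms
    n_per_arm: patients per arm
    Normal approximation only; an illustration, not a trial-design tool.
    """
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_effect = abs(p1 - p2) / se
    return NormalDist().cdf(z_effect - z_alpha)

# Hypothetical rates: 20% control mortality, 17% with treatment
small = power_two_proportions(0.20, 0.17, n_per_arm=100)    # roughly 8% power
large = power_two_proportions(0.20, 0.17, n_per_arm=4000)   # roughly 93% power
```

At 100 patients per arm, a genuine 3-point mortality difference is detected less than one time in ten: "no evidence of effect" is exactly what an underpowered trial reports even when the effect is real.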
The Ventilator Reversal
Initial guidance: patients with low oxygen should be intubated quickly to prevent deterioration.
April 2020: "Intubate Later"
Clinicians observed that many patients tolerated low oxygen ("happy hypoxia"). Prone positioning often prevented the need for ventilation.
Result: Practice changed mid-pandemic based on real-world observation, not RCTs. Early intubation may have caused avoidable deaths.
When RCTs are impossible, observe carefully. Document systematically. Share transparently.
Speed without rigor is recklessness.
Rigor without speed is abandonment.
Module 2: The Method
Forty hours to truth—if you know the path.
The 40-Hour Rapid Review
Step 1: Define the Question (2 hours)
The PICO Framework—Sharper Than Ever
Population: Adults hospitalized with COVID-19 (narrow!)
Intervention: Dexamethasone 6mg daily
Comparator: Standard care / placebo
Outcome: 28-day mortality (primary only)
Step 2: Search Strategy (4 hours)
DON'T: Systematic Search
MEDLINE + EMBASE + Cochrane + CINAHL + PsycINFO + Web of Science + grey literature + hand searching
Time: 40+ hours
DO: Focused Search
PubMed + Cochrane Library + one preprint server (medRxiv)
Time: 4 hours
Document what you searched AND what you deliberately excluded.
The critical shortcut:
Single screening with verification.
This saves 50% of screening time while catching most errors.
Decision Tree: Quality Assessment Shortcuts
Use standard tool (RoB 2, ROBINS-I)
Key domains only
Key domains: randomization, blinding, missing data, selective reporting
The Final Step: Synthesis
You may not have searched comprehensively. You may have missed studies. A pooled estimate from an incomplete search can create false confidence.
Prefer narrative synthesis. If you must pool, state the limitations loudly.
PRISMA-RR: Your Reporting Checklist
PRISMA for Rapid Reviews Extension (2024)
When reporting rapid reviews, you MUST include:
- Shortcuts taken: What was streamlined vs. full SR?
- Time constraints: Why was rapid review necessary?
- Databases searched: And why others were excluded
- Screener count: Single vs. dual screening
- Date of search: Evidence current as of [DATE]
- Limitations section: What may have been missed
The Tale of the Remdesivir Reversal
The FDA granted Emergency Use Authorization: "Remdesivir shortens recovery time."
The WHO recommended against it: "No mortality benefit, high cost."
Same drug. Same evidence base. Opposite conclusions.
The difference? The FDA prioritized surrogate outcomes (recovery time). The WHO prioritized patient-centered outcomes (mortality).
Decision Tree: Choosing Outcomes in Rapid Reviews
Highest certainty
Note indirectness
State explicitly
Module 2 Quiz
What screening approach is appropriate?
Module 2 Quiz (2)
According to PRISMA-RR, which is NOT required in a rapid review report?
The Prone Positioning Discovery
For decades, ICU patients with respiratory failure lay on their backs. In COVID-19, clinicians noticed something remarkable.
Turning patients onto their stomachs—proning—dramatically improved oxygen levels. No drug required. No cost. Available everywhere.
Meta-analysis of 6 RCTs: Mortality reduced by 25% in severe ARDS patients.
Sometimes the breakthrough is not a molecule. Sometimes it's a position.
The Story of the Excel Error:
In 2010, Harvard economists Reinhart and Rogoff published influential research: countries with debt above 90% of GDP had negative growth. Governments worldwide cited it to justify austerity.
Three years later, a graduate student found their Excel spreadsheet excluded five countries due to a coding error. When corrected, the 90% threshold disappeared.
One spreadsheet error, unchecked, influenced policy affecting millions. The researchers who verify are as important as those who calculate.
You are a graduate student reviewing the famous Reinhart-Rogoff paper that governments are using to justify austerity. Do you bother checking the spreadsheet of Harvard professors?
Decision Tree: Narrative vs. Meta-Analysis
Pool with confidence
"May miss studies"
Don't pool apples and oranges
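Whether studies are "apples and oranges" is usually quantified with Higgins' I-squared statistic, the share of variability across studies due to real heterogeneity rather than chance. A minimal sketch (the effect estimates are invented for illustration):

```python
def i_squared(effects, std_errors):
    """Percent of variability attributable to between-study heterogeneity.

    effects: per-study effect estimates (e.g. log odds ratios)
    std_errors: their standard errors
    """
    weights = [1 / se ** 2 for se in std_errors]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    return 0.0 if q <= df else 100 * (q - df) / q

# Invented log odds ratios: three consistent studies vs four conflicting ones
consistent = i_squared([0.10, 0.12, 0.09], [0.3, 0.3, 0.3])
conflicting = i_squared([-0.8, 0.5, -0.1, 0.9], [0.1, 0.1, 0.1, 0.1])
```

When I-squared is high, a single pooled number hides real disagreement; narrative synthesis, or exploring why the studies differ, serves the reader better than a forced average.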
Forty hours to truth—if you know the path.
Module 3: The Disaster
A flawed study in a meta-analysis poisons all.
Have you not heard the tale of Marseille?
Of the paper that changed the world—and should not have?
March 17, 2020: The Paper
Gautret et al. — International Journal of Antimicrobial Agents
Claim: Hydroxychloroquine + azithromycin clears COVID-19 virus in 100% of patients
Submitted: March 16, 2020
Accepted: March 17, 2020
Peer review time: ONE DAY
The editor-in-chief was a colleague at the same institute as the authors.
What were the flaws?
The patients who got worse were simply removed from the analysis.
And yet, consider what followed:
March 28: FDA issues Emergency Use Authorization.
April 2020: Countries worldwide stockpile the drug. India bans exports.
Result: Patients with lupus and rheumatoid arthritis—who need hydroxychloroquine to live—cannot get their medications.
The Seven Deadly Flaws
1. Non-randomized, open-label design
2. Convenience control group from different hospitals
3. Patients who worsened excluded from analysis
4. Different PCR thresholds for treatment vs. control
5. Ethics approval granted AFTER trial started
6. Children included despite exclusion criteria
7. One-day peer review by conflicted editor
The Reckoning: December 2023
It remains the most-cited COVID-19 paper to be retracted.
The damage was done in days. Correction took years.
Decision Tree: Red Flags in Rapid Evidence
Verify before sharing
Check author/editor affiliations
The Convalescent Plasma Collapse
August 2020: The FDA grants Emergency Use Authorization for convalescent plasma, citing 35% mortality reduction.
The data came from a non-randomized study of 35,000 patients. No control group. Historical comparisons only.
By February 2021, seven RCTs showed no mortality benefit. The EUA was revised. 500,000+ doses had already been given.
Consider the social media amplification:
Misinformation spreads faster than corrections. This is the asymmetry you fight.
Inoculation: The Pre-Bunking Defense
Prebunking > Debunking: Warn people about manipulation tactics before they encounter false claims.
The technique: "You may hear claims that a small study proves X works. Here's why small studies often mislead..."
The evidence: Inoculation reduces susceptibility to misinformation by 20-30% (van der Linden et al., 2020).
Inoculation Messaging: Practical Templates
Three Prebunking Approaches
1. Logic-Based: "If someone claims 100% effectiveness, ask: was there a control group? A study without comparison proves nothing."
2. Source-Based: "Be wary of studies posted online before peer review—they haven't been checked by independent experts yet."
3. Emotional-Based: "Miracle cure claims exploit our hope. Real treatments show modest benefits with honest limitations."
Use these templates in public health communications, clinician training, and press releases.
Decision Tree: Evaluating Preprint Claims
Author-editor relationship?
Never cite as definitive
Module 3 Quiz
How many red flags does this study have?
Module 3 Quiz (2)
The Gautret hydroxychloroquine study received 3.1 million Twitter engagements. The retraction notice received 83. This illustrates:
A flawed study in a meta-analysis poisons all.
Module 4: The Fraud
Beware the meta-analysis built on sand.
Have you witnessed how fraud spreads through evidence?
How one lie becomes the foundation of many truths?
The Ivermectin Phenomenon
Governments in Latin America, Africa, and India distribute millions of doses. Patients demand prescriptions. Some take veterinary formulations.
But something was deeply wrong in the data.
The Elgazzar Study: Anatomy of Fraud
Egypt, 2020: The Largest Ivermectin Trial
Dr. Ahmed Elgazzar posted a preprint claiming ivermectin reduced COVID-19 mortality by 90%.
It was included in multiple meta-analyses and contributed 12.6% of the overall effect estimate for mortality.
Then a graduate student started checking the data.
What did they find?
Duplicated patient records, implausible summary statistics, and deaths recorded before the trial began.
July 15, 2021: The study was retracted for "ethical concerns."
The Cascade Effect
| Studies Included | Survival Benefit | Significance |
|---|---|---|
| All 12 studies | 51% reduction | Significant |
| Without Elgazzar | 38% reduction | Borderline |
| Without high-bias studies | 10% reduction | Not significant |
| Only low-bias studies | 4% reduction | Not significant |
One fraudulent study changed the conclusion from "no effect" to "miracle cure."
And Nature Medicine warned:
Meta-analyses based on summary data alone are inherently unreliable.
Decision Tree: Detecting Problematic Studies
Run sensitivity analysis without it
The Lesson of Ivermectin:
Always run your meta-analysis twice:
Once with all studies. Once without the most influential.
If one study changes your conclusion, your conclusion is fragile.
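The "run it twice" rule generalizes to a leave-one-out sensitivity analysis. A sketch using fixed-effect inverse-variance pooling of log odds ratios; the five studies and their numbers are invented to show the pattern, not the actual ivermectin data:

```python
from math import sqrt

def pool(studies):
    """Fixed-effect inverse-variance pooled log OR with a 95% CI."""
    weights = {name: 1 / se ** 2 for name, (_, se) in studies.items()}
    total = sum(weights.values())
    est = sum(weights[n] * studies[n][0] for n in studies) / total
    se = 1 / sqrt(total)
    return est, est - 1.96 * se, est + 1.96 * se

# Invented (log OR, SE) pairs; negative = fewer deaths. Study A is "too good".
studies = {
    "A": (-2.30, 0.40), "B": (-0.10, 0.35), "C": (0.05, 0.30),
    "D": (-0.20, 0.45), "E": (0.10, 0.40),
}

_, lo_all, hi_all = pool(studies)
significant_all = hi_all < 0  # CI excludes no-effect (0 on the log scale)

# Leave each study out in turn; flag any whose removal flips the conclusion
fragile = []
for name in studies:
    rest = {n: v for n, v in studies.items() if n != name}
    _, lo, hi = pool(rest)
    if (hi < 0) != significant_all:
        fragile.append(name)
```

Here the full pool looks significant, but removing the single extreme study erases the result: the conclusion rested on one set of numbers.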
The Tale of Jack Lawrence
Jack Lawrence was a master's student in London. He knew no Arabic. But he could spot duplicate rows in a spreadsheet.
He downloaded Elgazzar's data supplement. Row 148 was identical to row 11. Row 228 matched row 79. 79 patients appeared twice.
He tweeted his findings. Within days, preprint servers retracted the paper. Meta-analyses were revised. Policies changed.
One student with basic data skills did what peer review could not.
Decision Tree: Fraud Detection Checklist
(Identical means across groups)
May indicate fabrication
Deaths before trial = impossible
Check other signals
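Lawrence's check requires no statistics, only a duplicate scan. A sketch in pure Python (the patient records below are fabricated for illustration):

```python
from collections import Counter

def duplicated_rows(rows):
    """Return rows that appear more than once (verbatim duplicates)."""
    counts = Counter(tuple(r) for r in rows)
    return [row for row, n in counts.items() if n > 1]

# Fabricated records: (age, sex, baseline_score, outcome)
records = [
    (54, "M", 7.2, "recovered"),
    (61, "F", 8.1, "recovered"),
    (54, "M", 7.2, "recovered"),   # identical to the first row: a red flag
    (47, "F", 6.9, "died"),
]

suspects = duplicated_rows(records)
```

Real data-cleaning would also look for near-duplicates and impossible dates, but even this verbatim scan is the kind of check that exposed the repeated rows in the Elgazzar supplement.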
Module 4 Quiz
What should you do?
Module 4 Quiz (2)
The Elgazzar ivermectin study was detected as fraudulent because:
The TOGETHER Trial Disappointment
After small trials suggested fluvoxamine (an antidepressant) might prevent severe COVID-19, the world waited for the definitive answer.
The TOGETHER trial enrolled 1,497 patients in Brazil. Initial interim: 32% reduction in hospitalization!
Final results: only 5.1% of placebo patients were hospitalized vs 4.0% of fluvoxamine patients. The effect was smaller than hoped, and the absolute benefit was tiny.
Small absolute differences require enormous trials to detect. Most promising treatments fade when tested properly.
The Story of Power Posing:
In 2010, researchers claimed that "power poses" (standing like Superman) increased testosterone and risk-taking. The study went viral—TED talks, business books, millions of believers. Multiple studies seemed to replicate it.
Then larger, pre-registered replications failed. The original co-author publicly disowned the findings. Many small, flexible studies had found what they wanted to find.
Weak evidence multiplied felt like strong evidence—until rigorous replication revealed it was noise.
You are a journal editor receiving a large, pre-registered replication study that fails to reproduce the popular power posing effect. The original study has millions of believers and a famous TED talk.
Decision Tree: Reading a Forest Plot
Cannot exclude no effect
Higher confidence
True effect varies—explore why
Beware the meta-analysis built on sand.
Module 5: The Triumph
From protocol to saving lives: 100 days.
Now hear the story of how it should be done.
Of how speed and rigor found harmony.
The RECOVERY Trial
As hydroxychloroquine chaos spread, British researchers launched the world's largest COVID-19 treatment trial. It was designed to be fast AND rigorous.
Within 6 weeks of funding, patients were being enrolled.
How did they achieve speed without sacrificing quality?
An adaptive platform trial: multiple treatments tested simultaneously.
June 16, 2020: The Announcement
Dexamethasone Results
Ventilated patients: 29% mortality reduction (NNT = 8)
Oxygen therapy: 20% mortality reduction (NNT = 25)
No oxygen needed: No benefit (possible harm)
Cost per treatment: ~$6 for a common steroid.
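The NNT figures follow directly from the absolute risk reduction. A worked sketch (the mortality rates are illustrative values chosen to reproduce the NNTs quoted above, not the exact trial data):

```python
def nnt(control_risk, treated_risk):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_risk - treated_risk
    return round(1 / arr)

# Illustrative rates consistent with the NNTs quoted above
ventilated = nnt(0.410, 0.285)  # ARR 12.5 points -> treat 8 to save one life
oxygen = nnt(0.260, 0.220)      # ARR 4 points -> treat 25 to save one life
```

Note that NNT depends on the baseline risk: the same drug yields NNT 8 in the sickest patients and NNT 25 in the moderately ill, which is why RECOVERY reported the strata separately.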
And the world responded—correctly this time.
An estimated 1,000,000 lives saved globally by March 2021, including 22,000 in the UK alone. All from a $6 drug that had been available for 60 years.
The difference was rigorous evidence, rapidly produced.
What Made RECOVERY Different?
1. Randomized controlled design (not observational)
2. Pre-specified outcomes and analysis plan
3. Massive scale (statistical power)
4. Adaptive platform (dropped arms early if no signal)
5. Simplified enrollment (one-page consent in emergency)
6. Independent data monitoring committee
The Lesson of RECOVERY:
Speed does not require cutting corners. It requires cutting waste.
Simplified consent forms. Adaptive designs. Platform trials that test multiple treatments at once.
Rigor and speed are not enemies. They are partners waiting to be introduced.
The SOLIDARITY Contrast
Small Studies Combined
Gautret: 36 patients
Elgazzar: ~400 patients (fraud)
RECOVERY/SOLIDARITY
RECOVERY: 11,500 patients
SOLIDARITY: 11,330 patients
Platform trials replace the noise of many small studies with the signal of one large one.
Decision Tree: Should You Wait for the Big Trial?
Higher quality evidence coming
Update when trial reports
Best available evidence
Module 5 Quiz
By the time of its dexamethasone result, the RECOVERY trial had enrolled over 11,500 patients. What was the primary reason this was possible?
Module 5 Quiz (2)
Dexamethasone for COVID-19 saved an estimated 1 million lives globally. The drug cost per treatment was:
The Azithromycin Abandonment
The Gautret study combined hydroxychloroquine with azithromycin. Suddenly, azithromycin was everywhere—added to treatment protocols worldwide.
But azithromycin had never been tested alone. The combination was never proven. And azithromycin causes heart arrhythmias.
July 2020: RECOVERY randomizes 7,763 patients. Result: No benefit. Median hospital stay identical. Mortality identical.
Months of exposure to cardiac risk for zero benefit—because the original study was never questioned.
The Story of Tacoma Narrows:
In 1940, the Tacoma Narrows Bridge was built quickly and cheaply. Engineers skipped wind tunnel testing to save time. Four months after opening, moderate winds set the bridge oscillating. It twisted violently and collapsed—captured on film that engineering students still study.
A nearby older bridge, built slowly with extensive testing, still stands. A third lesson came later: modern bridges use rapid computational testing—fast methods that don't skip verification.
Speed without testing destroys. Testing without speed delays. Rapid testing saves.
You are an engineer in 1940 designing the Tacoma Narrows Bridge. Budget is tight and the deadline is firm. Wind tunnel testing would add weeks and cost. What do you do?
Decision Tree: Evaluating Platform Trial Arms
Don't use this treatment
Contraindicated
Update guidelines
From protocol to saving lives: 100 days.
Module 6: The Living
Living evidence serves the living.
Have you considered evidence that never stops growing?
Reviews that breathe with the epidemic itself?
The Living Systematic Review
A traditional review is a photograph: a snapshot of the evidence at one moment. A living review is a video: continuously updated as new evidence emerges.
In a pandemic, the photograph is obsolete before it's printed.
PAHO's Living Review: A Case Study
Pan American Health Organization, April 2020 - Present
Launched: April 2020
Interventions assessed: 305
RCTs included: 924
Updates published: 48
Sources monitored: 40+ databases, including preprint servers
When should a review become "living"?
1. The topic is a priority for decision-making
2. New evidence is emerging rapidly
3. Current certainty is low
4. New evidence is likely to change conclusions
If all four are true, a living approach is warranted.
The Challenge: Keeping Reviews Alive
Decision Tree: When to Update a Living Review
Note: "No new evidence" is still information
The Tale of the Molnupiravir Momentum
October 2021: Merck announces interim results—50% reduction in hospitalization!
Stock prices soar. Countries pre-order millions of courses. Headlines celebrate.
November 2021: Final results published—30% reduction. Still significant, but...
December 2021: Updated analysis shows only 6.8% of control arm hospitalized, not 14.1%.
The denominator changed. The magic faded. The lesson: interim results are not final results.
And then came Paxlovid—
Molnupiravir
Low-risk patients
Mutagenic concerns
Paxlovid
High-risk patients
Drug interactions
Living reviews must distinguish interim from final, preliminary from confirmed.
Decision Tree: Retiring a Living Review
Archive with final date
Low certainty needs monitoring
Active evidence field
Module 6 Quiz
A study found that 65% of "living" systematic reviews on COVID-19 were never updated. This is problematic because:
Module 6 Quiz (2)
What should you do?
The Bamlanivimab Withdrawal
November 2020: FDA grants EUA for bamlanivimab, a monoclonal antibody. Early data showed promise.
March 2021: EUA revoked. Why? Variant escape. The virus mutated. The antibody no longer bound.
A living review tracking monoclonal antibodies would have flagged this in weeks. The static guidance took months to change.
In a pandemic, the enemy evolves. Your evidence must evolve faster.
The Story of Flu Surveillance:
The CDC monitors influenza through sentinel surveillance. Early systems checked hospitals weekly—too slow to catch surges. Constant real-time monitoring overwhelmed analysts with noise.
The solution: threshold-based surveillance. Track continuously, but trigger alerts only when cases exceed seasonal baselines. This "living review" approach caught H1N1 in 2009 weeks before traditional methods.
The watchman who monitors thresholds outperforms both the annual inspector and the exhausted hourly checker.
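The threshold idea is simple enough to sketch. A minimal Python version (the weekly counts, the window length, and the trigger factor are all invented parameters): alert only when this week's count exceeds a multiple of the trailing baseline.

```python
def threshold_alerts(weekly_counts, window=4, factor=2.0):
    """Indices of weeks whose count exceeds `factor` times the trailing mean."""
    alerts = []
    for week in range(window, len(weekly_counts)):
        baseline = sum(weekly_counts[week - window:week]) / window
        if weekly_counts[week] > factor * baseline:
            alerts.append(week)
    return alerts

# Invented counts: a quiet season, then a surge in weeks 6-7
counts = [10, 12, 9, 11, 10, 11, 48, 95]
surge_weeks = threshold_alerts(counts)
```

Quiet weeks produce no alerts, sparing the analysts, while a genuine surge fires within one reporting cycle. The same trigger logic applies to living reviews: monitor continuously, update when the evidence crosses a threshold.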
You are designing a disease surveillance system. How frequently should you review the evidence and trigger alerts?
Decision Tree: Is This New Study Significant Enough to Trigger an Update?
Certainty change = guideline change
Direction change is major
Update at next scheduled
Living evidence serves the living.
Module 7: The Quality
Transparency reveals; concealment deceives.
How do you judge a rapid review?
When time prevents perfection, what is "good enough"?
GRADE in 40 Hours: Is It Possible?
Full GRADE Assessment
Risk of bias + Inconsistency + Indirectness + Imprecision + Publication bias
Time: 8-16 hours per outcome
Rapid GRADE
Risk of bias + Imprecision + Major concerns only
Time: 2-4 hours per outcome
Focus on the domains most likely to affect certainty.
The Non-Negotiables
1. Document everything: What you searched, what you excluded, why
2. State limitations explicitly: Not buried in text—front and center
3. Distinguish rapid from systematic: Never pretend comprehensiveness
4. Date-stamp everything: Evidence current as of [DATE]
5. Plan for updates: A rapid review is not the final word
The Story of Climate Uncertainty:
IPCC climate reports could claim certainty to seem authoritative. Instead, they explicitly quantify uncertainty: "likely (66-100%)", "very likely (90-100%)", "virtually certain (99-100%)".
When the 2021 report stated warming is "unequivocal" but ice sheet collapse timing is "low confidence," policymakers knew exactly where to trust the map—and where dragons might lurk.
This transparency increased, not decreased, the reports' credibility and policy impact.
You are writing the IPCC climate report. Should you claim certainty to seem more authoritative, or acknowledge uncertainty where it exists?
How to READ a Rapid Review (Consumer Guide)
Five Questions to Ask Before Trusting
- When was the search done? — If >3 months old in a fast-moving field, it's outdated
- What databases were searched? — PubMed alone may miss 30% of relevant studies
- Was screening single or dual? — Single screening may miss 5-10% of relevant studies
- Are limitations stated explicitly? — Hidden limitations = hidden agendas
- Is there a plan for updates? — Rapid reviews should be living or have update triggers
Shared Decision-Making Under Uncertainty
1. Name the uncertainty: "We have some evidence, but it's early/limited/conflicting."
2. Quantify when possible: "Studies suggest 20-40% might benefit, but we're not sure."
3. Explain what we're watching: "A larger trial reports next month."
4. Offer structured choice: "Given this uncertainty, we can try it or wait. What matters most to you?"
The Conversation Script
"I've looked at the latest evidence. Here's what we know: this treatment may help—some studies show a 30% improvement. But the studies are small, and some showed no benefit.
"I want to be honest with you. If we wait 2 months, we'll have better data from a larger trial. But I also understand you're suffering now.
"What would help you make this decision? Would you like to try it now knowing the uncertainty, or would you prefer to wait for stronger evidence?"
Shared decision-making is not about having all the answers. It's about sharing what we don't know honestly.
Network Meta-Analysis: A Word of Caution
What Can Go Wrong:
• Transitivity violation: Populations in A-vs-B trials differ from B-vs-C trials
• Inconsistency: Direct and indirect evidence contradict
• Sparse networks: Single-study connections carry entire inference
In Rapid Reviews: Network meta-analysis typically requires more time and expertise than available. When used, state transitivity assumptions explicitly. When skipped, explain why indirect comparisons weren't attempted.
Decision Tree: Should You Trust This Rapid Review?
Apply with stated caveats
Cannot assess currency
Pretending comprehensiveness
The Tale of the Vitamin D Delusion
Dozens of observational studies showed: low vitamin D → worse COVID outcomes.
Meta-analyses of these studies showed: "40% mortality reduction with supplementation!"
Then came the RCTs. One after another: no benefit. No benefit. No benefit.
The observers had found correlation. Sick people stay indoors → low vitamin D AND worse outcomes. The sun was the confounder, not the cure.
Module 7 Quiz
A rapid review should ALWAYS include:
Module 7 Quiz (2)
What should you conclude?
Module 7 Quiz (3)
Vitamin D observational studies showed 40% mortality reduction. RCTs showed no benefit. This discrepancy is best explained by:
Module 7 Quiz (4)
Inoculation messaging (prebunking) is effective because it:
Module 7 Quiz (5)
The best approach is:
The Nursing Home Tragedy
Vaccines arrived. Who should get them first? The evidence was clear: nursing home residents had the highest mortality.
But some argued for essential workers. Others for healthcare staff. The debate delayed rollout.
Result: 60,000+ US nursing home deaths occurred after vaccines were available but before rollout completed.
Rapid evidence synthesis on prioritization could have saved weeks. Weeks meant thousands of lives.
The Story of Challenger:
On January 27, 1986, engineers told NASA the Challenger's O-rings might fail in cold weather. But they had incomplete data—only some cold-launch records, ambiguous failure patterns. Manager Larry Mulloy asked for proof of danger; engineers could only show uncertainty.
Mulloy launched. The O-rings failed at 73 seconds. Later analysis showed the data clearly predicted failure—but only when analysts knew what to look for.
Partial evidence, dismissed as insufficient, contained the warning that could have saved seven lives.
You are an engineer at NASA on January 27, 1986. You have incomplete data suggesting O-rings might fail in cold weather. The launch is scheduled for tomorrow in freezing conditions. Your manager demands proof of danger.
Decision Tree: Assessing Funding Source Bias
Outcome switching limited
Selective reporting possible
But still assess RoB
The Preprint Paradox
December 2020: Pfizer vaccine efficacy data appeared on medRxiv before publication. Within hours, regulators had begun review. The world could plan.
January 2021: A preprint claimed vaccines caused deaths in nursing homes. It was methodologically flawed. It went viral. Vaccine hesitancy increased.
Same platform. Different outcomes. Preprints accelerate both truth and lies. The reader bears the burden of judgment.
Decision Tree: When Is a Preprint Acceptable to Cite?
"Preprint, not peer-reviewed"
Wait for verification
No justification for preprint
Regional Adaptation: MENA Context
During COVID-19, Gulf Cooperation Council (GCC) nations faced a critical gap: limited local rapid review capacity meant reliance on Western evidence that sometimes missed regional factors—Ramadan fasting, multi-generational households, healthcare worker ratios, and different population demographics.
Lesson: Adapt the 40-hour protocol to your context. Arabic search terms in regional databases. Local implementation considerations. Regional case studies for training.
Key Adaptations:
• Include regional databases (IMEMR, EMBASE Arabic)
• Consider local healthcare infrastructure capacity
• Adapt communication for local stakeholders
Transparency reveals; concealment deceives.
Final Assessment
You have journeyed through crisis and truth.
The Five Principles
1. Speed without rigor is recklessness.
2. Rigor without speed is abandonment.
3. A flawed study in a meta-analysis poisons all.
4. Transparency reveals; concealment deceives.
5. Living evidence serves the living.
Final Quiz (1/5)
The Gautret hydroxychloroquine study was retracted primarily because:
Final Quiz (2/5)
When the fraudulent Elgazzar ivermectin study was removed, the mortality benefit:
Final Quiz (3/5)
The RECOVERY trial achieved results in approximately:
Final Quiz (4/5)
Of 97 "living" systematic reviews on COVID-19, what percentage were never updated?
Final Quiz (5/5)
In a rapid review, the single most important quality requirement is:
You have completed the journey.
Go forth and synthesize with wisdom.
Remember the lessons of 2020:
Speed without truth killed. Truth without speed abandoned.
The balance is the art.