
Workflow · PROGRESS framework

Prognostic review

Systematic reviews of prognostic factors, prediction models, and treatment-effect modifiers. The methods diverge from intervention reviews at almost every step — the question (PICOTS), the appraisal (QUIPS / PROBAST / CHARMS), and the synthesis (calibration, discrimination, RMST). Defaults below follow the PROGRESS framework (Hemingway 2013; Riley 2019).

  1. Decide the prognostic question type

    Four PROGRESS types: (1) overall prognosis (PROGRESS-1), (2) prognostic factor research (PROGRESS-2), (3) prognostic-model development/validation (PROGRESS-3), (4) stratified medicine — predicting differential treatment effects across patient subgroups (PROGRESS-4). Each demands a different appraisal tool and synthesis. Frame the question as PICOTS — Population, Index, Comparator, Outcomes, Timing, Setting.

    Four research types: Hemingway H et al. BMJ 2013;346:e5595 (PROGRESS-1); Riley RD et al. PLoS Med 2013;10:e1001380 (PROGRESS-2); Steyerberg EW et al. PLoS Med 2013;10:e1001381 (PROGRESS-3); Hingorani AD et al. BMJ 2013;346:e5793 (PROGRESS-4). PICOTS framework: Riley 2013 (PROGRESS-2) + AHRQ Methods Guide. PICOTS replaces PICO because timing of outcomes drives both eligibility and effect estimation.
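    A PICOTS question can be captured as a small structured record so every eligibility decision traces back to one explicit framing. A minimal sketch; the class name and all example values are invented for illustration, not taken from any published review:

```python
from dataclasses import dataclass

# Hypothetical record type for framing a prognostic review question as PICOTS.
# Field names follow the framework in the text; the example values are invented.
@dataclass
class PICOTS:
    population: str   # who the prognosis applies to
    index: str        # prognostic factor or model under review
    comparator: str   # alternative factor/model, or "" if none
    outcomes: list    # events of interest
    timing: str       # when predictors are measured and outcomes assessed
    setting: str      # intended clinical context

question = PICOTS(
    population="adults with stable coronary artery disease",
    index="high-sensitivity troponin T (prognostic factor)",
    comparator="",  # PROGRESS-2 questions often have no comparator
    outcomes=["cardiovascular death", "non-fatal MI"],
    timing="predictor at baseline; outcomes over 5-year follow-up",
    setting="outpatient cardiology",
)
```

    Writing the timing field out explicitly is the point of PICOTS over PICO: it forces the review team to fix the outcome horizon before screening starts.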

  2. Search with prognostic-specific filters

    Use validated prognostic search filters (Geersing 2012 for prediction models; Wong 2003 for prognostic factors). Index terms: cohort, follow-up, predictor, calibration, c-statistic, discrimination.

    Generic intervention filters are tuned to randomised trials and miss most of the prognostic literature; validated prognostic filters substantially improve both sensitivity and precision.
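    Mechanically, applying a filter means AND-ing a topic block with the published filter block. A hedged sketch: the filter constant below is a placeholder standing in for a validated filter, not the verbatim Geersing 2012 string, and the topic query is invented:

```python
# Placeholder filter block -- paste the published, validated filter string
# (e.g. Geersing 2012 for prediction models) here before a real search.
FILTER_PREDICTION_MODELS = (
    '("prediction model" OR "prognostic model" OR calibration OR '
    '"c-statistic" OR discrimination)'
)

def build_query(topic: str, filter_block: str) -> str:
    """AND a parenthesised topic query with a search-filter block."""
    return f"({topic}) AND {filter_block}"

query = build_query('"heart failure" AND mortality', FILTER_PREDICTION_MODELS)
```

    Keeping the filter as a single opaque block avoids accidentally editing a validated filter, which would void its measured performance.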

  3. Screen & extract with CHARMS

    Use the CHARMS checklist (Moons 2014) for prediction-model studies — 11 domains covering source of data, participants, predictors, outcomes, sample size, missing data, and model performance. For prognostic-factor reviews, extract effect estimates (HR, OR, RR) with CI, adjustment set, and follow-up time.

    Moons KGM, et al. PLoS Med 2014;11:e1001744. Extraction must capture the adjustment set: an adjusted estimate is only interpretable relative to the confounders it controls for, so the adjustment set is part of the prognostic-factor signal itself.
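    An extraction record for prognostic-factor studies can carry the estimate, its CI, the adjustment set, and follow-up together, and recover the log-scale standard error needed later for pooling. A sketch under the assumption of a symmetric 95% CI on the log scale; the study name and values are invented:

```python
import math
from dataclasses import dataclass

@dataclass
class FactorEstimate:
    study: str
    hr: float              # adjusted hazard ratio
    ci_low: float          # 95% CI bounds on the HR scale
    ci_high: float
    adjustment_set: tuple  # covariates the HR is adjusted for
    followup_years: float

    def log_hr(self) -> float:
        return math.log(self.hr)

    def se_log_hr(self) -> float:
        # Recover the standard error on the log scale from a 95% CI:
        # SE = (ln(upper) - ln(lower)) / (2 * 1.96)
        return (math.log(self.ci_high) - math.log(self.ci_low)) / (2 * 1.96)

est = FactorEstimate("Smith 2018", hr=1.8, ci_low=1.2, ci_high=2.7,
                     adjustment_set=("age", "sex", "NYHA class"),
                     followup_years=5.0)
```

    Storing the adjustment set as data, not as a footnote, is what makes the adjusted-vs-unadjusted stratification in step 5 possible.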

  4. Risk of bias — QUIPS or PROBAST

    QUIPS (Hayden 2013) for prognostic-factor studies — 6 domains: study participation, attrition, prognostic factor measurement, outcome measurement, study confounding, statistical analysis & reporting. PROBAST (Wolff 2019) for prediction-model studies — 4 domains and 20 signalling questions in total (Participants 2, Predictors 3, Outcome 6, Analysis 9).

    Choose by question type, not by study design. Generic RoB tools (RoB 2, ROBINS-I) miss prognostic-specific concerns such as measurement of the prognostic factor and adequacy of confounding adjustment.

    QUIPS / PROBAST are the right tools here but are not yet bundled in allmeta: record domain ratings in a spreadsheet and visualise the per-domain matrix with RoB Traffic Light. Do not substitute ROBINS-I; it appraises non-randomised intervention studies, not prognostic-factor or prediction-model studies.
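    The spreadsheet workflow above amounts to one row per study and one column per QUIPS domain. A minimal sketch: the domains come from the text, the study names and ratings are invented, and the CSV layout is a generic wide matrix (check your visualiser's expected template before importing):

```python
import csv
import io

QUIPS_DOMAINS = [
    "study participation", "attrition", "prognostic factor measurement",
    "outcome measurement", "study confounding",
    "statistical analysis & reporting",
]

# Invented example ratings, one list per study, aligned with QUIPS_DOMAINS.
ratings = {
    "Smith 2018": ["low", "moderate", "low", "low", "high", "moderate"],
    "Lee 2020":   ["low", "low", "moderate", "low", "moderate", "low"],
}

def to_csv(ratings: dict) -> str:
    """Write the per-domain matrix as a wide CSV (study x domain)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["study"] + QUIPS_DOMAINS)
    for study, row in ratings.items():
        writer.writerow([study] + row)
    return buf.getvalue()

def high_risk_counts(ratings: dict) -> dict:
    """Count 'high' ratings per domain -- a quick check before visualising."""
    return {d: sum(r[i] == "high" for r in ratings.values())
            for i, d in enumerate(QUIPS_DOMAINS)}
```

    The per-domain counts make it obvious which domain (often study confounding) will dominate the downgrading decisions in step 6.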

  5. Synthesise the right summary measure

    Prognostic-factor reviews pool adjusted HRs/ORs across studies: random-effects on the log scale, REML τ² with the HKSJ small-sample correction. Prediction-model reviews pool calibration (O/E ratio, calibration slope) and discrimination (c-statistic); see Riley 2017 / Debray 2019. Pool adjusted and unadjusted estimates separately, and state which set each summary comes from.

    Mixing adjusted and unadjusted estimates in a single pool is the most common methodological error in prognostic-factor reviews. Stratify by degree of adjustment; where stratification leaves too few comparable studies to pool, fall back to narrative synthesis.
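    The pooling step above can be sketched numerically. One deliberate simplification: this uses the closed-form DerSimonian-Laird τ² instead of REML (which requires an iterative fit), combined with the HKSJ variance rescaling and a t-based CI as the text describes; the function name and the example inputs are invented:

```python
import math
import numpy as np
from scipy.stats import t as t_dist

def pool_log_hr(log_hrs, ses, alpha=0.05):
    """Random-effects pool of log hazard ratios with the HKSJ adjustment.

    Sketch only: DerSimonian-Laird tau^2 stands in for REML.
    """
    y = np.asarray(log_hrs, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2
    k = len(y)
    w = 1.0 / v                                 # inverse-variance weights
    mu_fe = np.sum(w * y) / np.sum(w)           # fixed-effect mean
    q = np.sum(w * (y - mu_fe) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # DL between-study variance
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    # HKSJ: rescale the variance by the weighted residual mean square,
    # then use a t-distribution with k-1 degrees of freedom.
    var_hksj = np.sum(w_re * (y - mu) ** 2) / ((k - 1) * np.sum(w_re))
    half = t_dist.ppf(1 - alpha / 2, k - 1) * math.sqrt(var_hksj)
    return {"hr": math.exp(mu),
            "ci": (math.exp(mu - half), math.exp(mu + half)),
            "tau2": tau2}
```

    The same machinery pools logit-transformed c-statistics or log O/E ratios for prediction-model reviews: transform, pool, back-transform.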

  6. Grade certainty & report

    GRADE for prognosis differs (Foroutan 2020): start at high certainty for phase-1/2 prognostic-factor studies, then rate down across the five standard domains (risk of bias, inconsistency, indirectness, imprecision, publication bias). Render the PRISMA 2020 flow diagram. Report adjusted and unadjusted estimates separately and state the timing of the prognostic-factor measurement.

    Foroutan F, et al. J Clin Epidemiol 2020;121:62-70. GRADE for prognosis is its own framework — do not import the intervention version uncritically.

Defaults follow the PROGRESS framework (Hemingway 2013, Riley 2013, Riley 2019, Foroutan 2020). For prognostic prediction-model reviews, also consult the TRIPOD-Cluster reporting guideline (Debray 2023). Need a full systematic review instead? See systematic review. Need it fast? See rapid review. Full 26-course collection.