CT.gov Completion-Delay Debt
2026-03-29 | full-registry CT.gov audit | plots, figures, and E156 bundle

Do short-cycle trials actually report faster on CT.gov?

A standalone public project on registration-to-completion timing, showing that fast-cycle studies can carry the heaviest reporting debt.

Headline figures: 0-year interval, 85.7% no-results; 6-10-year interval, 57.6%; ghost protocols, 54.1%. Treatment improves with lag.

Project

This page takes the temporal-trend logic from the earlier evidence-decay work and turns it into an operational timing question: what happens when a trial moves from first submission to completion very quickly?

The shortest submission-to-completion intervals are the quietest part of the older CT.gov record.

Completion-delay comparison

A standalone E156 project on how registration-to-completion timing maps onto older-study hiddenness and fast-cycle reporting debt.

0 years: 78,719 fast-cycle studies
0-year no-results: 85.7% (2-year no-results rate)
6-10-year no-results: 57.6% (longer-cycle benchmark)
11+ years visible: 28.4% (small long-lag tail)

No-results by delay: 2-year no-results rate by registration-to-completion interval

0 years      85.7%
1 year       72.6%
2-3 years    66.0%
4-5 years    61.6%
6-10 years   57.6%
11+ years    48.0%
The same-year bucket is visibly worse than every longer-delay bucket in the series.
This is the operational mirror of the evidence-decay work: not older evidence rotting, but fast-cycle studies failing to become visible quickly.
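The bucketing behind the delay table can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: the record fields (first_submitted, completed, results_posted) and the whole-year delay arithmetic are assumptions, not the exact CT.gov export columns.

```python
from datetime import date

def delay_bucket(years: int) -> str:
    """Map a submission-to-completion interval (whole years) to a bucket."""
    if years <= 0:
        return "0 years"
    if years == 1:
        return "1 year"
    if years <= 3:
        return "2-3 years"
    if years <= 5:
        return "4-5 years"
    if years <= 10:
        return "6-10 years"
    return "11+ years"

def no_results_rate_by_delay(trials):
    """2-year no-results rate per delay bucket.

    Each trial is a dict with first_submitted / completed dates and a
    results_posted date (or None). A trial counts as "no results" if
    nothing was posted within 2 years of completion. Field names are
    hypothetical placeholders for the registry export.
    """
    counts = {}  # bucket -> [no_results_count, total_count]
    for t in trials:
        years = t["completed"].year - t["first_submitted"].year
        bucket = delay_bucket(years)
        posted = t["results_posted"]
        missing = posted is None or (posted - t["completed"]).days > 2 * 365
        tally = counts.setdefault(bucket, [0, 0])
        tally[0] += int(missing)
        tally[1] += 1
    return {b: n / total for b, (n, total) in counts.items()}
```

For example, a trial registered and completed in the same year with no posted results lands in the "0 years" bucket as a no-results record, while a 7-year-cycle trial with results posted six months after completion lands in "6-10 years" as visible.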

Across The Series

The split projects are meant to be read together because each isolates a different dimension of registry opacity rather than forcing every question into one leaderboard.

Industry
CT.gov Industry Disclosure Gap

Industry-focused missing-results stock, sponsor backlogs, and structural sparsity inside CT.gov.

Sponsor Classes
CT.gov Sponsor-Class Hiddenness

Sponsor-class comparisons on rate, stock, and structural hiddenness rather than one flattened ranking.

Phases
CT.gov Phase Reporting Gap

Phase-by-phase disclosure gaps showing how silence changes along the development pathway.

Structural
CT.gov Structural Missingness

Field-level missingness across publication links, IPD statements, descriptions, and locations.

Visibility
CT.gov Evidence Visibility Gap

Results-plus-publication visibility states showing how many older trials are fully visible, partly visible, or ghosted.

Cohorts
CT.gov Completion Cohort Debt

Completion-era reporting debt showing how older eligible cohorts drift on no-results and ghost-protocol rates.

Conditions
CT.gov Condition Hiddenness Map

Keyword-classified therapeutic-area hiddenness mapping across common condition families.

Concentration
CT.gov Sponsor Backlog Concentration

Concentration and inequality analysis showing how much unresolved stock sits inside a thin sponsor slice.

Rule Eras
CT.gov Rule-Era Reporting Gap

Policy-era comparisons across pre-FDAAA, FDAAA, and later CT.gov completion cohorts.

PubMed Audit
CT.gov Publication Undercount Audit

Sample-based external PubMed NCT audit testing how often CT.gov no-link records hide an external paper trail.

Oncology
CT.gov Oncology Hiddenness

Oncology-specific CT.gov hiddenness showing where cancer-trial stock, phases, and sponsors still go quiet.

Cardiovascular
CT.gov Cardiovascular Hiddenness

Cardiovascular CT.gov hiddenness showing how heart and vascular studies remain quiet across major phases and sponsors.

Metabolic
CT.gov Metabolic Hiddenness

Metabolic CT.gov hiddenness across obesity, diabetes, and related trial portfolios with large late-phase and NA stock.

Size
CT.gov Enrollment-Size Gap

Enrollment-size gradients showing how older small trials remain much quieter than larger registered studies.

Geography
CT.gov Geography-Scale Visibility

Site and country footprint analysis showing how larger trial geographies map onto much better public visibility.

Purpose
CT.gov Design-Purpose Hiddenness

Primary-purpose and allocation analysis showing which trial intents remain most obscured on CT.gov.

Architecture
CT.gov Trial-Architecture Gap

Arm-count and intervention-count analysis showing simpler trial architectures are often the quietest.