Mahmood Ahmad
Tahir Heart Institute
author@example.com

CT.gov Stopped-Trial Disclosure Gap

How much worse do stopped trials look on ClinicalTrials.gov than completed trials once older closed interventional studies are grouped by final status? We analysed 249,507 eligible older closed interventional studies from the March 29, 2026 full-registry snapshot and isolated completed, terminated, withdrawn, and suspended records. The project compares two-year no-results rates, ghost-protocol rates, visible shares, and reason-missing contrasts across final statuses and stopped-study subgroups. Withdrawn studies reach a 100.0 percent no-results rate and an 81.9 percent ghost-protocol rate. Suspended studies reach 99.3 percent no results, terminated studies 58.3 percent, and stopped studies with missing termination reasons rise to 82.1 percent no results. Stopping a trial does not merely change its status: it sharply deepens the risk that the public record stays silent or structurally thin, especially when reason fields are already absent and the final status is anything other than completed. Final-status labels and missing reason fields are registry entries; they do not adjudicate operational history or legal reporting obligations.
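The core comparison above can be sketched as a grouped-rate computation. This is a minimal illustration, not the project's actual pipeline: the flat record shape, the field names (overallStatus, primaryCompletionDate, resultsFirstPostDate), and the 730-day operationalization of "no results at two years" are all assumptions for the sketch.

```python
from datetime import date

def no_results_rate_by_status(studies, as_of=date(2026, 3, 29)):
    """Group eligible closed studies by final status and return, per status,
    the share with no results posted within 730 days of primary completion.

    Assumed record shape (hypothetical, flatter than real CT.gov records):
      {"overallStatus": str, "primaryCompletionDate": date,
       "resultsFirstPostDate": date or None}
    """
    counts = {}  # status -> [no_results_count, total_count]
    for s in studies:
        completed_on = s["primaryCompletionDate"]
        # Keep only "older" studies whose two-year window has fully elapsed.
        if (as_of - completed_on).days < 730:
            continue
        posted = s.get("resultsFirstPostDate")
        no_results = posted is None or (posted - completed_on).days > 730
        tally = counts.setdefault(s["overallStatus"], [0, 0])
        tally[0] += int(no_results)
        tally[1] += 1
    return {status: nr / total for status, (nr, total) in counts.items()}
```

On toy input this reproduces the shape of the headline contrast: a withdrawn record with no posted results scores 1.0 for its group, while a completed group mixes posted and unposted records.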

Outside Notes

Type: methods
Primary estimand: 2-year no-results rate across final-status groups among eligible older CT.gov studies
App: CT.gov Stopped-Trial Disclosure Gap dashboard
Data: 249,507 eligible older closed interventional studies grouped by final status and stopped-study reason fields
Code: https://github.com/mahmood726-cyber/ctgov-stopped-trial-disclosure-gap
Version: 1.0.0
Validation: FULL REGISTRY RUN
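A full-registry run like the one described above would page through the ClinicalTrials.gov API v2 studies endpoint filtered to the four final statuses analysed here. The sketch below only builds a request URL; the endpoint path and parameter names (filter.overallStatus, pageSize, pageToken) follow the public v2 documentation as best recalled and should be verified against current docs, and the query.term search expression is an assumption.

```python
import urllib.parse

BASE = "https://clinicaltrials.gov/api/v2/studies"
STATUSES = ["COMPLETED", "TERMINATED", "WITHDRAWN", "SUSPENDED"]

def studies_url(page_token=None, page_size=1000):
    """Build one paged request URL against the CT.gov API v2 studies
    endpoint, restricted to the final statuses compared in this analysis.
    Parameter names are assumptions from the public v2 docs."""
    params = {
        "filter.overallStatus": ",".join(STATUSES),
        # Assumed search expression restricting to interventional studies.
        "query.term": "AREA[StudyType]INTERVENTIONAL",
        "pageSize": str(page_size),
    }
    if page_token:
        # The v2 API returns a nextPageToken to continue pagination.
        params["pageToken"] = page_token
    return BASE + "?" + urllib.parse.urlencode(params)
```

In practice the caller would loop, following each response's next-page token until the registry snapshot is exhausted, then feed the records into the grouped-rate summary.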

References

1. ClinicalTrials.gov API v2. National Library of Medicine. Accessed March 29, 2026.
2. Zarin DA, Tse T, Williams RJ, Carr S. Trial reporting in ClinicalTrials.gov. N Engl J Med. 2016;375(20):1998-2004.
3. DeVito NJ, Bacon S, Goldacre B. Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: a cohort study. Lancet. 2020;395(10221):361-369.

AI Disclosure

This work represents a compiler-generated evidence micro-publication built from structured registry data and deterministic summary code. AI was used as a constrained coding and drafting assistant for interface generation, packaging, and prose refinement, not as an autonomous author. The analytical choices, interpretation, and final outputs were reviewed by the author, who takes responsibility for the content.
