  • 40% of sites fail to meet enrolment targets
  • $50K average cost per underperforming site
  • 6–8 months typical delay from a site restart
  • 11% of sites never enrol a single patient

The feasibility problem no one owns

Site feasibility is supposed to be the safeguard that prevents sponsors from activating sites that cannot deliver. In practice, it has become a paperwork exercise. Questionnaires are completed by site coordinators under pressure to say yes. CROs, incentivised by site activation metrics, pass them upstream with minimal challenge. Sponsors, eager to hit startup timelines, approve them. Everyone moves forward — and then the enrolment numbers arrive months later, revealing that 40% of sites will not hit their targets. Some never screen a single patient.

This is not a staffing problem or a training problem. It is a structural failure in how feasibility data is collected, verified, and acted upon.

Why sites overpromise — and the data proves it

When a site claims it can enrol 15 patients over 12 months, that number is rarely grounded in patient registry data, historical screen failure rates, or competing trial activity in the same therapeutic area. It is an estimate — often aspirational — from a principal investigator who wants the trial on their portfolio.

Industry data consistently shows the gap between projected and actual enrolment. Sites in Phase 3 oncology trials, for example, typically deliver 50–60% of their initial enrolment forecast. In rare disease studies, the shortfall is even steeper because the patient pool is smaller and more geographically dispersed than sites anticipate.

The root causes cluster into five predictable patterns:

  • No competing trial analysis. Sites rarely account for concurrent protocols targeting the same patient population. A site running three overlapping oncology trials will cannibalise its own recruitment pipeline, but feasibility forms seldom capture this.
  • Historical enrolment data is self-reported and unverified. Sites reference past performance without normalising for protocol complexity, patient burden, or changes in investigator capacity since the last trial ended.
  • PI availability is assumed, not confirmed. A named investigator may have the right publication record, but their actual weekly availability to screen and consent patients is rarely quantified. Sub-investigator coverage is even less scrutinised.
  • Infrastructure gaps surface too late. Cold chain capacity, pharmacy capabilities, imaging equipment, and dedicated research nursing staff are confirmed during site initiation — not during feasibility. By then, the site is already activated on paper.
  • Regulatory and ethics timelines are optimistic. Sites in emerging markets often cite best-case approval timelines. Real-world ethics committee delays of 3–6 months are common, and they compress the already-tight enrolment window.

The cost of underperformance compounds fast

An underperforming site is not neutral — it is actively expensive. Sponsor costs per activated site typically range from $30,000 to $50,000 before the first patient is screened, covering start-up visits, regulatory submissions, IRB fees, and site initiation. When a site delivers zero patients, that investment is written off entirely. When it delivers half the target, the per-patient cost doubles and the overall trial timeline extends.

The downstream effects are worse. Underperforming sites trigger protocol amendments to redistribute enrolment targets, which require regulatory notifications. They create uneven data quality across regions. They force monitors to spend disproportionate time on sites that are not producing, diverting attention from high-performing sites that need support to scale further. And when sponsors eventually replace underperforming sites, the replacement cycle adds 6–8 months to enrolment — if new sites can be found at all.

How to fix feasibility: five actionable steps

The solution is not a longer questionnaire. It is a fundamentally different approach to evidence collection and decision-making during site selection.

  1. Require EHR-derived patient counts, not estimates. Ask sites to run a query against their electronic health records for patients matching the protocol's key inclusion criteria in the past 12 months. A site that can demonstrate 40 eligible patients has a very different risk profile from one that estimates 40 based on gut feel. Build this requirement into the feasibility form and reject responses that cannot provide data-backed counts.
  2. Mandate a competing trial registry check. Cross-reference ClinicalTrials.gov and local registries for active trials at each site that target the same indication. If a site is already running two trials for the same patient population, discount its enrolment projection by at least 30%, or exclude it entirely. (A minimal automation sketch follows this list.)
  3. Score PI capacity, not just PI credentials. Replace the "named investigator" checkbox with a capacity assessment: how many active trials is this PI currently running? How many patients are they personally consenting per month? What is the sub-investigator coverage plan? A principal investigator spread across five protocols is a risk, no matter how eminent.
  4. Move infrastructure verification to feasibility, not initiation. Send a pre-feasibility checklist covering cold chain, pharmacy, imaging, staffing, and visit frequency capacity before the site is even considered. This takes two hours of a clinical research associate's time and eliminates a quarter of downstream surprises.
  5. Build a site performance database and use it. Track actual versus projected enrolment for every site you activate. After two or three studies, you will have a predictive model far more reliable than any feasibility questionnaire. Sites that consistently overpromise become easy to flag; sites that quietly outperform become your go-to network. (A simple tracking sketch also follows this list.)
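Parts of step 2 can be automated. The snippet below is a minimal sketch of a competing-trial lookup, assuming the public ClinicalTrials.gov v2 API and its query.cond, query.locn, and filter.overallStatus parameters; the parameter and response field names should be verified against the current API documentation, and the indication and location shown are placeholder values. Local registries still need a manual check.

```python
import requests

def competing_trials(indication: str, site_location: str, max_results: int = 50):
    """List actively recruiting trials for the same indication near a candidate site.

    Assumes the public ClinicalTrials.gov v2 /studies endpoint; verify parameter
    and field names against the current API documentation before relying on this.
    """
    resp = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={
            "query.cond": indication,            # e.g. "non-small cell lung cancer"
            "query.locn": site_location,         # e.g. "Houston, Texas"
            "filter.overallStatus": "RECRUITING",
            "pageSize": max_results,
        },
        timeout=30,
    )
    resp.raise_for_status()
    studies = resp.json().get("studies", [])
    results = []
    for s in studies:
        ident = s.get("protocolSection", {}).get("identificationModule", {})
        results.append({"nct_id": ident.get("nctId"), "title": ident.get("briefTitle")})
    return results

# Hypothetical example: flag heavy overlap before accepting a site's projection
overlap = competing_trials("non-small cell lung cancer", "Houston, Texas")
if len(overlap) >= 2:
    print(f"{len(overlap)} competing trials found: discount the enrolment projection by at least 30%")
```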
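Step 5 does not need sophisticated tooling to start. Below is a simple sketch of the tracking logic, assuming an in-house record of projected versus actual enrolment per site; all site identifiers and figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SiteRecord:
    site_id: str
    study: str
    projected: int  # enrolment committed at feasibility
    actual: int     # enrolment actually delivered

def delivery_ratios(history: list[SiteRecord]) -> dict[str, float]:
    """Average actual/projected enrolment per site across past studies."""
    per_site: dict[str, list[float]] = {}
    for r in history:
        if r.projected > 0:
            per_site.setdefault(r.site_id, []).append(r.actual / r.projected)
    return {site: sum(ratios) / len(ratios) for site, ratios in per_site.items()}

# Hypothetical history across two studies
history = [
    SiteRecord("SITE-001", "STUDY-A", projected=15, actual=6),
    SiteRecord("SITE-001", "STUDY-B", projected=12, actual=5),
    SiteRecord("SITE-002", "STUDY-A", projected=10, actual=11),
]

for site, ratio in delivery_ratios(history).items():
    flag = " (treat new commitments with caution)" if ratio < 0.6 else ""
    print(f"{site}: delivers {ratio:.0%} of projections{flag}")
```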

The vendor connection

Site feasibility is ultimately a vendor capability. Whether you work with a full-service CRO, a functional service provider (FSP), or a site network, the quality of your feasibility data depends on how your vendor collects, challenges, and presents it. CROs that rely on volume-based site activation (more sites, faster) are structurally incentivised to be optimistic about site capacity. CROs that invest in evidence-based feasibility take longer to activate sites but deliver higher enrolment efficiency and fewer restarts.

When evaluating CROs, ask specifically about their feasibility methodology. Do they use EHR data? Do they verify competing trial activity? Do they track historical accuracy of site projections? The answers tell you more about operational quality than any capability deck.

Evaluate CROs by their site selection track record, not just their bid response.
