Guest Column | December 3, 2025

CROs In Early Drug Discovery, Part 2: Keeping Pace And Protecting Quality

By Simon Cocklin, Ph.D., director of therapeutic discovery, Chan Soon-Shiong Institute of Molecular Medicine at Windber


In part 1, I outlined the basics of a successful CRO partnership: selecting the right partner, setting clear scientific expectations up front, and writing a precise SOW that eliminates ambiguity before any experiments start. These steps create the framework; however, they do not, on their own, guarantee the work will proceed smoothly.

Part 2 continues from where part 1 left off. Here, the focus shifts from setting up the collaboration to managing it in real time. Once experiments start, momentum becomes delicate, communication pace matters, and data quality becomes the biggest risk. This section outlines practical habits for keeping outsourced work on track — how to sustain momentum, prevent operational drift, and safeguard data quality from the outset.

The first place these pressures surface is communication.

Maintaining Communication And Preventing Delays

In my experience, even the strongest SOW fails without intentional, structured communication. Most "CRO failures" are not scientific — they are operational. Minor misunderstandings build up, timelines slip, and the sponsor only finds out after weeks of silence. Day-to-day communication is where projects succeed or fail.

Regular Check-Ins And Progress Updates

A 20- to 30-minute call or video chat every two weeks is one of the most effective habits in outsourced discovery. A weekly cadence rarely produces enough new data to discuss; a bi-weekly schedule gives the CRO time to generate real results while still catching deviations early.

These meetings ensure both sides stay aligned on what has been accomplished, what worked, what did not, and what is coming up next. The goal is not bureaucracy — it is to identify issues early.

A good bi-weekly call covers:

  • assays run since the last meeting and any initial observations
  • unexpected events (protein loss, instrument problems, plate failures)
  • upcoming reagent needs (proteins, compounds, consumables)
  • emerging data trends or red flags
  • timeline adjustments.

I approach these meetings as structured conversations rather than aimless discussions. Agendas should be brief, precise, and distributed in advance. If a misunderstanding occurs — even a minor one — I promptly send a short email summarizing the agreed-upon change. This simple step prevents the "I thought you meant X" issue that can derail timelines.

If an issue takes more than two emails to resolve, switch to a call. Long email threads are where alignment often breaks down.

Using AI Notetakers To Eliminate Drift

One of the simplest modern safeguards against communication drift is to use an AI notetaker during all recurring CRO calls. This is not about replacing human judgment — it is about keeping a verbatim, time-stamped record of decisions, instructions, and action items.

AI notetakers prevent three chronic failure modes in outsourced work:

  1. Memory drift: By the next meeting, someone inevitably forgets whether a buffer change was approved, whether a repeat was needed, or if a protein batch failed QC. A searchable transcript eliminates ambiguity.
  2. Silent deviations: CRO personnel rotate, internal notes become condensed, and the reasoning behind decisions gradually fades. A verbatim record prevents the "telephone game" that diminishes accuracy over time.
  3. Action-item clarity: AI summaries precisely detail who agreed to what and by when. If a timeline slips, you can swiftly identify the source of the delay.

The discipline is simple: Use the notetaker for every bi-weekly call and upload the transcript and summary to the shared project folder.

Your email summary serves as the high-signal snapshot; the transcript serves as the authoritative record in the event of any disputes later. When used consistently, AI documentation reduces misunderstandings, speeds up onboarding, and prevents decisions from being lost simply because someone took incomplete notes.

Shared Reference Documents

Using a shared document platform (such as OneDrive, SharePoint, Dropbox, or Benchling) helps prevent version drift, one of the most avoidable causes of CRO errors. Protocols and reagent inventories should be stored in a single shared location with version control; plate maps, QC criteria, and analysis templates belong there as well.

What I always centralize:

  • Current assay protocol (with revision history)
  • Reagent lists with batch numbers, concentrations, and expiry
  • Analysis templates (especially for enzymology, SPR/BLI, and cell-based assays)
  • Data format requirements
  • QC expectations
  • Contact sheets

The goal is simple: When the CRO sends data, I should already have all the necessary contextual information to interpret it. No scavenger hunts. No missing protocol details. No "we forgot to mention we changed the buffer."

Clear Escalation Paths

Even with the best planning, things will go wrong — cell lines die, protein expression fails, and instruments break. Success depends on how quickly and openly the issue is escalated.

At project initiation, I always define:

  • who at the CRO can authorize a protocol change
  • who can approve extra time or additional runs
  • who is responsible for raising QC failures
  • who at my institution must be notified before project direction changes.

This is where a strong project manager — ideally one with scientific training — is crucial. A PM who understands the data can appropriately escalate issues rather than just relay messages.

I explicitly outline: "If X fails, notify Y within Z hours."

Without this, you face the dreaded silent delay — a week of inactivity because both sides assume the other is taking care of the issue.
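The "If X fails, notify Y within Z hours" rule works best when it is written down as data rather than buried in email. A minimal sketch of how such rules could be encoded and looked up follows; all trigger names, contacts, and time windows here are illustrative, not a prescribed standard — the real list belongs in the SOW.

```python
from datetime import timedelta

# Hypothetical escalation rules, encoded as data so both parties can
# review them alongside the SOW. Triggers, contacts, and windows are
# examples only.
ESCALATION_RULES = [
    # (trigger event, who must be notified, notification window)
    ("protein batch fails QC",    "sponsor project lead", timedelta(hours=24)),
    ("cell line contamination",   "sponsor project lead", timedelta(hours=24)),
    ("instrument down > 2 days",  "CRO project manager",  timedelta(hours=48)),
    ("protocol change requested", "sponsor + CRO PM",     timedelta(hours=12)),
]


def escalation_contact(event):
    """Return (contact, window) for a trigger event, or None if unlisted."""
    for trigger, contact, window in ESCALATION_RULES:
        if trigger == event:
            return contact, window
    return None
```

Keeping the rules in one reviewable table — whatever the format — makes the "who calls whom, and by when" question answerable without digging through threads.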

Every escalation call concludes with a short written summary: actions, owners, and timelines. The practice of documenting decisions is one of the strongest indicators of whether an outsourced project remains on schedule.

Ensuring Data Quality And Reproducibility

If there is one area where outsourced work can quietly weaken an entire discovery effort, it is data quality. Most sponsors assume CRO-generated data will be clean, well controlled, and reproducible. In practice, this varies greatly. The worst outcomes I have seen have not been subtle: no replicates, missing QC, no controls, dead proteins, and — most frustrating of all — no consultation before running an experiment incorrectly. Data like that cannot be salvaged, and it wastes both time and budget.

Establishing The Minimum Standard

My reproducibility expectations depend on whether I can verify the result independently:

  • If I can reproduce the experiment in-house (e.g., I have the instrument), duplicate measurements may be acceptable.
  • If I cannot reproduce it in-house, I require triplicates and orthogonal confirmation whenever possible.

This is not overkill; it is protection. The last thing I want is to publish or advance a hit only to find out later that the signal never existed. Orthogonal assays — such as pairing an enzymatic readout with a binding measurement — significantly increase confidence without doubling the cost.

Many CROs overlook replicates unless explicitly instructed to do so. I include them up front in the SOW to prevent "repeat the experiment" from being seen as a "scope change."
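The replicate requirement is easy to enforce mechanically once the acceptance criterion is written down. As a minimal sketch, a triplicate could be flagged on its coefficient of variation; the 20% limit here is an illustrative threshold, not a universal one — the real cutoff should be agreed in the SOW.

```python
import statistics


def replicate_qc(values, cv_limit=0.20):
    """Flag a replicate set whose coefficient of variation exceeds a limit.

    cv_limit of 20% is an illustrative acceptance criterion; the real
    threshold belongs in the SOW, agreed with the CRO up front.
    """
    mean = statistics.mean(values)
    cv = statistics.stdev(values) / mean if mean else float("inf")
    return {"mean": mean, "cv": cv, "pass": cv <= cv_limit}


# A tight triplicate passes; a noisy one is flagged for repeat.
replicate_qc([0.92, 0.95, 0.90])  # 'pass': True
replicate_qc([0.40, 0.95, 1.60])  # 'pass': False
```

Writing the check down this way also makes "repeat the experiment" an objective trigger rather than a negotiation.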

Protein QC: The Foundation That Determines Everything Downstream

The most avoidable mistake in CRO-driven discovery is building assays on low-quality protein. My basic requirement is straightforward: If you are producing a protein for me, it must be crystallography-grade and active.

What that means depends on the protein type:

  • For enzymes: measurable enzymatic activity using a benchmark substrate
  • For receptors or hub proteins: confirmation of binding to a known ligand, partner, or an antibody to a discontinuous epitope
  • For any protein: a proper purification workflow — tag purification → polishing step (e.g., ion exchange) → SEC for monodispersity

If the protein cannot bind its known partner, it is not folded correctly. If it lacks activity, it should not be used for assay development, hit screening, structural research, or any downstream applications. This requirement is strict and prevents weeks of wasted effort.

Raw Data Transparency: The Nonnegotiable Requirement

At a minimum, I expect every CRO to provide:

  • .xlsx files (not PDFs or screenshots)
  • instrument-native files (Biacore, Octet, plate reader exports)
  • image files with metadata (microscopy, gels, Westerns, etc.)
  • complete plate maps and full experiment annotations.

If the experiment mattered enough to run, the raw data matters enough to return.

Summaries are helpful, but they cannot confirm a result on their own. Without raw curves, sensorgrams, or progress traces, it is challenging to spot artifacts, drift, or analysis errors.
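The deliverables list above can also be checked automatically when a data drop arrives. A minimal sketch, assuming file-naming patterns agreed in the SOW (the patterns below are hypothetical placeholders):

```python
from pathlib import Path

# Illustrative deliverable checklist; adjust the glob patterns to the
# assay and the file-naming conventions agreed in the SOW.
REQUIRED_PATTERNS = ["*.xlsx", "plate_map*", "protocol*"]


def missing_deliverables(folder):
    """Return the required file patterns with no match in a data drop."""
    root = Path(folder)
    return [p for p in REQUIRED_PATTERNS if not list(root.glob(p))]
```

Running a check like this on receipt turns "we forgot to include the plate map" into a same-day email rather than a discovery made weeks later during analysis.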

The Worst Failure And Why It Still Matters

One of the most instructive failures I have experienced was an enzyme-linked immunosorbent assay (ELISA) project in which the CRO failed to run any controls and conducted all concentrations at full saturation. There was no usable quantitative or qualitative data to salvage, and they refused to redo the work.

It reinforced a simple truth: If you do not define QC, replicates, and controls in advance, you are gambling with your data.

Most CRO data failures occur not because of incompetence but because of assumptions — assumptions the sponsor never addressed because expectations were never documented.

The Sponsor's Responsibility

High-quality CRO data is absolutely achievable, but it requires:

  • defining acceptance criteria early
  • specifying replicates, controls, and orthogonal methods
  • insisting on raw data
  • verifying protein quality before assay development
  • treating QC as integral to the science, not an optional add-on.

CROs are fully capable of generating decision-grade data — but only when the sponsor provides the scientific context, quality bar, and structure that prevent shortcuts.

Conclusion

Momentum in outsourced discovery does not maintain itself; it stems from structured processes. Clear communication routines, well-set QC expectations, transparent data formats, and early agreement on timelines ensure execution proceeds as planned in the SOW. When these systems are in place, daily progress remains visible, small issues are identified early, and data quality stays consistent as the work expands.

Part 2 concentrated on these execution disciplines: maintaining information flow, preventing quiet drift, and safeguarding data quality once experiments start. But even with effective communication and robust QC practices, outsourced work inevitably faces scientific and operational challenges.

Part 3 directly addresses those realities — how to handle failures, interpret unexpected results, ensure reproducibility, and apply structured troubleshooting when things don't go as planned. Together, parts 1–3 create a complete workflow: setting up the collaboration, executing it effectively, and quickly and thoroughly recovering when problems arise.

Read part 3 of this series.

About The Author

Simon Cocklin, Ph.D., is the founding director of therapeutic discovery at the Chan Soon-Shiong Institute of Molecular Medicine at Windber (CSSIMMW), where he leads translational drug discovery projects in immuno-oncology and fibrosis. At CSSIMMW, he is establishing a new Therapeutics Discovery Department and building scientific and infrastructural capabilities to support early-stage drug development aligned with the institute's goals.

Cocklin is the chief scientific advisor for Regenova Pharmaceuticals, an early-stage biotech that utilizes multi-omics and AI to develop antibodies against infectious diseases and cancer targets. He is also the cofounder and co-CEO of Bespoke Patient Solutions, LLC.

He previously held faculty and leadership roles at Drexel University College of Medicine, where he led NIH-funded research programs focused on drug discovery for HIV-1, infectious diseases, and oncology.