Clinical Study Design Pitfall #2: Insufficient Blinding

Jun 14, 2022 | Study Execution

When bias creeps into a clinical study, the result can be a dubious reception by regulators and the scientific community. The solution is careful study design that minimizes bias. In this blog post, a continuation of our series on common pitfalls in clinical study design, we're sharing insights that will help you make smart decisions to avoid the common pitfall of insufficient study blinding.

Overview of Blinding

Blinding is the practice of withholding certain information from study participants whose beliefs, reactions, and subsequent decisions might be influenced by that knowledge. Blinding helps avoid bias in clinical judgment. Now, "bias" is a loaded concept, and the word itself can be a source of misunderstanding. Here, we mean bias in its statistical sense: any systematic influence on judgment. We're not talking about misconduct or manipulation of data. Bias is not wrongdoing; it's an undesirable consequence of human nature and too much information.

Before we focus on the logistics of study blinding, it would be helpful to consider these questions about your study: What is the structure of the trial (e.g., single arm, randomized)? Are your endpoints objective or subjective? Who is collecting and evaluating the data? What is the nature of the intervention? What staffing resources will be available? 

Deciding Who Should be Blinded to What

Referring to a study as “blinded” is common practice, but this doesn’t tell the full story unless it specifies who was blinded. Even the term “double-blinded” – which usually, but not always, means that patients and investigators are blinded – isn’t fully precise and informative. We need to think about exactly which study participants are blinded, not just how many, because different study designs and endpoints expose different sources of bias.

As many as a dozen distinct groups might be blinded, including investigators, patients, members of safety committees, laboratory technicians, sponsor personnel, and data management/analysis staff. For many studies, it boils down to keeping information about the choice of treatment from the patient who is being treated, the clinical staff performing the treatment, and/or the assessors who evaluate the outcomes of the treatment. That might sound simple, but let’s drill deeper into the complexity of these decisions.

A Deeper Look at Complex Blinding Decisions

Since blinding primarily concerns knowledge of the treatment assignment, the strategy matters most for studies with multiple treatment options, such as randomized trials. In a nonrandomized study with a single treatment option, the patient and clinical site staff cannot be blinded, although it may still be possible to blind an independent outcome assessor to the treatment.

For a randomized trial, the first step is to assess which endpoints would benefit most from blinding. In general, the more subjective the endpoint, the more blinding matters. For example, an objective endpoint, such as all-cause mortality, does not typically require blinding because making the determination that a subject is clinically dead does not depend in any way on knowledge of the treatment.

Most endpoints, though, involve subjectivity and potential for bias that could be minimized with blinding. For example, an imaging-based endpoint, such as coronary or peripheral angiographic restenosis, can be assessed by an independent core laboratory. While the independence of the lab adds a layer of objectivity simply by virtue of distance (physical and psychological) from the treatment, this itself is not blinding. A better strategy is to blind the core lab by not providing – or by redacting, if necessary – information about the treatment that was delivered.

Clinician-assessed endpoints that involve judgment, such as the modified Rankin scale for ischemic stroke, might also be influenced by knowledge of prior treatment. Much as with the core lab example, bias can be minimized by having a staff member who was uninvolved with and unaware of the nature of the treatment perform the assessment. This may be logistically more difficult, yet still quite practical.
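To make the stakes concrete, here is a small hypothetical simulation (all numbers are invented for illustration, not from this post) of an unblinded assessor whose scoring unconsciously leans slightly toward treated patients. Even when the treatment truly does nothing, that lean shows up as an apparent treatment effect, and it disappears when the assessor is blinded.

```python
import random

random.seed(0)

# Hypothetical illustration: an unblinded assessor who unconsciously
# scores treated patients slightly better inflates the apparent effect
# on a subjective endpoint, even for a treatment with no real effect.

N = 10_000            # patients per arm
TRUE_EFFECT = 0.0     # the treatment truly does nothing
ASSESSOR_LEAN = 0.3   # unconscious lean, in endpoint units (not misconduct)

def score(treated, assessor_blinded):
    """One subjective assessment of one patient."""
    value = random.gauss(0.0, 1.0) + (TRUE_EFFECT if treated else 0.0)
    if treated and not assessor_blinded:
        value += ASSESSOR_LEAN   # bias enters only when treatment is known
    return value

results = {}
for blinded in (True, False):
    treated_mean = sum(score(True, blinded) for _ in range(N)) / N
    control_mean = sum(score(False, blinded) for _ in range(N)) / N
    results[blinded] = treated_mean - control_mean
    print(f"assessor blinded={blinded}: estimated effect {results[blinded]:.2f}")
```

With the assessor blinded, the estimated effect hovers near zero; unblinded, it approaches the size of the lean, and no downstream analysis can tell that inflation apart from a real treatment benefit.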

Patient-assessed endpoints can be the most difficult to blind. Studies that evaluate pain are a particularly challenging example. Since only the patients can tell how much pain they are experiencing, a pain endpoint is necessarily patient-reported and, therefore, vulnerable to bias. If patients know their treatment, the potential exists to confuse placebo effect (or Hawthorne effect) with true treatment effect.
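The risk of mistaking placebo response for treatment effect can be sketched the same way. In this hypothetical simulation (all numbers invented), an inert treatment appears effective on a patient-reported pain score whenever only the treated arm expects to improve; when both arms hold the same expectation, as with a blinded sham control, the expectation cancels out of the comparison.

```python
import random

random.seed(1)

# Hypothetical illustration: with an inert treatment, patients'
# expectations alone can produce an apparent "treatment effect" on a
# patient-reported pain score unless both arms are blinded alike.

N = 10_000
PLACEBO_RESPONSE = 0.5   # pain reduction from expecting to improve

def reported_pain(expects_benefit):
    pain = random.gauss(5.0, 1.0)      # baseline score on a 0-10 scale
    if expects_benefit:
        pain -= PLACEBO_RESPONSE       # expectation, not pharmacology
    return pain

# Open label: only the treated arm expects benefit.
open_label_effect = (sum(reported_pain(False) for _ in range(N)) / N
                     - sum(reported_pain(True) for _ in range(N)) / N)

# Blinded (e.g., sham control): both arms hold the same expectation.
blinded_effect = (sum(reported_pain(True) for _ in range(N)) / N
                  - sum(reported_pain(True) for _ in range(N)) / N)

print(f"open-label apparent effect: {open_label_effect:.2f}")
print(f"blinded apparent effect:    {blinded_effect:.2f}")
```

The open-label comparison recovers roughly the placebo response itself, while the blinded comparison recovers roughly zero, which is the whole argument for sham controls in device trials.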

So, how can the patient be blinded? This is especially challenging in medical device trials, where it is not possible to simply manufacture two different pills (active and placebo) that appear identical, as is done in the pharmaceutical world. If the treatment and control groups both involve a similar intervention, say, a comparison of two different left atrial appendage closure devices, then blinding the patient is readily feasible. Many medical device studies, however, are not this straightforward.

When a study compares a device vs. medical management or a device vs. no device, blinding gets tricky. This is especially true if the device involves invasive intervention, like an implantable electrical stimulator for pain. In this case, the most common way to blind the patient is to perform a sham procedure or turn the device off in the control patients. The notion of performing a procedure on patients who are not planned to receive therapy raises ethical questions. This dilemma is often resolved by offering the option to turn on the device, or to receive the “real” procedure, after enough time has passed to assess treatment effect.


Insufficient study blinding is a pitfall because, unfortunately, there is no reliable analytical method to correct for bias once it appears. Often, we can’t even be sure whether it’s there at all. A well-designed blinding strategy will minimize bias in your study, increasing the chances that your data will be well-received by regulators and the scientific community.
