Ships to Shots – The Remarkable Evolution of Clinical Trials
By Sneha Das, C2ST Intern, University of Illinois Urbana-Champaign
Clinical trials have become a household term, particularly since the COVID-19 pandemic. The World Health Organization defines clinical trials as research that evaluates the human health outcomes of new tests and treatments such as medicines, vaccines, medical devices, and surgical or behavioral procedures. This blog takes you on a fascinating journey through how clinical trials have developed over time.
To trace the origins of clinical trials, we must journey back to 1747, when James Lind, a Scottish physician, conducted the first-ever clinical trial. He served aboard the HMS Salisbury, a sizable vessel tasked with patrolling the English Channel. After eight weeks at sea, the crew started getting sick with scurvy, a disease caused by vitamin C deficiency. Back then, scurvy was a leading cause of death among sailors on long sea voyages. Lind split the sick sailors into six groups, and each group received a different food supplement with their regular meals. Remarkably, the group that ate two oranges and one lemon every day recovered, even though the ship ran out of citrus fruits after only six days. Lind documented the study in his book ‘A Treatise of the Scurvy’. This historic experiment, carried out nearly 275 years ago with only a small group of sailors and modest resources, laid the groundwork for modern clinical trials.
Clinical trials can be ‘interventional’ or ‘observational’. Lind’s historic clinical trial was interventional since he changed what the sailors were eating at their meals. An observational clinical trial is one where researchers do not treat the participants or alter their environment. For example, researchers studying how eating fruits and vegetables affects long-term health may observe the participants’ eating habits over time without suggesting any changes. Most clinical trials involving new vaccines, drugs, medical devices, or procedures fall into the interventional category.
To conclusively understand whether a treatment works, ‘placebo-controlled’ clinical trials were introduced in the 1800s. A placebo is a dummy or mock treatment that helps researchers account for the ‘placebo effect’, a psychological response in which patients feel their condition improving even though they are receiving only the placebo.
If you probe further into modern clinical trials, you’re bound to encounter terms like ‘randomized’, ‘double-blind’, and ‘single-blind’. Let’s make sense of these terms. ‘Randomized’ means that the clinical team divides the participants into the different groups at random; nowadays, a computer typically makes the assignments to avoid bias, as the sketch below illustrates. In a ‘double-blind’ study, neither the medical team running the trial nor the participants know who is receiving the actual treatment and who is receiving a placebo. In a ‘single-blind’ study, by contrast, the medical team knows which treatment each participant receives, while the participants do not.
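To make the idea of computer-driven randomization concrete, here is a minimal Python sketch of randomly assigning participants to a treatment group or a placebo group. The participant IDs, the group labels, and the use of Python’s built-in random module are illustrative assumptions for this blog, not a description of any real trial software.

```python
import random

# Hypothetical participant IDs (real trials use coded identifiers)
participants = ["P001", "P002", "P003", "P004", "P005", "P006"]

# Shuffle the list so the order of enrollment carries no bias
random.shuffle(participants)

# Split the shuffled list evenly: first half receives the treatment,
# second half receives the placebo
half = len(participants) // 2
assignments = {pid: "treatment" for pid in participants[:half]}
assignments.update({pid: "placebo" for pid in participants[half:]})

# In a double-blind trial, this mapping would be kept hidden from both
# the participants and the medical team until the study is unblinded.
for pid, group in sorted(assignments.items()):
    print(pid, "->", group)
```

Real trials use more sophisticated schemes (for example, stratified or block randomization), but the core idea is the same: a computer, not a person, decides who ends up in which group.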
For a new drug, device, or medical treatment to reach patients, it must pass through four clinical trial phases.
- Phase I – The first ‘in-human’ study, with ten to fifty healthy individuals, to assess safety and tolerability. In this phase, common side effects become apparent, and a safe dosage range is determined.
- Phase II – Involves around a hundred participants, including patients, and is usually ‘placebo-controlled’ and ‘double-blind’. Several dosage regimens are tested to settle on a safe and effective dose.
- Phase III – A large-scale study with thousands of participants, often across several countries, to evaluate final safety and efficacy. After this phase, approval from the Food and Drug Administration (FDA) can be granted.
- Phase IV – The test or treatment is now approved and widely available, but post-marketing studies are performed to understand long-term benefits or side effects.
In addition, modern clinical trials adhere to stringent guidelines established by Good Clinical Practice (GCP) and undergo thorough review and approval by institutional review boards and ethics committees. This rigorous ethical oversight wasn’t the norm in the past: the MK-Ultra program and the Tuskegee Syphilis Study are two notorious instances of unethical human experimentation. Nowadays, informed consent from clinical trial participants is mandatory. Furthermore, a comprehensive risk-benefit analysis is always performed to ensure that benefits outweigh potential harm to research participants.
Incredible advances have been made in clinical trial design and execution over the past two and a half centuries. Developing a new vaccine typically takes 5 to 10 years, but in the case of the COVID-19 pandemic, vaccines were ready in under a year. Groundbreaking scientific advancements and collaborative efforts allowed clinical trials to be conducted, and shots to reach our arms, at unprecedented speed. Let’s take a moment to appreciate the evolution of clinical trials – from a simple experiment on a ship to testing the vaccines that helped bring a devastating pandemic under control.