Suzanne White Junod, Ph.D. 1

The function of the controlled clinical trial is not the “discovery” of a new drug or therapy. Discoveries are made in the animal laboratory, by chance observation, or at the bedside by an acute clinician. The function of the formal controlled clinical trial is to separate the relative handful of discoveries which prove to be true advances in therapy from a legion of false leads and unverifiable clinical impressions, and to delineate in a scientific way the extent of and the limitations which attend the effectiveness of drugs.

William Thomas Beaver 2

The U.S. Food and Drug Administration has evolved as one of the world’s foremost institutional authorities for conducting and evaluating controlled clinical drug trials.

Ancient civilizations relied on medical observation to identify herbs, drugs, and therapies that worked, and those that did not. Beginning in the early twentieth century, therapeutic reformers in the United States and elsewhere began to develop the concept of the “well-controlled” therapeutic drug trial. This concept included, for example, laboratory analysis followed by clinical study. As medical historians have pointed out, however, these early reformers’ therapeutic vision often far exceeded their clinical and experimental grasp. 3 In 1938, the newly enacted U.S. Food, Drug, and Cosmetic Act subjected new drugs to pre-market safety evaluation for the first time, requiring FDA regulators to review both pre-clinical and clinical test results for new drugs. Although the law did not specify the kinds of tests required for approval, the new authority allowed drug officials to block the marketing of a new drug formally or delay it by requiring additional data. The act also gave regulators limited powers of negotiation with the pharmaceutical industry and the medical profession over scientific study and approval requirements. A worldwide drug disaster in 1961 resulted in the enactment of the 1962 Drug Amendments, which explicitly stated that the FDA would rely on scientific testing and that new drug approvals would be based not only upon proof of safety, but also on “substantial evidence” of a drug’s efficacy [i.e., the impact of a drug in a clinical trial setting]. Increasingly, responsibility for testing standards previously established on a voluntary basis by the American Medical Association’s (AMA) Council on Drugs, the U.S. Pharmacopeia, and the National Formulary was taken up by the FDA. Since 1962, FDA has overseen substantial refinements to the broad legal requirement that post-1962 new drugs be approved on the basis of “adequate and well-controlled” studies. 4

Medical Observation As Precursor to Clinical Trials

Clinical trials are prospective, organized, systematic exposures of patients to an intervention of some kind (drug, surgical procedure, dietary change). The earliest recorded therapeutic investigations, however, lacked the rigor of a modern clinical trial. Based largely on observations and tested through time by trial and error, ancient medicine such as that practiced by the Egyptians, Babylonians, and Hebrews was closely allied with religion. Nonetheless, some of these early medical investigations yielded important successes in fields such as minor surgery and orthopedics. The Hebrews, in particular, excelled in public hygiene, but even their public health strictures, so effective in preventing epidemic disease, were observational and experiential rather than experimental. 5

The Babylonians reportedly exhibited their sick in a public place so that onlookers could freely offer their therapeutic advice based on previous and personal experience. 6 The first recorded mention of a paid experimental subject comes from diarist Samuel Pepys, who documented such an experiment in a diary entry for November 21, 1667. He noted that the local college had hired a “poor and debauched man” to have some sheep blood “let into his body.” Although there had been plenty of consternation beforehand, the man apparently suffered no ill effects.

One of the most memorable successes from an early but earnest clinical trial was actually more an anomaly than a harbinger of great progress in medical experimentation. British naval surgeon James Lind (1716-1794), who had learned of the death of three quarters of a ship’s crew during a long voyage around the world, planned a comparative trial of several popularly suggested “cures” for scurvy on his next voyage. Twelve men with similar cases of scurvy ate a common diet and slept together. The six pairs, however, were given different “treatments” for their malady. Two were given a quart of cider daily; two an “elixir;” two seawater; two a remedy suggested by the ship’s surgeon (horseradish, mustard and garlic); two vinegar; and the final two were given “oranges and lemons” daily. One man who received the oranges and lemons recovered within six days, while the other recovered sufficiently that he “was appointed nurse to the rest of the sick.” At first Lind questioned his own experimental results, but by the time he published them (1753 and 1757) they were recognized as important. Nonetheless, the British Navy did not supply citrus to its ships until 1795. 7

Although simple observation may provide a starting point for medical study, experience has shown that it is rarely efficient at advancing medical knowledge. As one early proponent of planned experimentation in the form of clinical trials remarked, “when we are reduced to [mere] observation, science crawls.” 8 A modern drug regulator is more explicit, acknowledging that modern retrospective [studies], epidemiologic analyses, and astute observations are all instructive. Although clinical trials are not the only way to find things out, the clinical trial is unique. “It is under the investigator’s control, subject not to data availability or chance but to his ability to ask good questions and design means of answering them.” 9

Clinical Trials and the 1938 Food, Drug, and Cosmetic Act

Congress reacted to the Elixir Sulfanilamide tragedy, which killed over 100 people, by enacting a new federal food and drug statute, the 1938 Food, Drug, and Cosmetic Act. A new provision in the act, requiring drug sponsors to submit safety data to FDA officials for evaluation prior to marketing, appeared with relatively little discussion in the wake of the disaster. “Instead of going to market based on their own assessment of the drug, sponsors had to notify the FDA of their intent to market the drug by submitting an NDA (New Drug Application),” explains Dr. Robert Temple, currently head of FDA’s Office of Medical Policy. Although the new law did not specify any particular testing method(s), it did require that drugs be studied by “adequate tests by all methods reasonably applicable to show whether or not the drug is safe.” Sponsors were required to demonstrate to FDA that they had carried out all reasonably applicable studies to demonstrate safety and that the drug was “safe for use under the conditions prescribed, recommended or suggested in the proposed labeling thereof.” 18 In the future, FDA could use these new tools not only to ban Banbar, but to try to prevent drug disasters rather than merely react to them.

Under the law, there was no true requirement for FDA “approval” or “clearance” of a new drug. Rather, it was presumed that most drugs would be marketed, and therefore the default position was “approval.” 19 Under the 1938 Act, the government had sixty days (extendable to 180 days) to complete its safety evaluation. Form 356, the New Drug Application (NDA), required information about all clinical investigations, a full list of the drug’s components and composition, methods of manufacture including facilities and controls, and copies of both the packaging and labeling of the new drug. If a company had not received a regulatory response at the end of 60 days, it could proceed with marketing its new drug.

Regulators adopted many of the standards and rules of evidence first advocated by turn-of-the-century therapeutic reformers. 20 Laboratory analysis akin to that originally conducted by the AMA’s Chemical Laboratory initially screened most new drugs; companies were required to conduct safety studies; and an increasing number of drugs would soon be studied in the kind of clinical (cooperative) drug trials that the AMA’s Council on Pharmacy and Chemistry had advocated, but not conducted, earlier in the century. 21 Animal studies were not required under the 1938 Act to precede human drug trials, but such studies, including animal autopsies, could be requested by regulators as part of the agency’s drug safety review. FDA also began to employ the practice, similar to that of the Council, of consulting expert academic specialists, often before making a final decision on drug approvals. 22

FDA’s statutory authority over products increased as a result of egregious public health disasters, but the associated scientific methodology to evaluate safety and efficacy did not accelerate in tandem. Regulatory work under the new drug safety provisions of the Act was fairly limited, although the new law did sanction factory inspections for the first time, and officials were able to eliminate many worthless products submitted for approval to treat serious diseases (e.g., cancer and diabetes) by holding them to be “unsafe” under the statute. 23 Regulators could deny an application if the sponsor’s drug application did not include “adequate tests by all methods reasonably applicable to show whether or not such drug is safe for use under the conditions prescribed, recommended or suggested in the proposed labeling thereof.” 24 Occasionally, in interpreting this provision, agency officials recommended labeling changes, including warnings, to sponsors and to the U.S.P., but FDA itself lacked authority under the 1938 Act to determine the text and layout of drug labels. 25 Larger efforts to improve drug testing, prescribing patterns, and patient use and compliance, however, were left to the practice of medicine and medicine’s scientific and professional authorities.

Although FDA had authority under the 1938 Act to establish rules governing the use of investigational drugs, FDA did not employ this authority to regulate clinical trials and clinical trial methodology until 1961. 26 Even though physicians at elite university clinics and members from the AMA Council on Pharmacy and Chemistry all agreed on the importance of standardized drug testing through clinical trials, FDA did not have the authority to require them under the 1938 statute. 27 FDA scientists, however, did begin to exert some influence on the conduct of clinical trials and move in the direction of standardization on the eve of WWII, when they published an article in JAMA on experimental design, proper clinical trial methods, and methods of data analysis. 28 Their article, however, was published as a Report under the auspices of the AMA’s Council on Pharmacy and Chemistry and was accompanied by a disclaimer to the effect that the “outline” presented in the report was “offered as an objective, a pattern, and not a regulation.” During WWII, the agency actively promoted drug testing standards in the face of increased wartime expenditures for drug trials designed to answer important questions about the safety and use of many new drugs for the war effort. 29 An important breakthrough in clinical trial design followed from the shortages of a new drug, streptomycin, shortly after the war.

Following wartime trials of penicillin, British epidemiologist and biostatistician A. Bradford Hill was faced with the task of testing a promising antibiotic, streptomycin, against tuberculosis. Researchers in the United States studying the same drug had ample supplies, which led to more effective treatment for patient subjects but produced less conclusive clinical trial data. 30 Hill and his colleagues, however, faced a severe shortage of streptomycin; in post-war Britain, the central government could not afford to purchase more of the drug. Scarcity and expense, therefore, justified their decision to formally and randomly assign patients to control and treatment groups. This eliminated a well-known form of treatment “bias,” in which physicians select their healthier patients for experimental treatment, leaving sicker patients in the control group. Hill’s study was a true randomized study. It was not, however, “double-blinded,” another way of ensuring the objectivity of a trial by neutralizing the power of “suggestion.”
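The logic of Hill’s random allocation can be illustrated with a minimal sketch, a modern simplification rather than his actual procedure (which relied on a prepared series of random numbers in sealed envelopes). The point is that the allocation consults nothing about the patients themselves, so a physician’s impression of who is “healthier” cannot influence which arm a patient joins:

```python
import random

def randomize(patients, seed=None):
    """Randomly split a list of patients into treatment and control arms.

    Allocation ignores every patient attribute; arm membership depends
    only on the shuffle, which is the property that eliminates the
    selection bias Hill's design was built to avoid.
    """
    rng = random.Random(seed)
    shuffled = list(patients)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    # First half receives the experimental drug, second half is control.
    return shuffled[:half], shuffled[half:]

treatment, control = randomize([f"patient-{i}" for i in range(10)], seed=42)
```

The `randomize` name and the patient labels are hypothetical, chosen only for this illustration; any allocation rule that never inspects patient characteristics would serve the same argument.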

In a double-blind clinical drug study, trials are designed in such a way that neither the patient nor the researcher knows who is receiving the treatment drug. 31 In Hill’s study, streptomycin required injection, and the researchers did not wish to use inert injections. However, the lack of true double-blinding had little impact on the results, since Hill was able to show conclusively that streptomycin could cure tuberculosis. When the results of his study were published in 1948, Hill’s use of concurrent controls (randomized, controlled) was praised as having ushered in “a new era of medicine.” 32

Hill and his North American colleagues, including Harry Gold at the Cornell Medical School, began to map out general criteria for drug testing and specify stages through which drug development should proceed. Patients were to be selected through formal criteria and then randomly separated into treatment and control groups; trials were to be double-blinded and employ objective diagnostic technologies; and drug doses were to be administered according to a fixed schedule, while patient observations were to be charted at uniform intervals. Their success set the stage for the subsequent development of more sophisticated clinical trial designs, while professional collaborations allowed statisticians to increasingly dominate the conduct of clinical trials in the U.S. 33 Nonetheless, one expert estimated in 1951 that 45% of clinical trials had no control groups. 34
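The blinding step in such a protocol can likewise be sketched in hypothetical Python (this is an illustration of the principle, not any historical trial’s procedure): each enrolled patient receives an opaque subject code, and the key linking codes to arms is held apart from the investigators until the trial is unblinded.

```python
import random

def blind_assign(patients, seed=None):
    """Assign each patient an opaque code and a trial arm.

    Returns (labels, key):
      labels: patient -> code, the only mapping investigators see;
      key:    code -> arm, held by a third party until unblinding,
              so neither patient nor researcher knows who is treated.
    """
    rng = random.Random(seed)
    codes = [f"SUBJ-{i:03d}" for i in range(len(patients))]
    rng.shuffle(codes)
    # Balanced arms: alternate labels, truncated to the cohort size,
    # then shuffled so arm order carries no information.
    arms = (["treatment", "control"] * len(patients))[: len(patients)]
    rng.shuffle(arms)
    labels = dict(zip(patients, codes))
    key = dict(zip(codes, arms))
    return labels, key

labels, key = blind_assign(["alice", "bob", "carol", "dave"], seed=7)
```

The function and field names here are invented for the example; the design point is only that the code-to-arm key is separable from the data the clinicians record.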

After WWII, medical research increased exponentially in the United States. In 1950, funding for medical and scientific research was $161 million. By 1968, this figure had grown to over $2.5 billion. 35 The National Institutes of Health (NIH) opened its Clinical Center in Bethesda, MD as a research hospital in 1952, and NIH’s extramural, peer-reviewed research grant system soon supported biomedical and clinical research projects at institutions around the country. Centrally planned clinical research projects, including cooperative trials, were soon eclipsed as grants supported the work of individual medical investigators, many of whom designed and conducted their own clinical trials in collaboration with other colleagues. Ethical concerns about the protection of research subjects further complicated post-war clinical trial design, particularly following reports of the gross medical abuses carried out on Nazi prisoners of war. 36 Ethical debates over methodology often centered around questions concerning when it was appropriate to use placebo-controlled trials and when it was preferable to compare active treatments in evaluating new therapies. The NIH Clinical Center adopted a policy that placed much of the responsibility for safeguarding human subjects of biomedical research with principal investigators. Research involving normal human volunteers was to be formally reviewed by panels of scientists, but there was virtually no discussion about any potential role for the federal government in regulating medical research. Meanwhile, both NIH and FDA gave clinical investigators wide latitude in the pursuit of their research objectives. 37

Sulfa drugs and antibiotics, among other therapies for acute diseases, had provided important experience in evaluating new drugs, but after WWII, investigators and regulatory officials began to rely on increasingly sophisticated trial designs to study effectiveness in whole new classes of drugs for chronic, rather than acute, conditions. Blood pressure and anti-arrhythmic drugs (1950s/60s), drugs for tuberculosis, cancer, and heart disease, and the oral contraceptives (1960) were all approved using new and increasingly advanced trial methodology involving assessment of data from sometimes tens of thousands of patients. Statisticians insisted on uniform selection criteria for patients in clinical trials, separate treatment and control groups, and uniform dosing regimens, and utilized objective evidence from laboratory tests, such as blood and urine tests, made both before and after treatment. With the aid of the new science of biostatistics, both regulators and regulated industry began to understand, appreciate, and interpret many nuanced components of trial design and their effect on the interpretation of data. 38 Although several kinds of randomized controlled trial methodologies can be useful to researchers and regulators, ultimately it was the randomized, double-blinded, placebo-controlled experiment that became the standard by which most other experimental methods were judged, and it has often subsequently been referred to as the “gold standard” for clinical trial methodology. In situations in which using a placebo seemed unethical, positive (active treatment) control groups rather than placebo groups were employed, and regulators had to learn how to interpret the data stemming from these trials as well, a formidable problem in many cases. 39

The Kefauver Hearings and Drug Critics

In the early 1950s the AMA discontinued many of its drug study activities. It closed the microbiological laboratory it had used to test new drugs (successor to the Chemical Laboratory) and discontinued its Seal of Acceptance program. Since only drugs that had earned the Seal could be advertised in the pages of AMA periodicals, the discontinuation of this program opened the door for an explosion of advertising (and advertising revenue) in JAMA and other AMA publications. The AMA also discontinued its inspection of drug plants, its efforts to exert some control over generic drug names, and even a campaign it had instituted to explain and encourage physicians to prescribe using generic names rather than brand names. 40 In their place, the AMA initiated a registry for reporting adverse drug reactions, although it had no mechanism to enforce data collection. 41

Beginning in 1958, hearings on the drug industry held by Senator Estes Kefauver (D-Tennessee) focused unanticipated attention on the quality of drug company sponsored clinical drug research. In particular, the hearings drew attention to the poor state of clinical trial research as it had been conducted (or failed to be conducted) under the 1938 statute. Kefauver announced his hearings on the drug industry, its products and its profitability, after he and his staff had obtained evidence documenting high markups and exorbitant profit margins on prescription drugs, beginning with antibiotics. Yet the hearings soon turned to other topics as the industry tried to defend its profits by asserting the high cost of research, including the costs of conducting clinical trials. As popular with consumers as they proved unpopular with the pharmaceutical industry, these hearings generated important evidence documenting the frequently sorry state of drug testing and advertising as well as the competitive pressures within the industry which supported such practices. Able testimony was offered documenting many poor clinical studies done in support of the marketing of many mediocre drugs. Dr. Louis Lasagna, an expert in clinical pharmacology, testified that it was “shocking that experimental drugs are subject to no FDA regulation of any sort before patients receive them…It is reprehensible for man to be the first experimental animal on which toxicity tests are done, simply because bypassing toxicity tests in laboratory animals saves time and money.” 42 At one point in the hearings a former medical director at Squibb testified that the industry was always pointing out the high costs of research and the fact that so many products failed in the course of research to justify its markups and profit margins.
“This,” he agreed, “was true, since it is the very essence of research.” The problem, he quipped, lay in the fact that “they market so many of their failures.” 43 Most new drug products, experts testified, were not improvements over old ones, and most were marketed before clinical studies were published. Many new drugs, in fact, were combinations of older drugs, with or without modification, which gained extended patent life (and profitability) in combination. Adequately controlled comparisons of drugs, Lasagna testified, were “almost impossible to find.” 44

Years later, FDA’s Chief Counsel William Goodrich recalled that during the Kefauver hearings the pharmaceutical industry “stepped right into the bear trap” when it tried to defend itself by touting the high costs of research and development for new drugs.

That just focused attention on these various phases of new drug development and promotion…first of all, was it really all that expensive? Were they really doing all that kind of research? And anyone who had looked at any of the New Drug Applications knew, as I knew, that that was all baloney, and what they were saying to us in those early days was essentially a bunch of testimonials. The way drugs were investigated–a physician from the company would go out in the community with some samples and say to the doctor, “I’ve got this new drug for so-and-so. Here’s some samples. Try it out and let us know how you like it.” And they would get back a letter from him: “I tried it out on eight patients and they all got along fine.” That’s the kind of stuff that was coming in for the science. Of course, that was completely unsatisfactory, and as soon as people focused on that, that raised the problem. 45

By the early 1960s, there was a growing recognition of the importance of clinical trials in new drug development as well as in clinical medicine. Pharmacologists and medical researchers, as well as officials at government agencies such as the Veterans Administration and the National Institutes of Health, knew more about the conduct of good clinical trials than did the FDA at that time. This changed rapidly, however, beginning with the thalidomide crisis. Following the pattern first seen in the elixir sulfanilamide crisis, which led to changes in U.S. drug regulation in 1938, this new crisis spurred even more widespread changes, both in the U.S. and around the world. In 1961, a popular drug in Europe, a hypnotic known as thalidomide, was discovered to cause severe birth defects and even death in babies when their mothers took the drug early in their pregnancies. Because of the concerns of FDA drug reviewer Dr. Frances Kelsey, the drug was never approved for sale in the U.S. Nonetheless, the drug sponsor had sent samples of the drug to thousands of U.S. doctors, who gave the samples to their patients without telling them that the drug was experimental, making their patients the unwitting subjects of human drug experimentation. It is believed that more than a dozen thalidomide babies were born in the United States as a result of this unauthorized “sample” program. As a result of the worldwide thalidomide disaster, countries around the world, including the United States, updated their drug regulatory systems and statutes. “In next to no time,” recalled Frances Kelsey, “the fighting over the new drug laws that had been going on for five or six years suddenly melted away, and the 1962 amendments were passed almost immediately and unanimously.” 46

The IND Process and Clinical Trial Regulation

Prior to the law’s final passage, regulations began to address known problems in the use of clinical trials by the drug industry, indicating that FDA felt more confident in its authority to regulate them, even under the old 1938 statute. 47 New regulations prohibited testing a drug in humans until preclinical studies could predict that the drug could be given safely to people. 48 The 1962 [Kefauver-Harris] Drug Amendments and the 1963 investigational drug regulations themselves introduced many new procedures that strengthened control over investigational new drugs in the United States. 49 One of the most significant was a system of pre-clinical-trial notification and approval designed to provide regulators with enough information to demonstrate that it was safe to conduct clinical trials. Under this new system, company drug sponsors were required to file a “notice of claimed investigational exemption for a new drug.” The “notice” was actually a package of materials that a company submitted to FDA for approval prior to starting human trials. The acronym IND (Investigational New Drug) was coined to parallel the acronym NDA (New Drug Application). 50 Technically, an IND is an exemption from the normal pre-marketing requirements for a new drug, namely the submission and approval of an NDA. 51 An approved IND application allows investigators to proceed with trials of a drug under development. The information collected under an IND may later become part of an NDA submission if the systematic tests set up to study the drug are successful. INDs are also required when a sponsor wishes to restudy a previously approved drug in order to gather data in support of significant labeling changes, advertising changes, changes in route of administration or dose, or any other change that might alter the risk/benefit equation upon which the original approval was based.
The 1963 regulations also led FDA to define more clearly the “phase” process of drug testing involved in the regulatory approval of a new drug. 52

An IND submission:

  1. alerts regulators to a sponsor’s intent to begin clinical studies in the United States;
  2. provides the preliminary animal toxicity data indicating it is reasonably safe to administer the drug to humans;
  3. provides information about the manufacturing process for the new drug;
  4. provides chemistry background material;
  5. describes the initial clinical study being proposed, focusing on its safety measures (who is conducting the trials, their qualifications and facilities, and the type of study population involved: volunteers, sick patients, prisoners, women, men, children, etc.); and
  6. provides assurance that an IRB (Institutional Review Board) will approve the study protocol before the study begins.

In addition to the IND submission itself, every investigator participating in the study must sign a form, maintained by the sponsor, indicating their qualifications, the location of the research facility where the study will be conducted, and the name of the IRB responsible for reviewing and approving the study protocol. Investigators must sign commitments to

  1. conduct the clinical study in accordance with the IRB approved protocol
  2. personally conduct or supervise the conduct of the investigation
  3. inform potential subjects that the drugs are being used for investigational purposes and
  4. report to the sponsor adverse events that occur in the course of the investigation.

Efficacy Under the 1962 Drug Amendments

A new and key provision in the 1962 amendments was the requirement that, in addition to the pre-market demonstrations of safety already required under the 1938 Act, future new drugs would also have to be demonstrated to be “efficacious” prior to marketing. This provision required controlled trials that could indeed support claims of efficacy. The 60-day approval “default” under the 1938 Act was removed. New drugs had to have positive, specific, and increasingly detailed approval from FDA to go to market, and FDA was given the authority to set standards for every stage of drug testing from laboratory to clinic. In addition, FDA could require market withdrawals for the first time and establish “Good Manufacturing Practices” (GMPs) to govern drug manufacturing. In order to prevent another “thalidomide disaster,” Congress inserted language in the 1962 Drug Amendments requiring that investigators maintain personal supervision over clinical investigations and agree not to give the drug to other investigators. Senator Jacob Javits (R-New York) was particularly concerned about the fact that so many people had taken thalidomide without knowing that it was an experimental drug. Even many doctors that FDA had surveyed had been confused as to the status of the drug at the time they gave it to their patients. 53 Javits sponsored what became a very important provision of the law itself: the requirement that informed consent be obtained from all research study subjects, so that patients would have to be specifically informed if a drug they were being given or prescribed was “experimental,” something that had not happened in the case of thalidomide.

The legal language employed in the statute, which laid out the criteria that would be used in assessing efficacy in support of a new drug approval, was not particularly stringent. The law required that there be “substantial evidence” that the drug “will have the effect it purports or is represented to have under the conditions of use prescribed, recommended, or suggested in the proposed labeling.” Lawyers have concluded that Congress could have established a more stringent drug approval process simply by using stronger legal terminology. The fact that terms such as “preponderance of evidence” or “evidence beyond a reasonable doubt” were not used indicates that Congress did not intend to set the bar for efficacious new drug approvals too high. 54 New drugs did not have to be superior to other drugs on the market nor did “substantial evidence” mean evidence “so strong as to convince everyone.” 55

The strength in the statutory language, however, came not from the evidentiary requirements but from a last-minute compromise over study methods. 56 Sponsors were only required to provide “substantial evidence” of effectiveness, but that evidence had to be based on “adequate and well-controlled studies,” i.e. clinical trials. Without defining either “adequate” or “well-controlled,” the law paved the way for experts in the field to establish the criteria that would define both terms under the new statute. Although the law did not define a well-controlled study, testimony before Congress made it clear that it included, as a minimum, the use of control groups, random allocation of patients to control and therapeutic groups, and techniques to minimize bias including standardized criteria for judging effectiveness. 57 A poorly designed trial, it was argued, not only wasted resources, but it unnecessarily put patients at risk.

Clinical Trial Regulations of 1970 and the DESI Process

Over the next eight years FDA worked diligently to implement the 1962 drug amendments. In the early years after passage of the 1962 amendments, sponsors were more or less “on their own,” with little guidance from FDA about what would be acceptable except in the form of an NDA non-approval letter, which did explain why the sponsor’s submission was considered inadequate. Concerns that FDA might become overly “vested” in the development of a commercial drug product led to an abundance of caution in agency/sponsor interactions. According to one official, “There was, in fact, explicit concern that too much participation by FDA staff in the development process would leave the Agency unable to be properly neutral and analytical when the resulting data were submitted as part of an NDA.” 58 Over the years, however, as Robert Temple notes, FDA has become increasingly involved in the development of specific drug products, including the design of clinical trials, “reflecting the view that the public, the industry, and the FDA are poorly served by drug development efforts that are poorly designed or inadequate and that therefore waste resources and delay availability of therapy.” 59 By the late twentieth century, Congress itself had even begun mandating meetings between regulators and industry concerning the design and conduct of clinical trials deemed particularly important for any of a number of reasons. 60

Regulatory officials soon began to receive an invaluable education in the conduct of clinical trials as a result of the agency’s Drug Efficacy Study (DES). The 1962 Drug Amendments required FDA to re-review all drugs that had been approved under the 1938 Food, Drug, and Cosmetic Act (1938-1962) on the basis of safety alone, this time looking for evidence of efficacy. Examining all pre-1962 NDAs posed a daunting task for FDA, so in 1966 the agency contracted with the National Research Council of the National Academy of Sciences to perform the review. 61 Thirty panels of experts reviewed specific drug categories using evidence obtained from FDA, the drug’s manufacturer, the scientific literature, and the personal expertise of the panel members themselves. Their ratings on each claim for a drug fell into six categories: effective; probably effective; possibly effective; ineffective; “effective but”; and ineffective as a fixed combination (combination drugs for which there was no substantial reason to believe that each ingredient added to the effectiveness of the combination). 62

FDA was challenged to devise a method by which drugs ruled ineffective could be legally removed from the market, along with other “me-too” drugs, i.e., drugs with the same essential ingredient profile. FDA’s initial legal efforts to remove bioflavonoid drugs and an Upjohn fixed-combination drug called Panalba were enjoined by the courts. 63 Faced with the prospect of conducting formal administrative hearings on every drug it proposed to have removed from the market, the agency changed its approach, led by FDA’s Director of the Bureau of Medicine (and later Commissioner) Dr. Herbert Ley. Ley supported the drafting, publication, and implementation of regulations defining the “substantial evidence” needed for a showing of effectiveness under the 1962 Amendments. These “evidence rules” had two separate components, and companies wishing an administrative hearing on the proposed withdrawal of their pre-1962 drug would have to meet both criteria. The first formally specified the scientific content of “adequate and well-controlled clinical investigations, including clinical investigations, by experts qualified by scientific training and experience to evaluate the effectiveness of the drug involved,” under the 1962 statute. Well-controlled trials did not have to be placebo controlled (they could have active controls, or even historical controls), but the regulations stated clearly that “uncontrolled studies are not acceptable evidence to support claims of effectiveness.” 64 No hearing would be granted unless there was a “reasonable likelihood” that such evidence would be forthcoming. 65 The second required the submission of positive results from at least two clinical studies in order to escape an automatic withdrawal of approval for the drug without a hearing. 66 The courts upheld the agency’s new approach, and according to Peter Barton Hutt, FDA’s Chief Counsel from 1971 to 1975, no hearings were deemed necessary.

By the end of 1971, FDA had disposed of dozens of requests for hearings on the revocation of NDA’s. In no instance had it determined that a manufacturer’s supporting data were sufficient to justify a hearing. One explanation of this striking consistency is that the agency’s substantial evidence regulations embodied requirements for clinical investigations that few pre-1962 studies could meet. The drugs it initially selected for withdrawal, those evaluated by the NAS-NRC as “ineffective” also presented the easiest targets. But it was becoming obvious that a manufacturer would have to make an overwhelming showing to persuade FDA to expend the time and resources that even one hearing would require. 67

The results of the DES led to recommendations soon implemented through the Drug Efficacy Study Implementation (DESI) program, which removed over 1,000 ineffective drugs and drug combinations from the marketplace. As part of this process, FDA drug reviewers themselves published hundreds of critiques of the clinical studies that had been submitted for approved new drugs in support of the safety requirements mandated in the 1938 statute. Most of these old studies, recalled Robert Temple, who began work at FDA in 1972, were “inadequate beyond belief.” As late as the 1960s and early 1970s, he notes, “You would be horrified [at the clinical trial data] submitted to the agency. There was often no protocol at all. There was almost never a statistical plan. Sequential analyses were unheard of. It was a very different world.” 68

Positive changes in clinical trial methodology, however, soon began to be evident in new NDA and ANDA submissions. “Everyone,” notes Temple, “came to believe that trials should have a prospectively defined and identified endpoint, a real hypothesis and an actual analytical plan.” An international professional organization, the Society for Clinical Trials, was founded in 1978 and began to develop and discuss clinical trial design and analysis in government-sponsored as well as industry-sponsored research. FDA assisted the drug industry during the late 1970s by collaborating with external advisory committees and conducting FDA-industry workshops in support of the development of nearly 30 drug-class clinical guidelines, which described in detail the study designs and expected data required for particular therapeutic classes such as drugs for ulcer disease, depression, or angina.

During the AIDS epidemic of the 1980s, regulators were again pushed to consider the essential requirements of a meaningful clinical trial. FDA had created a special class of investigations known as the “Treatment IND” in 1987, in which patients could receive an investigational drug outside the normal “blinded” research setting. 69 Although data from patients under this protocol were still collected, the program was not especially conducive to the treatment of large numbers of patients, especially those desperately sick patients who pushed for access to drugs at their earliest stages of development. 70
In 1985, regulations recognized what had already become a central tenet of modern drug evaluation by formalizing the requirement that approvals be based on an “integrated summary of all available information about the safety of a drug product.” 71 Congress itself mandated in 1988 that each AIDS drug IND be publicly disclosed in a computer-accessible database to facilitate access by patients with AIDS, and formally recognized the importance of FDA’s Treatment IND program in support of AIDS patients. 72 Although some AIDS organizations requested agency support of “open clinicals,” in which a drug sponsor could allow any patient access to ongoing trials with the support of their physicians, FDA refused to allow such easy access. “The more open-ended the design of a clinical trial,” noted agency officials, “the less likely the chance the trial will provide answers.” 73 Between 1990 and 1992, guidelines were proposed and negotiated, and regulations were finally approved by FDA establishing a “parallel track” approval process in which special categories of drugs would be expedited during review and a wider group of patients would have access to the drug than under normal procedures. 74

Since the mid-1980s, FDA has focused on improving the analysis of data from clinical trials. One lesson learned from the AIDS epidemic and the concomitant development of clinical trials to test drugs for its treatment is the scientific utility of surrogate endpoints in certain circumstances. Some of this data analysis has been motivated by sponsors’ interest in presenting evidence of clinical effectiveness through measurements of biomarkers and evaluation of “surrogate endpoints.” Surrogate endpoints measure outcomes that are not clinically valuable in themselves (lowered cholesterol, lowered blood pressure, elevated T-cell counts) but are thought to correspond with improved clinical outcomes (decreased heart disease or stroke, fewer opportunistic infections for AIDS patients). FDA approved the first statin drug, for example, in 1987, based on the surrogate of lowered blood cholesterol. 75 FDA is cautious, however, in accepting surrogates and usually requires continued post-market study to verify and describe continued clinical benefits. In 1992, new regulations for the accelerated approval of new drugs gave the agency explicit authority to rely on a surrogate marker. 76

In 1994, FDA made changes in its policies designed to facilitate women’s participation in the earliest phases of clinical drug trials. 77 More recently, FDA has issued guidelines promoting greater study and better analysis of patient subgroups, including drug effects in the elderly, separate analysis of trial data by gender, and pediatric studies, as well as dose-response information. 78

The Future

In an era in which health care costs are rising at rates far higher than inflation and the nation faces the challenge of promoting the health of the “boomer” generation during its retirement years, there have been calls for more comparative drug studies, in part to help contain drug costs. Greater knowledge of genetic science and the ability to conduct more nuanced analyses of drug trial data, including retrospective meta-analyses, have also helped fuel optimism over the future of personalized medicine. In the past, the drug industry concentrated on developing so-called “blockbuster” drugs, and the large-scale, randomized clinical trial has been critical in demonstrating their safety and efficacy. Many, however, predict that the future of medicine lies in developing drugs and diagnostics to treat subsets of patients who may respond to one treatment but not another because of genetic and other factors. This has led many to speculate on the future of randomized trials. “The randomized clinical trial is excellent methodology if you want to understand, on average, whether one treatment is better than another treatment,” notes John Bridges, assistant professor at the Johns Hopkins School of Public Health, “but if we think about a distribution of outcomes, no single person in the health care system is the average.” 43 Personalized medicine presents challenges of its own, including increased costs for researchers testing drugs and for patients taking them. It seems more likely that better analysis of clinical trial data, already encouraged by FDA and pursued by both researchers and drug sponsors as a first step toward a more personalized perspective on drug development, will be an integral part of the evolution of personalized medicine, while continuing to add to our overall knowledge of the safety and effectiveness profiles of medicines and therapeutics already on the market.
The randomized clinical trial is unlikely, in either scenario, to go the way of the dinosaur.

Originally published as “FDA and Clinical Drug Trials: A Short History,” in A Quick Guide to Clinical Trials, Madhu Davies and Faiz Kerimani, eds. (Washington: Bioplan, Inc.: 2008), pp. 25-55.

    1. FDA History Office, White Oak Building 1, room 1204, 10903 New Hampshire Avenue, Silver Spring, Maryland
    2. Affidavit of William Thomas Beaver, M.D. in the case of Pharmaceutical Manufacturers Association v. Robert H. Finch and Herbert Ley, Civil Action No. 3797, United States District Court for the District of Delaware. Dr. Beaver was the clinical pharmacologist at Georgetown University who is credited with drafting the initial regulations defining “adequate and controlled” clinical studies. (personal correspondence, Peter Barton Hutt Esq. and Dr. Robert Temple, FDA, December, 2007, FDA History Office Files)
    3. Harry Marks, The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900-1990 (Cambridge: Cambridge University Press, 1997). Hereafter referred to as Marks, Progress.
    4. The 1938 Act recognized the purity standards published by the U.S.P. and the National Formulary. The U.S.P. under the new law was responsible for all packaging and labeling standards while FDA enforced these standards. See Arthur Daemmrich, “Pharmacovigilance and the Missing Denominator: The Changing Context of Pharmaceutical Risk Mitigation,” Pharmacy in History 49:2 (2007), p. 64.
    5. John P. Bull, “The Historical Development of Clinical Therapeutic Trials,” Journal of Chronic Diseases 10:3 (1959), p. 219.
    6. Ibid.
    7. Bull, p. 228.
    8. Geoffrey Edsall, “A Positive Approach to the Problem of Human Experimentation,” in Experimentation, p. 279.
    9. Bob Temple, “Government Viewpoint of Clinical Trials,” Drug Information Journal 82: January/June (1981), p. 10.
    10. Marks, Progress, p. 12.
    11. Susan Lederer, Subjected to Science: Human Experimentation in America Before the Second World War (Baltimore: Johns Hopkins, 1995), p. xiv.
    12. Ibid, p. 19.
    13. Arthur Daemmrich, “Pharmacovigilance and the Missing Denominator,” p. 64.
    14. Harry F. Dowling, “The Emergence of the Cooperative Clinical Trial,” Transactions and Studies of the College of Physicians of Philadelphia 43 (1975), pp. 20-29. Marks, Progress, p. 53-54.
    15. Brown v. Hughes, 94 Colo. 295, 30 P.2d 259 (1934). Ironically, at this time, many if not most of these “accepted” clinical practices were not based upon rigorous scientific study.
    16. Fortner v. Koch, 272 Mich. 273, 261 N.W. 762 (1935), as commented on by William J. Curran, “Governmental Regulation of the Use of Human Subjects in Medical Research: The Approach of Two Federal Agencies,” in Experimentation with Human Subjects, ed. Paul A. Freund, pp. 402-455. Hereafter cited as Experimentation.
    17. 35 Fed. Reg. 7250 (May 8, 1970). The 1970 regulations recognized comparative evidence from no-treatment and treatment groups, placebo controlled trials, active treatment trials (comparing treatments), and historical controls.
    18. Initial regulations under the 1938 Act (issued December 28, 1938), required the person who introduced an investigational new drug into interstate commerce to obtain from the expert (qualified by scientific training and experience to investigate the safety of drugs, i.e. the clinical investigator) “a signed statement . . . that he has adequate facilities for the investigation to be conducted by him and that such drug will be used solely by him or under his direction for the investigation.” This was, of course, unless or until an NDA was approved by FDA.
    19. Robert Temple, “Development of Drug Law, Regulations, and Guidance in the U.S,” Principles of Pharmacy (1994), p. 1643.
    20. Marks, Progress, p. 72.
    21. Dowling, “The Emergence of the Cooperative Clinical Trial,” p. 25-29.

    See, for example, Suzanne Junod and Lara Marks, “Women’s Trials: The Approval of the First Oral Contraceptive Pill in the United States and Great Britain,” Journal of the History of Medicine and Allied Sciences, 57: 2 (April 2002), pp. 117-160.

      1. Dan Carpenter, draft, Reputation and Power: Organizational Image and Pharmaceutical Regulation at the FDA, p. 97 and fn. 122. Ralph Smith, in the Bureau of Medicine during the 1950s: “A drug is unsafe if its potential for inflicting death or physical injury is not offset by the possibility of therapeutic benefit.” [the “safety implies efficacy” doctrine] Cited by Justice Thurgood Marshall, U.S. v. Rutherford, No. 78-695, 442 U.S. 553, fn. 9.
      2. 52 Stat. 1040, 21 U.S.C. June 25, 1938.
      3. Arthur Daemmrich, “Pharmacovigilance and the Missing Denominator,” p. 64.
      4. The reference to investigational drugs under section 355(i) of the 1938 Act was brief. “The Secretary shall promulgate regulations for exempting from the operation of this section drugs intended solely for investigational use by experts qualified by scientific training and experience to investigate the safety of drugs.” Food, Drug, and Cosmetic Act, 52 Stat. 1040 (75th Cong. 3d Sess (1938)).
      5. Arthur A. Daemmrich, Pharmacopolitics: Drug Regulation in the United States and Germany (Chapel Hill: University of North Carolina Press, 2004), p. 24. Hereafter cited as Daemmrich, Drug Regulation.
      6. Winkle, Harwick, Calvery, and Smith, “Laboratory and Clinical Appraisal of New Drugs,” JAMA 126 (1944), 956-61.

      Daemmrich, “Pharmacovigilance,” p. 51.

      1. Dowling, “The Emergence of the Cooperative Clinical Trial,” p. 24.
      2. Ibid. p. 52.
      3. Silverman and Chalmers, “Sir Austin Bradford Hill,” p. 102.
      4. Daemmrich, “Pharmacovigilance,” p. 52.
      5. Ross, “Use of Controls in Medical Research,” JAMA 145 (1951), pp. 72-75.
      6. Curran, Experimentation, p. 402.
      7. At the 1946 Nuremberg trial of 23 Nazi medical professionals, only a handful of victims survived to confront their torturers, out of hundreds of thousands of prisoners. The defendants were charged with offenses ranging from subjecting test subjects to extremes of altitude and temperature to using them as human cultures to test vaccines for typhus and malaria. In light of the testimony, an international code of ethics to protect all subjects of human research was written and adopted by most medical researchers in countries worldwide. The Nuremberg Code accepted and codified ethical standards which the 23 defendants had grossly violated, and thus became the first internationally recognized code of medical research ethics. Its stated goal was not merely to “prevent experimental abominations in the future but to increase the protection of the rights and welfare of human subjects everywhere by clarifying the standards of integrity that constrain the pursuit of knowledge.” The first principle of the Nuremberg Code stressed the importance of obtaining “informed consent” from research subjects. The code also emphasized that human studies should not be random or unnecessary, that animal studies should be undertaken before human studies, and that surveys of the natural histories of disease should be undertaken before subjecting human subjects to laboratory-induced disease.
      8. Curran, Experimentation, p. 508. Research involving normal human volunteers was to be formally reviewed by panels of scientists.
      9. Bradford Hill, “Medical Ethics and Controlled Trials” British Medical Journal 1 (April 20, 1963), pp. 1043-49.
      10. Susan Ellenberg and Robert Temple, “Placebo Controlled Trials and Active-Control Trials in the Evaluation of New Treatments,” Annals of Internal Medicine 133:6 (Sept. 19, 2000), pp. 455-470; ICH E-10 (Choice of Control Group and Related Issues in Clinical Trials) (site last visited 10/17/07).
        Robert Temple, “Government Viewpoint of Clinical Trials,” Drug Information Journal 82 (1981), pp. 10-17.
      11. Richard Harris, The Real Voice (New York: The McMillan Company, 1964), p. 126.
      12. Daemmrich, “Pharmacovigilance and the Missing Denominator,” p. 64-65.
      13. “Statement on S. 1552,” Louis Lasagna, Drug Industry Anti-Trust Act, p. 1083.
      14. Ibid. p. 78-79.
      15. Ibid, testimony of Dr. Louis Lasagna, p. 8139.
      16. Oral History Interview with William Goodrich, FDA History website, Last visited 10/22/07.
      17. Frances Kelsey, Autobiographical Reflections, p. 71. FDA History Office.
      18. Notice of Proposed Rulemaking, 27 Fed Reg 7990 (August 10, 1962).
      19. Specifically, the requirement was that before clinical testing could proceed, drug sponsors had to submit “reports of pre-clinical tests (including tests on animals) of such drug adequate to justify proposed clinical testing.” Notice of Proposed Rulemaking, 27 Fed Reg 7990 (August 10, 1962).
      20. 76 Stat.780 (October 10, 1962) PL 87-781; 28 Fed. Reg. 179 (January 8, 1963); 28 Fed. Reg. 5048 (May 20, 1963); 28 Fed. Reg. 10972 (October 11, 1963).
      21. Frances Kelsey, Autobiographical Reflections, p. 71. FDA History Office.
      22. Robert Temple, “Development of Drug Law, Regulations and Guidance in the U.S.” chapter 113 Principles of Pharmacology, pp. 1643-1664 (1994), p. 1644.
      23. 28 FR 179 (January 8, 1963) 130.3 (a)(10). See also, Robert Temple, “Current definitions of phases of investigation and the role of the FDA in the conduct of clinical trials” American Heart Journal 139: 2000: S133-S135.
        Clinical trials today are referred to by regulators, clinicians, and investigators as being in or having completed Phase I, Phase II, Phase III, and even Phase IV trials (post-marketing studies). There may be considerable overlap, but in general, Phase I studies provide the first human studies of a new drug, either in patients or in human volunteers. Although the number of participants can vary, Phase I trials usually involve twenty to eighty people. These early trials can provide early evidence of effectiveness, but they are designed to furnish greater understanding of the experimental drug’s safety, including side effects in relation to drug dose. Ideally, a Phase I study should be designed to provide enough information about the drug to design a well-controlled Phase II study.
        A Phase II study is the first controlled clinical study to evaluate the effectiveness of a drug for a specific therapeutic use in patients. It is a well controlled, closely monitored study, usually with no more than a few hundred patients. Such studies look at the effects of treatment on symptoms or on a surrogate for a clinical outcome (i.e. lowered blood pressure, decreased viral load, etc.). Ideally such studies are double-blind placebo-controlled investigations in which patients are randomly assigned to a drug treatment group or a placebo group and neither the patient nor the investigator knows, until the end of the trial, which option the patient received. Phase II studies are also the first to consider the risk of a drug’s side effects.
        Phase III drug trials are reserved for experimental drugs which have shown at least some evidence of effectiveness in previous trials. They involve large numbers of patients (several hundred to several thousand) and are designed to gather enough information on safety and effectiveness to allow an adequate assessment of a risk/benefit ratio for the study drug as well as for the preparation of material for physician’s labeling. They also use a broader patient population and can be designed to gather longer term safety and effectiveness data as well as data to establish optimum drug dosing. Phase III trials also typically have a data monitoring committee overseeing the collection of data during the trials.
      24. Frances Kelsey, “Autobiographical Reflections,” p. 73. FDA History Office.
      25. In American legal terms, “substantial evidence” is not a high standard; indeed, it has been described by a former FDA chief counsel as somewhere between a “scintilla and a preponderance.” Ibid.
      26. Robert Temple, “Development of Drug Law, Regulations and Guidance in the U.S,” p. 1644.
      27. Harris, The Real Voice (New York: McMillan Press, 1964), pp. 204-205. Harris’ account of the final negotiations over the 1962 Amendments makes it clear that industry did not fully appreciate the significance of the phrase “adequate and well-controlled investigations” at the time it agreed to it. Counsel for the Department of Health, Education, and Welfare immediately grasped the significance and remarked that [the language adopted] “gives us all kinds of power – especially the word ‘adequate’ – to make sure that drugs do what is claimed for them.”
      28. Animal studies were not mandated under the new law, but within a few years following passage of the 1962 amendments, a fairly standardized set of animal toxicology studies to precede and support human trials was in place. By the 1970s a drug for chronic use would generally be required to be tested in two animal species for the full lifetime of the animal at the maximum tolerated dose. Temple, Principles, p. 1646. New York Academy of Medicine: Committee on Public Health, “The importance of clinical testing in determining the efficacy and safety of drugs,” Bull. N.Y. Acad. Med. 38: 415-439, 1962.
      29. Robert Temple, “Development of Drug Law, Regulations, and Guidance in the U.S., in Principles of Pharmacology, 1994, p. 1646.
      30. Ibid., p. 1647.
      31. During the mid-1970s, allegations of a “drug lag” in the approval of new drugs made officials more willing to meet with sponsors at the end of Phase 2 drug testing. In 1983, Congress enacted the Orphan Drug Act to encourage the development of drugs to treat rare diseases. Sponsors of orphan drugs were offered the right to ask for FDA assistance in their research planning. In 1987, revised IND regulations offered more meetings to sponsors, though “primarily” for INDs involving NMEs or major new uses of marketed drugs. Requests for meetings, the regulations stated, would be honored “to the extent that FDA’s resources permit.” In 1988, regulations designed to facilitate development of drugs for life-threatening or debilitating diseases also allowed sponsors of such drugs to request earlier meetings with regulators.
      32. 31 Fed. Reg. 9425 (July 9, 1966) and 31 Fed. Reg. 13014 (October 6, 1966).
      33. Richard A. Merrill and Peter Barton Hutt, Food and Drug Law: Casebook and Materials (Mineola: Foundation Press, 1980), p. 373.
      34. Panalba established the legal validity underlying FDA’s current rules requiring a combination drug to show that the combination product is more effective than each component used separately.
      35. 34 Fed Reg 14596 (September 19, 1969), 130.12 (a)(5) and (7)(b)(iii).
      36. Oral History Interview, William Goodrich, FDA History website, “Having talked about the adequate and well-controlled clinical study regulations, I want to be sure I attribute to Herb Ley full credit for that…it was Herb Ley who really put that thing through and gave it some scientific stature that it otherwise wouldn’t have had. Paradoxically, Herb was doing this, which is one of the greatest things that ever happened to Food and Drug, yet at the same time he was losing his job over cyclamates.”
      37. 34 Fed. Reg. 14596 (September 19, 1969).
      38. Merrill and Hutt, Casebook, p. 375.
      39. Robert Temple, “How FDA Currently Makes Decisions on Clinical Studies,” Clinical Trials 2 (2005), p. 276.
      40. Robert Temple makes the point that the impetus for the development of the Treatment IND was “perceived as a response to AIDS, but its origins go back to around 1980 before HIV was identified.” Robert Temple, “Development of Drug Law, Regulations and Guidance in the U.S.,” p. 1660.
      41. James T. O’Reilly, Food and Drug Administration (New York: McGraw Hill, 1993), p. 13-87.
      42. According to Robert Temple, “the idea that safety data should be looked at all together, as opposed to study-by-study, is a relatively recent insight.” Temple, “Drug Law Development,” p. 1649.
      43. 102 Stat 3066 (1988). Frank Young, “New Information Available About AIDS Treatments,” FDA Consumer 23:6 (1989).
      44. FDA Talk Paper T-88-74, FDA Responds to ACT-UP Demands (October 5, 1988).
      45. Comment, “Prescription Drug Approval and Terminal Diseases: Desperate Times Require Desperate Measures,” 44 Vanderbilt Law Review (1991), p. 925. Kiser, “Legal Issues Raised by Expedited Approval of and Expedited Access to Experimental AIDS Treatments,” Food, Drug, and Cosmetic Law Journal 45 (1990), p. 363.
      46. Suzanne Junod, “Statins: A Success Story Involving FDA, Academia, and Industry,” FDLI Update (March/April 2007), p. 41.
      47. Temple, “Drug Law Development,” p. 1656.
      48. Ruth Merkatz and Suzanne Junod, “Historical Background of Changes in FDA Policy on the Study and Evaluation of Drugs in Women,” Academic Medicine 69:9 (1994), 703-707.
      49. Temple, “Development of Drug Law, Regulations and Guidance in the U.S.”, p. 1646.
      50. “Is Comparative Effectiveness Antithetical to Personalized Medicine,” RPM Report 2:9 (September 2007).
