From Dollars to Sense

June 2010, Vol 1, No 2 - Comparative Effectiveness

With the inclusion of $1.1 billion in the American Recovery and Reinvestment Act of 2009 for comparative effectiveness research (CER), this topic has suddenly become a matter of great importance in the clinical and research communities. The idea of directly comparing tests, treatments, and drugs in a head-to-head fashion to determine which are most effective makes intuitive sense, but the process will require fundamental changes in how research is conducted. Moreover, the financial implications associated with CER ensure that all stakeholders (including health services researchers, government payers, and drug and device manufacturers) will need to deliberate carefully about putting these techniques into effect.

Even as the monies for CER have already begun to flow out to individual institutions and organizations, questions remain as to how the technique can best be implemented, where it can be applied, and how this new research paradigm might impact patient care.

A New Way to Approval

A fundamental way of instituting CER, one that would signal a seismic shift in the drug evaluation process, has been put forth by Alec B. O'Connor, MD, MPH,1 among others. In a recent commentary, he argues that the current US Food and Drug Administration (FDA) standards for new drug approval do not allow determination of whether a new treatment is less efficacious or less well tolerated than existing alternatives. The solution, he proposes, is active comparator trials designed and powered to demonstrate superiority, equivalence, or noninferiority, run under strict FDA oversight. An FDA-assembled panel could recommend the best active comparator, dosing schemes, and power calculations. Doing this "would yield highly relevant comparative efficacy and tolerability information at the time the drugs…reach the market," Dr O'Connor argues, and that information could be included on labeling, where it would be highly relevant to clinicians.1

The clinical benefits of this approach could be tempered by the business impact, Dr O'Connor acknowledges, saying that such trials "could conceivably reduce new drug and device development due to the increased costs of trials including active comparators and the increased risk of discovering late in the development process that a new treatment is inferior to existing treatments."1 But he points out that new treatments for which placebo-controlled trials are unethical (eg, antibiotics, anticoagulants, and chemotherapeutic agents) are continuing to be developed. In addition, an approach similar to that recommended in Europe, which would incorporate both active comparator and placebo-controlled groups simultaneously, may also be helpful, Dr O'Connor suggests.

Randall Stafford, MD, PhD, of Stanford University School of Medicine, and Caleb Alexander, MD, of the University of Chicago, also emphasized the need for changes in the FDA drug approval process in a commentary released last year, arguing that the reliance on placebo comparison be modified.2 They also offered additional ideas for maximizing the impact of CER, ideas that have gained traction in the research community as CER has come closer to widespread implementation. The two suggest that it is imperative to obtain comparative effectiveness information earlier in the life of a drug or device; that evidence be linked to strategies proven to modify physician behavior; that the focus of CER go beyond drugs and devices to consider lifestyle modifications and alternative therapies; and finally, that the cost implications (eg, including cost-effectiveness in coverage decisions) be considered. This last matter is perhaps the most controversial among various stakeholders, but it gets to the heart of the matter. As the authors ask, "What good is comparative effectiveness if it cannot be used to discern anything about value to clinicians, insurers, patients, and society?"2

The need to get beyond the "pharmacentric" approach of most CER studies was brought forth in another recent study. Michael Hochman, MD, of the University of Southern California, Los Angeles, and Danny McCormick, MD, MPH, of Harvard Medical School, Boston, took stock of CER studies concerning medications published between June 2008 and September 2009 in 6 leading medical journals.3 They found that just 11% of CER studies compared medications with nonpharmacologic interventions. Hochman and McCormick believe more CER should evaluate nonpharmacologic treatments and strategies. For example, is surgery, radiation, or watchful waiting the best approach for managing early-stage prostate cancer? Does early involvement of palliative care consultants in the care of patients with cancer lead to improved patient satisfaction and outcomes? Are lifestyle changes or medications better for treating patients with mildly elevated cholesterol levels? Hochman and McCormick's study also indicated that CER studies were less likely than non-CER studies to have been exclusively commercially funded (13% vs 45%), and that noncommercial entities jointly or exclusively funded 87% of the CER studies.

In addition to changed regulations and a wider-view approach, greater coordination and communication among researchers must also occur if CER is to become an everyday and essential part of clinical and health services research. A June 2009 report prepared by AcademyHealth found a large volume of ongoing CER but little coordination among funders and researchers in tracking the type of work being done, and therefore a real potential for duplication.4 Better communication would also foster awareness of the most cost-efficient study designs.

Publication Processes Also in Flux

Given the growing interest in CER, it is not surprising that medical journals are gearing up for the changes that the new approach will bring. In April, a team of authors led by Harold C. Sox, MD, the editor emeritus of the Annals of Internal Medicine and cochair of the Institute of Medicine’s Committee on Comparative Effectiveness Research Prioritization, published a statement of principles for how medical journals should specifically address CER-focused studies submitted for peer review and publication.5

Because CER will use evidence outside of what is customarily seen in randomized clinical trials, including data obtained in the course of regular practice, the authors emphasize, journal editors and their peer review processes must account for the limitations inherent in such research, including “missing data, incomplete follow-up, unmeasured biases, the potential role of chance, competing interests, and selective reporting of results.”5

The authors emphasize that CER trials should be as much like "regular" trials as possible, with protocols written in advance and made publicly accessible, a high degree of transparency, and scrupulous reporting of potential conflicts of interest by researchers. By doing so, the authors argue, the principles of CER will be more effectively achieved.

The statement is notable for its conclusion, which suggests a role for editors beyond the usual one of conductor and arbiter of peer review. "Medical journals are the primary evaluators and disseminators of peer-reviewed health research," the authors state. "As such, they must ready themselves to play a crucial role in advocating for CER, advancing CER methods, and facilitating the translation of CER results into practice."5 What is also noteworthy is that the statement appeared simultaneously in 6 other medical journals, and was endorsed by the editors of several others.

Changing Mindsets, Changing Care

Again and again, researchers writing about CER emphasize that changing the focus to this type of research is not simply an academic exercise, but a serious effort to improve patient care. And that perspective seems to be taking root in wider ways, with physicians in training now being introduced to the cost of tests they order,6 and the American College of Physicians (ACP) recently announcing plans to provide physicians and patients with evidence-based recommendations for specific interventions for a variety of clinical problems. The program, titled the High-Value, Cost-Conscious Care Initiative, will assess benefits, harms, and costs of diagnostic tests and treatments for various diseases to determine whether they provide good value. "Shared decision making between physicians and patients is an integral part of high-value, cost-conscious care," said Steven Weinberger, MD, deputy executive vice president and senior vice president, Medical Education and Publishing, ACP, in a press release announcing the program.

Efforts like these—at the physician, patient, and system levels—may be the best way to improve the healthcare system so that it offers quality, value-based care. But such improvements will likely cost a great deal of money. Although it is sometimes said that "you can't solve a problem by throwing money at it," in this instance, the roughly $1.1 billion—along with thoughtful, innovative leadership—may engender some significant problem solving.


1. O'Connor AB. Building comparative efficacy and tolerability into the FDA approval process. JAMA. 2010;303(10):979-980.

2. Alexander GC, Stafford RS. Does comparative effectiveness have a comparative edge? JAMA. 2009;301(23):2488-2490.

3. Hochman M, McCormick D. Characteristics of published comparative effectiveness studies of medications. JAMA. 2010;303(10):951-958.

4. Holve E, Pittman P. A First Look at the Volume and Cost of Comparative Effectiveness Research in the United States. Washington, DC: AcademyHealth; June 2009.

5. Sox HC, Helfand M, Grimshaw J, Dickersin K, the PLoS Medicine Editors, et al. Comparative effectiveness research: challenges for medical journals. PLoS Med. 2010;7(4):e1000269. doi:10.1371/journal.pmed.1000269.

6. Okie S. Teaching physicians the price of care. The New York Times. May 4, 2010:D5.