There is a vintage poster from 1930 hanging in my office at work (actually there are several, but I refer to one in particular) that depicts the turbaned head of a Rudolph Valentino–like, wizard-esque figure, under which is the phrase “The Man Who Knows.” I thought it appropriate not just because I found the image amusing and the eyes haunting, but because people rely on me to know a lot of stuff. However, as I have recently found out, sometimes you can know too much.
Once upon a time, careers were made when an investigator came up with an original idea, did a study, and published the data in a respectable journal after critical review. As pharmaceutical/biotech companies have increased their presence in clinical trials, this situation has changed markedly. The company often designs a trial, identifies investigators, and selects one to run the study. If the results are interesting, that person is often awarded far more credit than perhaps merited, compared with the one who was actually responsible for the idea behind the trial.
In my opinion, clinical trials—from small phase II studies to larger phase III trials—may create careers and perhaps change treatment and clinical research directions somewhat prematurely. Results with novel therapies (even with commercially available agents) have been presented and suggested to be superior to the standard, yet we have only had the opportunity to review the information in 15 slides at ASH, ASCO, or some other venue, followed by 5 minutes of questions. We do, however, often get to see those same data, perhaps updated a bit, repeatedly at subsequent meetings. Unfortunately, this redundancy provides false validation of the results to some observers and has led to the adoption of such regimens as new standards. Despite the fact that a manuscript has yet to be accepted by a peer-reviewed journal, principal investigators are paraded around the globe by the respective pharmaceutical companies as if they represented the coming of the Messiah, which I guess they may for some companies. As one such investigator told me, “I want to be a Rock Star, just like you.” When I expressed concern that his data were potentially practice changing yet were still based on a 3-year-old abstract, the response was, “There is no rush to publish the data; the practice is already changing without such a paper.” Gasp!!
Take another situation, in which I was asked to review a manuscript by a friend who is an associate editor of a well-respected journal. In this relatively small, single-arm, phase II study, the authors reported results with a very effective combination of agents in relapsed and refractory follicular and low-grade lymphoma. I thought the study weak for a variety of reasons, but I was also unimpressed because the data did not seem sufficiently novel. I had recently reviewed other manuscripts describing studies of that regimen for the same indication, for this and other journals, and had heard a similar combination presented repeatedly at a variety of meetings over the past few years, admittedly in the frontline setting. Indeed, I had completed such a trial myself, although the results are too premature to report. Thus, I thought to myself, not only is this information quite familiar, but if there are already data in the frontline setting, why should anyone be terribly interested in relapsed patients (see the July 2011 letter in CAHO on Cheson’s Rule of Drug Development)? The editor challenged my conclusion that the regimen lacked originality because he was unable to locate any published articles, and I realized that he was absolutely correct. There really were no such publications yet: just the repeated abstracts, posters, and meeting presentations regarding that regimen over the past few years, so that, to me, it was totally old news. Does that by itself make the paper unpublishable in a high-quality journal? I leave that to an editorial decision. Nevertheless, despite the lack of a formally peer-reviewed publication, the regimen is becoming the backbone of future strategies. How should my repeated exposure to the results over several years influence my review of the manuscript?
Please do not take my comments to suggest that I doubt whatever study data you suspect I am referring to, or that I have not adopted such regimens in my clinic or in planning my future trials. I merely caution the practitioner regarding regimens that seem to take on a life of their own through redundancy rather than impartial review.
Clinical research should be conducted by enthusiasts but reviewed by skeptics. Review is the critical word. Care should be taken not to adopt unpublished data prematurely, or to deify the investigators presenting them, until we all have an opportunity to become the ones who know.
Until next month . . .
Bruce D. Cheson, MD