Review of group-based cancer trials reveals flaws in studies’ design and analysis

COLUMBUS, Ohio – A new study reviewing 75 group-randomized cancer trials over a five-year stretch shows that fewer than half of those studies used appropriate statistical methods to analyze the results. The review suggests that some trials may have reported that interventions to prevent disease or reduce cancer risks were effective when in fact they might not have been.

More than a third of the trials contained statistical analyses that the reviewers considered inappropriate to assess the effects of an intervention being studied. And 88 percent of those studies reported statistically significant intervention effects that, because of analysis flaws, could be misleading to scientists and policymakers, the review authors say.

“We cannot say any specific studies are wrong. We can say that the analysis used in many of the papers suggests that some of them probably were overstating the significance of their findings,” said David Murray, lead author of the review study and professor and chair of epidemiology in the College of Public Health at Ohio State University.

“If researchers use the wrong methods, and claim an approach was effective, other people will start using that approach. And if it really wasn’t effective, then they’re wasting time, money and resources and going down a path that they shouldn’t be going down.”

Murray and colleagues call for investigators to collaborate with statisticians familiar with group-randomized study methods and for funding agencies and journal editors to ensure that such studies show evidence of proper design planning and data analysis.

The review appears online in the Journal of the National Cancer Institute.

In group-randomized trials, researchers randomly assign identifiable groups to specific conditions and observe outcomes for members of those groups to assess the effects of an intervention under study.

These trials are used to investigate interventions that operate at a group level, manipulate the social or physical environment, or cannot be delivered to individuals in the same way a pill or surgical procedure can. For example, a group-randomized trial might study the use of mass media to promote cancer screenings and then assess how many screenings result among groups that receive different kinds of messages.

In analyzing the outcomes of such trials, researchers should take into account any similarities among group members or any common influences affecting the members of the same group, Murray said. But the review found that, too often, those shared influences were not factored into the final statistical analysis.

What can result is a Type 1 error: the analysis finds a difference between the groups’ outcomes that doesn’t really exist.

“In science, generally, we allow for being wrong 5 percent of the time. If you use the wrong analysis methods with this kind of study, you might be wrong half the time. We’re not going to advance science if we’re wrong half the time,” said Murray, also a member of the Cancer Control Program in Ohio State’s Comprehensive Cancer Center.
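To make the statistical point concrete, here is a minimal simulation sketch in Python. It is illustrative only and is not drawn from any of the reviewed trials; the number of groups, the group size, and the intraclass correlation of 0.05 are assumed values. The sketch generates data with no true intervention effect, then compares an individual-level t-test that ignores group membership with a t-test on group means, one simple analysis that treats the group as the unit of analysis.

```python
# Illustrative sketch: why ignoring group membership inflates Type 1 error.
# All parameters below are assumptions chosen for demonstration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_groups, members, icc = 20, 50, 0.05          # 10 groups per arm; assumed ICC of 0.05
sigma_b, sigma_w = np.sqrt(icc), np.sqrt(1 - icc)
n_sims = 2000
naive_rejections = cluster_rejections = 0

for _ in range(n_sims):
    # Outcomes share a group-level component, but there is NO intervention effect.
    group_means = rng.normal(0.0, sigma_b, n_groups)
    y = group_means[:, None] + rng.normal(0.0, sigma_w, (n_groups, members))
    arm_a, arm_b = y[:10], y[10:]

    # Inappropriate analysis: t-test on individuals, ignoring group membership.
    _, p_naive = stats.ttest_ind(arm_a.ravel(), arm_b.ravel())
    naive_rejections += p_naive < 0.05

    # One appropriate alternative: t-test on group means, so the group is the unit of analysis.
    _, p_cluster = stats.ttest_ind(arm_a.mean(axis=1), arm_b.mean(axis=1))
    cluster_rejections += p_cluster < 0.05

print(f"Individual-level test: false-positive rate {naive_rejections / n_sims:.0%}")
print(f"Group-mean test:       false-positive rate {cluster_rejections / n_sims:.0%}")
```

Under these assumed settings, the individual-level test rejects far more often than the nominal 5 percent, while the group-mean test stays close to 5 percent, which is the pattern Murray describes.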

The review identified 75 articles published in 41 journals that reported intervention results based on group-randomized trials related to cancer or cancer risk factors from 2002 to 2006. Thirty-four of the articles, or 45 percent, reported the use of appropriate methods to analyze the results. Twenty-six articles, or 35 percent, reported only inappropriate methods in the statistical analysis. Eight percent of the articles used a combination of appropriate and inappropriate methods, and nine articles, or 12 percent, provided too little information to judge whether the analytic methods were appropriate.

“Am I surprised by these findings? No, because we have done reviews in other areas and have seen similar patterns,” Murray said. “It’s not worse in cancer than anywhere else, but it’s also not better. What we’re trying to do is simply raise the awareness of the research community that you need to attend to these special problems that we have with this kind of design.”

The use of inappropriate analysis methods is not considered willful or in any way designed to skew results of a trial, Murray noted.

“I’ve seen creative reasons people give in their papers for using the methods they use, but I’ve never seen anybody say it was done to get a more significant effect. But that’s what can happen if you use the wrong methods and that’s the danger,” he said. “What we want to know from a trial is what really happened. If an intervention doesn’t work, we need to know that, too, so we can try something else.”

The review also is not an indictment of the study design. Murray is a proponent of such trials and was the first U.S. expert to author a textbook on the subject (Design and Analysis of Group-Randomized Trials, Oxford University Press, 1998).

He also is a co-investigator on three group-randomized trials in progress at Ohio State. Two trials use specific clinics as the assigned groups. One is analyzing the effectiveness of having specially trained guides help cancer patients negotiate the health-care system. The second is investigating the effectiveness of aggressive physician promotion of colorectal cancer screening for patients with cancer risk factors. A third trial will use Appalachian counties as groups to compare the effectiveness of a media campaign to promote colorectal cancer screenings.

“We’re not trying to discourage people from using this design. It remains the best design available if you have an intervention that can’t be studied at the individual level,” Murray said.

###

This review study was supported by grants from the National Cancer Institute and the American Cancer Society.

Murray conducted the review with Sherri Pals of the Centers for Disease Control and Prevention; Jonathan Blitstein of RTI International in Research Triangle Park, N.C.; Catherine Alfano of the Division of Health Behavior and Health Promotion in Ohio State’s College of Public Health; and Jennifer Lehman of Ohio State’s Department of Family Medicine.

Contact: David Murray, (614) 293-2928; dmurray@cph.osu.edu

Written by Emily Caldwell, (614) 292-8310; Caldwell.151@osu.edu

Ads for SSRI antidepressants are misleading, say researchers

Consumer ads for a class of antidepressants called SSRIs often claim that depression is due to a chemical imbalance in the brain, and that SSRIs correct this imbalance, but these claims are not supported by scientific evidence, say researchers in PLoS Medicine.

Although scientists in the 1960s suggested that depression may be linked to low brain levels of the chemical serotonin (the so-called “serotonin hypothesis”), contemporary research has failed to confirm the hypothesis, they say.

The researchers, Jeffrey Lacasse (a doctoral candidate at Florida State University) and Dr. Jonathan Leo (a neuroanatomy professor at Lake Erie College of Osteopathic Medicine), studied US consumer advertisements for SSRIs in print, on television, and on the Internet. They found widespread claims that SSRIs restore the serotonin balance of the brain. “Yet there is no such thing as a scientifically established correct ‘balance’ of serotonin,” the authors say.

According to Lacasse and Leo, in the scientific literature it is openly admitted that the serotonin hypothesis remains unconfirmed and that there is “a growing body of medical literature casting doubt on the serotonin hypothesis,” which is not reflected in the consumer ads.

For instance, the widely televised animated Zoloft (sertraline) commercials have dramatized a serotonin imbalance and stated, “Prescription Zoloft works to correct this imbalance.” Advertisements for other SSRIs, such as Prozac (fluoxetine), Paxil (paroxetine), and Lexapro (escitalopram), have made similar claims.

In the US, the FDA is responsible for regulating consumer advertisements, and requires that they be based on scientific evidence. Yet, according to Lacasse and Leo, the mismatch between the scientific literature and the SSRI advertisements is “remarkable, and possibly unparalleled.”

And while the Irish equivalent of the FDA, the Irish Medicines Board, recently banned GlaxoSmithKline from claiming in their patient information leaflets that paroxetine (Paxil) corrects a chemical imbalance, the FDA has never taken any similar action on this issue.

Commenting on Lacasse and Leo’s work, Professor David Healy of the North Wales Department of Psychological Medicine, said: “The serotonin theory of depression is comparable to the masturbatory theory of insanity.  Both have been depletion theories, both have survived in spite of the evidence, both contain an implicit message as to what people ought to do.  In the case of these myths, the key question is whose interests are being served by a widespread promulgation of such views rather than how do we test this theory.”

Dr Joanna Moncrieff, Senior Lecturer in Psychiatry at University College London, said: “It is high time that it was stated clearly that the serotonin imbalance theory of depression is not supported by the scientific evidence or by expert opinion.  Through misleading publicity the pharmaceutical industry has helped to ensure that most of the general public is unaware of this.”

A Common Microbe Could Help To Trigger Alzheimer’s

A COMMON microbe could help to trigger Alzheimer’s disease, say researchers in the US. If true, their controversial claim could turn the multimillion-dollar field of Alzheimer’s research on its head and force a rethink on how to prevent the disease.

The microbe in question is Chlamydia pneumoniae, which is spread by coughs and sneezes. By the age of 20, half the population have been infected with C. pneumoniae, and the likelihood of being infected increases with age. The bacterium has already been accused of triggering atherosclerosis, the arterial blockages that can lead to heart attacks (“Can you catch a heart attack?”, New Scientist, 8 June 1996, p 38).

Alan Hudson at Wayne State University in Detroit and his colleagues did postmortems on the brains of 19 Alzheimer’s patients and 19 people of the same age who had died of other causes. They found signs of C. pneumoniae in 17 of the Alzheimer’s sufferers, in the hippocampus and temporal cortex. These are the parts of the brain which usually sustain most damage in Alzheimer’s disease. Unaffected areas of the brain were much less likely to harbour the bacterium. The bacterium turned up in the brain of only one of the non-Alzheimer’s patients.

The team also managed to culture the microbe from two of the affected brains, showing that the organism was still alive rather than a long-dead bystander (Medical Microbiology and Immunology, vol 187, p 23).

C. pneumoniae’s presence in the diseased brains does not mean that it causes Alzheimer’s, the scientists stress. But they think the bacterium may at least be a risk factor. Chlamydia bacteria do cause inflammation when they attack other parts of the body. And the brains of people with Alzheimer’s are inflamed and contain high levels of messenger chemicals called cytokines, which trigger inflammation.

Hudson says the bacterium infects microglia and astroglia, the cerebral cousins of scavenger cells called macrophages, and this produces inflammatory cytokines. “It seems reasonably likely that C. pneumoniae could be causing the inflammation,” says Hudson.

Author: Phyllida Brown. New Scientist, 15 August 1998, page 24.
