Sometimes, when I read the scientific literature, I find myself wondering: why, exactly, do advocates of “science-based medicine” hold it so holy? The recent story about mobile phones and brain activity is the latest example. (And yes, I hold mainstream, scientific medicine dear too, and it is and will remain my first resort for any illness. But I am bothered by the state of biomedical research today, and what it may mean for the future.)
“Evidence-based medicine” is — to put it briefly — the rather reasonable-sounding notion that the best available evidence on the efficacy of various treatments should be used to inform the course of action in individual cases. For some people, this is not enough: they are proponents of what they call “science-based medicine” (SBM). It is hard to find a precise definition of this in the writings of these proponents, or on their webpages, but what they seem to be saying is that not only should efficacy as reported in statistical trials be considered, but also the prior probability, based on what we know of science, that the treatment works. In other words, they are Bayesians. Their problem with evidence-based medicine (EBM) is that it “favors equivocal clinical trial data over basic science, even if the latter is both firmly established and refutes the clinical claim” [Kimball Atwood].
I myself am a Bayesian (which means my position is that poorly estimated priors are better than no priors at all, and that “maximum likelihood” is almost always the wrong approach), and I agree with much of what they say — in fact, their comments on the p-value fallacy should be required reading for all biomedical researchers: no single concept in statistics is more widely misused across science. Nonetheless, there are two problems with the approach of the advocates of SBM. The first is that estimating priors is hard, so they mostly go for the easy target of homeopathy, whose underlying principles conflict so strongly with everything we know of science that a “prior probability” of zero can be assigned with little argument. But, as I discuss below, a problem remains even in that case. The second is their nomenclature, which suggests that because something has been published in a science journal, it is “science-based” — even if that is not exactly what they are saying (they do criticise the scientific literature regularly).
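The Bayesian point can be made concrete with a back-of-envelope calculation. The numbers below (a test with power 0.8 at a significance level of 0.05) are illustrative assumptions of mine, not figures from any of the studies discussed; the point is only to show how a low prior probability tempers a “significant” trial result.

```python
# A hedged sketch of the SBM argument: the posterior probability that a
# treatment works, given a "significant" trial, depends heavily on the prior.
# Assumed numbers (power = 0.8, alpha = 0.05) are illustrative only.

def posterior(prior, power=0.8, alpha=0.05):
    """P(treatment works | trial was 'significant'), by Bayes' rule."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>5}: posterior {posterior(prior):.3f}")
# With a 50% prior, a significant result is convincing (~0.94);
# with a 1% prior, the same result leaves the claim probably false (~0.14).
```

This is why a plausible-on-priors treatment and homeopathy should not be treated identically even if they produce identical p-values.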
Anyway: the health story du jour (or du mois) seems to be on mobile phones: supposedly some researchers at NIH have shown that a mobile phone, kept next to the ear, increases brain activity. There is much media coverage on the net; the NYT’s Well blog reviews it here. The original paper is here (abstract only, but I’ve had a look at the full text).
Briefly, the story is this: the researchers strapped mobile phones to both ears of 47 healthy participants. For each participant, one phone was turned off, and a call was made to the other for 50 minutes; both phones were silenced, so that the participant could not tell which one was receiving the call. Brain glucose metabolism was measured for the last 30 of those minutes, and a significant increase was found “near the antenna” of the phone.
The first obvious question to me was: what if this is not because of radiation from the phone, but simply because of the heating of the device? It doesn’t seem unlikely to me that temperature causes changes in glucose metabolism. Any mobile phone that is used for more than a few minutes becomes significantly warm. This is not discussed in the paper at all (thermal effects of phone radiation are mentioned and dismissed, but my point is the warmth of the phone itself). The point is briefly mentioned in the Well blog: reportedly, the researchers say this is “unlikely” because it occurred “near the antenna rather than where the phone touched the head”. But this is bizarre, because the paper says these were Samsung SCH-U310 phones (these are hinged flip-phones), and “cell phones were placed over each ear with microphones directed toward the participant’s mouth and were secured to the head using a muffler that did not interfere with the lower part of the cell phone, where the antenna is located.” This sounds like the antenna was nowhere near the brain, and the parts of the brain that were closest to the antenna were precisely the parts closest to the earpiece: the orbitofrontal cortex and the temporal pole.
So, “unlikely” is hardly an adequate answer in this case. A control could easily have been done to eliminate this possibility: perform trials with fake phones that contained heating devices but emitted no radiation. Why wasn’t this done?
It need not even be that the heat directly increased the glucose metabolism. The heat would certainly alert the participant as to which ear the phone was strapped on. This would, in turn, likely affect brain activity — especially as the orbitofrontal cortex is involved in decision-making, emotion and reward, all of which matter to a participant in a trial. Whether such changes in activity would occur predominantly on the same side as the phone, I can’t say, and I doubt neuroscience is sufficiently advanced to answer that question. But this was even easier to control for: simply ask the participants to guess which ear, if either, had the “live” phone, and check whether the guesses were better than random.
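That control would amount to a simple binomial test. The sketch below shows what it would look like; the count of correct guesses is a hypothetical number of my own, since no such data were reported.

```python
# A hedged sketch of the missing control: could the participants tell which
# ear had the "live" phone? Under blinding, guesses should be at chance.
from math import comb

def binom_p_value(k, n, p=0.5):
    """One-sided P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    correct guesses out of n if guessing were purely random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: if 33 of the 47 participants guessed correctly,
# how surprising would that be under pure chance?
print(f"P(>= 33 correct of 47 by chance) = {binom_p_value(33, 47):.4f}")
```

A small p-value here would mean the blinding failed (e.g. because the live phone was warm), undermining the study; a value near 0.5 would support the blinding.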
The lack of such controls, alone, makes this experiment entirely worthless — but I can’t help suspecting worse: the controls are so obvious that one wonders whether they were omitted for a reason. (Or, worse still, done and not reported.)
Unfortunately it is not a unique example. For a while now, John Ioannidis has been arguing, convincingly, that “most published research findings are false” [PLoS Medicine] and that medical statistics are “lies, damned lies” [The Atlantic], precisely because of these sorts of problems in statistics, analysis and controls. The proponents of “science-based medicine” are certainly aware of Ioannidis’s work and in fact claim to agree with it [Steve Novella]. But Novella argues, unconvincingly in my opinion, that “systemic problems”, “poor design”, small sample sizes and bias towards positive results are not “the primary reason” for Ioannidis’s observations. In effect, he says p-values are to blame again, and prior probabilities would solve the problem. In the mobile-phone example above, p-values have nothing to do with it, and — though some physicists have claimed that it is impossible for mobile-phone radiation to affect the brain, because the frequencies don’t correspond to energies that can affect chemical bonds — few scientists would put a very low prior probability on other effects of the radiation. The problem, here and, I expect, in a huge number of other studies, is inadequate detail and the ignoring of other possibilities.
This is an example of a study that was clearly of general interest, and bound to receive tremendous scrutiny, and yet it survived the review process and has received little critical attention after publication in one of the most prestigious medical journals: the media coverage that I have seen has been unquestioning. The vast majority of scientific papers receive far less attention, so it would be surprising if their supporting evidence were much better. The problems are (1) tremendous pressure on scientists to publish, and on journals to be newsworthy (this leads to fiascos such as the arsenic-eating-bacteria story — though that story seems to have corrected itself, as science is supposed to do), combined with (2) insufficient resources for thorough pre-publication peer review and insufficient attention to most papers after publication. It seems to me that biomedical research suffers particularly from these problems, and as long as the “science-based medicine” community doesn’t acknowledge the situation, it will get worse.
Back to homeopathy. Its principles are that like cures like, and dilution strengthens the medicine. These principles are indeed rubbish, and the sort of dilutions (12C or more) often described would ensure that not a single molecule of the medicine remains in the mixture. So the prior scientific probability that it works is zero, right? Not quite, for two reasons.
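The arithmetic behind the “not a single molecule” claim is easy to check. The sketch below assumes, purely for illustration, that the remedy starts from one mole (about 6.022 × 10²³ molecules) of active substance; each “C” step is a 1:100 dilution.

```python
# Back-of-envelope check: expected molecules of the original substance
# remaining after n homeopathic "C" dilutions (each a 1:100 step).
# Illustrative assumption: we start from one mole of active substance.
AVOGADRO = 6.022e23

def molecules_left(c_dilutions, starting_molecules=AVOGADRO):
    """Expected number of original molecules after c_dilutions 1:100 steps."""
    return starting_molecules / 100**c_dilutions

print(molecules_left(12))   # ~0.6: at 12C, less than one molecule expected
print(molecules_left(30))   # at 30C, effectively zero
```

At 12C the dilution factor (10²⁴) already exceeds Avogadro’s number, which is exactly the point made above.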
First, placebos can work: this has been known for a while, but this recent study suggested that patients who were aware that they were being given placebos still showed improvement. The patients were told that there was no medicine in the placebo, but it may still create an effect via a “mind-body interaction”. We really do not understand the mind-body interaction at all, and it would be silly to put prior probabilities on it. Of course, this means a homeopath need not be careful about labelling bottles: what matters is not what “medicine” the homeopath gives the patient but what the homeopath tells the patient. (I have actually heard this claim made in seriousness, from a pro-homeopathy viewpoint. It is unlikely to have been Hahnemann’s view, but I’m sure there are practising homeopaths who do not treat all of Hahnemann’s utterances as Biblical truth.)
But the other problem is that homeopathy, as actually practised — at least here, in India — need not be infinitely diluted at all. (Again, perhaps some homeopaths think for themselves and are not convinced about infinite dilution.) Some of the medicines I saw recently had dilutions like 6X (one part in a million), hardly a vanishing dilution, and were extracts of plants (like Asoka) that are traditionally regarded as medicinal in India. So they could in fact be having an effect by a non-placebo mechanism.
It is fashionable for SBM advocates to dismiss traditional/herbal medicines too on the grounds that the theories (qi, doshas, and so on) are nonsense, and the preparations are not scientifically validated. To a large extent I agree: there is not enough systematisation, there are too many dubious practitioners who add steroids and heavy metals to their “ayurvedic” medicines, and so on. But utterly wrong theories can still give correct results: and in many cases, the choice of plant-based remedies is based not on mysticism but on centuries of observation. Every Indian knows the value of neem, turmeric, tulsi, and other household plants and spices. It is not a priori improbable that lesser-known plants, that have been described for centuries as having specific benefits for specific ailments, may actually have some of those benefits.
There are problems in both the theory and the practice of modern, “scientific” medicine. But, despite these problems, there is no doubt that scientific medicine is, on the whole, more likely to cure you in most cases, is the only recourse for deadly diseases like cancer, and is solely responsible for blunting or eliminating many infectious diseases — smallpox, polio, tetanus, and many more — that have plagued humanity for centuries. Nevertheless, I think a little humility would befit some practitioners of “science-based medicine” when talking about traditional systems. Science, in theory, is neat, orderly and systematic. In practice, it’s as human as any other endeavour. And every scientist knows that. But it is also meant to be “self-correcting” over time. My fear is that this may no longer be true. We need to take people like Ioannidis seriously, and not dismiss his findings as statistical consequences of large numbers.