
"I do my own research"

Something that I see a lot in on-line debates about alternative medicine is phrases like “I did my own research” or “people should be allowed to do their own research and make their own decisions”.

However, I don’t think that the vast majority of people are able to do their own research. Now, that’s probably a pretty unpopular opinion. It’s patronising, paternalistic, and it flies in the face of patient choice. Who am I to question the intelligence and abilities of other people? Why do I think I'm so clever compared to anyone else out there? Allow me to explain myself.

I've been a pharmacist for a very long time now. From uni, through pre-reg, to my own revision at work, I've been taught critical appraisal skills. Yet to this day, it’s something that I actually find really hard work. It’s a skill that requires continual honing, and every time I use it I feel like I am fighting with my brain. 

Even in the last two weeks, I've been revisiting my critical appraisal skills to make sure they are up to date. I've done some in-house work, three on-line courses, and a one-to-one training session. Yet I still find myself sat here at my desk for several hours, if not days, looking over the same study with a furrowed brow, desperately trying to make the numbers and statistics tell me their story. If I find it so hard, then how on earth is someone without any medical background or critical appraisal training supposed to do any of it?

There are hazard ratios, odds ratios, confidence intervals, numbers needed to treat, event rates, absolute risks and other confuddling terms to deal with. I naturally struggle with numbers at the best of times; like most people, I much prefer narratives. That means that I have to constantly argue with myself to keep looking at the results page, rather than just flicking to the discussion. Because if I did that, I'd be relying on what the authors, with all of their possible biases and agendas, say their numbers say. Then, when I eventually manage to squeeze the swimming mass of figures into some sort of order in my head, I find out that these numbers aren't the full story, and I need to dig even deeper into other analyses of the same figures to find out what’s really going on.*
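To give a flavour of why these terms cause so much trouble, here's a quick worked example of the difference between relative and absolute risk. The numbers are entirely made up for illustration- they're not from any real trial:

```python
# Entirely made-up event rates, purely for illustration.
# Suppose 10% of patients on placebo have a stroke, versus 8% on the drug.
control_event_rate = 0.10
treatment_event_rate = 0.08

# Relative risk sounds impressive: "a 20% reduction in strokes!"
relative_risk = treatment_event_rate / control_event_rate   # ~0.8
relative_risk_reduction = 1 - relative_risk                 # ~0.2, i.e. 20%

# Absolute risk reduction is rather more modest: 2 percentage points.
arr = control_event_rate - treatment_event_rate

# Number needed to treat: how many patients you treat for one extra
# person to benefit.
nnt = 1 / arr                                               # ~50
print(f"RRR {relative_risk_reduction:.0%}, ARR {arr:.0%}, NNT {round(nnt)}")
```

Same numbers, three very different-sounding stories- which is exactly why it matters whether a headline quotes the relative or the absolute figure.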

A quick and very simplistic visualisation of all the layers of interpretation that might lead to information found on your common or garden health information website. That's a whole lot of bias.


It’s not a pleasant task by any stretch of the imagination. It really does feel like a mental marathon. I often question whether I am even up to the task- I can end up feeling stupid and confused. But in order to really figure out whether or not a drug works, I need to strip away all the levels of other people’s interpretation and start from scratch, with the cold, hard, impersonal numbers. That way I can build my own narrative, uninfluenced by what the study’s authors or sponsors want me to think, by what newspapers want me to believe, by what campaigners want me to know. The only way to know the truth is to start right at the bottom, in a dark dank pit of statistics, then to slowly start building yourself a ladder until you emerge, blinking, into the pleasant knowledge that you've worked out what on earth is going on.

This sort of raw data is not only extremely hard to deal with once it’s in front of you, but it’s also pretty difficult to come by. Finding it in the first place involves searching multiple medical databases- and these things aren't just a quick free text search like you would do on Google. Constructing a search can in itself take an hour or so, and then you have to trawl through the results to decide which are relevant to what you are specifically looking for. For me, most of the time, a question is structured like this:

What is the evidence that [drug/group of drugs] works for [disease] in [patient group]?

 So, in my poorly drawn Venn diagram below, I need to find those holy grail papers that reside in the pink area:

I am truly terrible at MS paint, but you get the idea.

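In code terms, the pink area is just a set intersection: run a search for each concept, then keep only the records that turn up in all three. A toy sketch, with invented database record numbers standing in for real search results:

```python
# Invented record IDs for three hypothetical concept searches.
drug_hits = {101, 102, 103, 104, 105}
disease_hits = {103, 104, 105, 106, 107}
patient_group_hits = {104, 105, 108, 109}

# The "pink area" of the Venn diagram: papers matching all three concepts.
holy_grail = drug_hits & disease_hits & patient_group_hits
print(sorted(holy_grail))  # the short list worth reading in full
```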

What a typical EMBASE search looks like. This is for a new drug with few synonyms so it’s a fairly straightforward one. Others can have forty-odd lines of searches.


Some of these papers might be pay-walled, so it’ll take me a week or so to get my hands on them. Some of them might initially look promising, but once you start to dig down into the figures you see that there might actually be problems with how they were undertaken or reported, or they might turn out to not quite fit in some way- perhaps the dose they used in the trial is different to the licensed dose in the UK, or the people enrolled into the trial don’t quite fit the population you want to know about, or perhaps the trial just didn't recruit enough people so any results from it are invalidated.

I've been doing this job for years, and I really do still struggle with all of this stuff. That’s not because I'm poor at my job, or because I'm stupid, or because I haven’t put the effort in to understand it. It’s because, when it comes down to it, this stuff is really bloody hard. It’s time-consuming, boring, and unintuitive.

People might well feel like they've done their own research. They might spend several hours digging about on the internet and feel empowered by any decisions that they make. But what they don’t realise is that what they've been researching isn't just the information- it’s the information with many, many layers of interpretation (and therefore bias) added. For a choice to be truly informed, you need to go right back to the start, to those terrifying tables of numbers and statistics. That’s simply not realistic for the majority of people.

Far better, then, to learn how to decide whose interpretation you’re going to rely on. Will it be those who take the media reports at face value, or who have an agenda or a product to sell you? Or will you go with those who have years of training in how to pull apart complicated data and disseminate it in understandable ways?


*I thought I’d give you a quick real-life example here, but I thought it best to asterisk it because I've probably bored you enough already. I'm currently looking at a drug called edoxaban and its use in reducing the risk of stroke in patients with atrial fibrillation. It’s the newest in a series of novel oral anticoagulant drugs- they’re supposedly like warfarin, but less faffy. So I find and look at the main trial, and spend days unpicking the stats. It looks like both strengths used in the trial are no worse than warfarin, and the higher dose might even be a little better. Great, right?

Well, that’s not quite the end of the story. Because it turns out- and this isn't reported in the trial at all, but instead is contained in the FDA’s briefing document- that in people with fully working kidneys, edoxaban is actually worse than plain old warfarin. In people whose kidneys aren't quite at full capacity though, it might work better than warfarin. So the overall trial results are kind of skewed, and if we didn't dig deeper, we might have been giving a whole group of people a more expensive drug with worse outcomes than warfarin. Even the FDA findings are borderline- some of what they describe doesn't reach statistical significance.
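To see how an overall trial result can gloss over a subgroup problem like this, here's a toy calculation. The numbers are invented for illustration and bear no relation to the actual edoxaban or FDA figures:

```python
# Invented figures only- nothing to do with the real trial data.
# Each tuple: (share of trial population, new-drug event rate, warfarin event rate)
subgroups = [
    (0.4, 0.030, 0.020),  # normal kidney function: new drug does WORSE
    (0.6, 0.015, 0.025),  # reduced kidney function: new drug does better
]

# Pooled (population-weighted) event rates across the whole trial.
new_drug_overall = sum(share * new for share, new, warf in subgroups)
warfarin_overall = sum(share * warf for share, new, warf in subgroups)

print(new_drug_overall, warfarin_overall)
```

Pooled together, the new drug looks better overall (roughly 2.1% vs 2.3% of patients having an event in this toy example), even though it's doing worse for four in every ten patients- which is why subgroup analyses buried in regulatory documents can matter so much.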

A comparison between medical and homeopathic information sources

Recently, I’ve been delving back into the world of homeopathy, and all of the nonsense that it entails.

Part of my research and preparation has been consulting homeopathic texts- materia medica and repertories that are still in use by modern homeopaths.

One thing that I have been repeatedly struck by is the stark differences in the quality of these information sources compared to those used in modern medicine. Let’s take a look at some of those differences.

Up To Date?

Part of my day job is resource management. This means that I need to make sure that all of the resources that we use and have access to are present and up to date. Whenever I use a book as part of my work, I document which edition I have used. If I use a website, I make sure to include when it was last updated. When we get a new book in the office, I find the old copy and cover it in stickers saying “Out of date- do not use”.

I don’t do these things because I am weird, or because I enjoy it. I do it to ensure that I give the most accurate, up to date information so that the patient gets the best care. What we know about medicines is constantly evolving- new medicines, new safety information, and new evidence are emerging daily. What might have been correct to the best of our knowledge last year may now have been superseded by more recent findings, and so the information sources I use change accordingly. So, for example, I can reach for a copy of the British National Formulary from 2005, and find information that recommends sibutramine as a weight loss aid in certain patients. However, if I look at the current version, I won’t see it in there, as it has since been withdrawn for safety reasons. If I were to have used the 2005 copy to advise a patient, I might have given them the wrong advice, in the context of what we know today.

How up to date is the information used by homeopaths? According to The Homeopathic Pharmacy (Kayne, S. 2nd Edition, published in 2006 by Elsevier Churchill Livingstone, page 192- I did warn you about the documentation): ‘The most well known are Boericke’s Materia Medica with repertory and Kent’s Repertory of the Homeopathic Materia Medica’. Sadly, the author of this book doesn’t see fit to bother telling us when these were published. Neither does the online version of it, although there is a bit of a hint in that the “Preface to the ninth edition” on there is signed off by William Boericke in 1927.

Nineteen Twenty Seven. Medicine and healthcare are pretty fast-paced, with new innovations and information coming out at an overwhelming rate. So much has happened in medicine since 1927 that there is no way that anyone should accept health care advice based on something written at that time. I know I certainly wouldn’t be too happy if my GP gave me health advice from a dusty tome, or if I went to the dentist’s to find them using equipment from the 1920s.

Maybe Kent’s Repertory will be more up to date? A quick look at the website gives us no clues. This time, the preface contains no date at all. The closest thing that we have to a publishing date is the fact that the website is copyright 1998, and appears to have been formatted by a default-font loving child in the early nineties.

Political Correctness

Over the years, medical terminology has changed and evolved along with society and scientific discoveries, and rightly so. In some cases, words that were once considered perfectly legitimate scientific terminology (such as ‘Mongol’, or ‘Mongoloid Idiocy’, used to describe a person with Down syndrome) are now considered downright offensive. Even whole swathes of what is now considered normal society (such as gay people) were once classified as illnesses- and of course we know better by now, or at least we should do, and if you don’t- grow the hell up, will you. We generally don’t refer to people as “hysterical” or “insane” anymore, as we know a lot more about such conditions, and so are able to categorise people more helpfully and professionally.

As a result, we healthcare professionals are very aware of how crucial the use of clear, concise, professional communication is, including the information in our resources. No self-respecting modern medical text would ever dream of using out-dated, offensive terms, and if it did, there would be an outcry.

Let’s have a look at the sort of thing that Boericke’s Repertory wants to help us to treat. There are things like “Brain-Fag”, “Cretinism”, “Masturbatic dementia”, “Fears of syphilis”, “hysteria”, “insanity”, “weak memory from sexual abuse”, “Haughty”, “Stupid”, and many others. These were just taken from the “Mind” section, but there are many other examples in the other sections too. These terms are just too outdated and are wholly inappropriate to be used in today’s society.

Having looked through various other Materia Medica entries too, I’ve found statements that are sexist, bigoted, and occasionally racist. Nice eh? You don’t find that sort of thing in an up-to-date copy of Martindale: The Complete Drug Reference.


Good, modern medical resources are all about clarity. They need to be- after all, if someone gives the wrong medical advice because they have interpreted something incorrectly, patients could be at risk.

Jargon is sometimes necessary, but nowadays medical jargon tends to use standardized, accepted terminology which keeps the risks of misinterpretation to a minimum.

Homeopathic repertories and materia medica, on the other hand, are full of vague, odd terms which are massively open to interpretation. What, pray tell, is a “voluptuous, tingling female genitalia” when it is at home? (and I wonder whether Ann Summers offers free delivery on such a thing?). What does “expectoration, taste, herbaceous” mean clinically? How is one supposed to diagnose “Taedium vitae”? When would you class a person as “Obscene, amative”, and when would they be considered as merely “gay, frolicsome, hilarious”?

In Conclusion

Our health is arguably the most important asset that we have. Why would we entrust it to sources which are terribly out of date, inaccurate, and in some cases, offensive?

Homeopaths like to paint themselves as a caring, human alternative to the more business-like, clinical world of real health-care professionals. But when this alternative categorises people as being “stupid”, or “cretinous”, and is happy to use criminally out of date resources which can risk peoples’ health, I wonder just how caring and ethical it really can be.  

I've said this before, and I'll say it again: why would you continue to use an abacus when calculators exist, and are proven to have a better record at getting the right answer?


The Society of Homeopaths and what they pass off as evidence

So today has seen some great news for rationality, science, and above all patients. The ASA has announced this ruling, leading to the Society of Homeopaths taking down a rather large chunk of their website- the bit about what homeopathy can be used for.

However, using their search function, you can still find some of the nonsense they are promoting. I stumbled across this article, for example, entitled "Homeopathy Offers Alternative Relief for hay fever sufferers". To be honest, I'd be very surprised if this article doesn't also get taken down soon. It really should, given that part of the ASA's ruling relates to their claims about the efficacy of homeopathy for hay fever.

That use of the word alternative (as opposed to complementary) is interesting. That in itself suggests that the Society of Homeopaths are advocating that patients forgo conventional medicines in favour of their homeopathic products.

One thing that I have learnt about homeopaths is that, despite the fact that they often claim that randomised controlled trials (and indeed science in general) can't explain their wondrous treatment because of its individualised nature and quantum nanoparticles blah blah all the other words that they're clinging onto, they like to cite trials. A lot.

Homeopaths will often spout names of trials and provide links to PubMed abstracts with abandon, even when the trials say little about the clinical use of homeopathy in humans. In vitro or animal trials are favourites, and on the odd occasion where I have been sent a human trial, the results usually show that homeopathy is no better than placebo, and in some cases actually worse than placebo. At best, I'm guessing this is just ignorance- maybe they have misread the results of the trial? At worst (and more realistically), it's a pretty obvious and petty method of obfuscation, and a pretty rubbish one at that. Presumably they think I will be so wowed by the fact that a trial exists that I won't bother to check the actual results of what the trial is saying.

This hay fever page offers a great example of this:

"A number of research trials have shown that homeopathic treatment can produce a significant improvement in hay fever symptoms,(4-7) but what does this involve?"

 Let's have a look at the "number of research trials", shall we?  

  • Reilly DT, Taylor MA, McSharry C, Aitchison T. Is homeopathy a placebo response? Controlled trial of homeopathic potency, with pollen in hayfever as a model. Lancet, 1986; 2: 881-6.

This is a trial from 1986. Really, that's the best they can do, in 2013? The abstract of this trial appears impressive: "The homoeopathically treated patients showed a significant reduction in patient and doctor assessed symptom scores", but neglects to mention the most important part of a study like this: blinding. How can we assess the placebo effect in a study that isn't blinded? Especially when the results rely only on reported outcomes. We can tick this one off the list as being a pretty rubbish effort at a trial. Must try harder.

  • Kleijnen J, Knipschild P, ter Riet G. Clinical trials of Homeopathy. Br Med J, 1991; 302: 316-22.

Ahh, the early nineties. We're getting thoroughly modern and hip now, eh? This is a meta-analysis. Hay fever isn't mentioned in the abstract at all, and the conclusion of the paper is that studies performed in homeopathy are rubbish, and better ones need to be done. Hardly a conclusive statement that homeopathy works for hay fever. We can tick this one off the list too.

We're now left with two trials to back up that statement above. To me, two trials is not "a number" of trials, even at this point. Even if these two trials were massive, robust, good quality randomised controlled trials, I still wouldn't be entirely convinced: I'd want to see the result replicated in as many different trials as possible. Anyway, we shall soldier on, in the hopes of being dazzled by the brilliance of these two references.

  • Launsø L, Kimby CK, Henningsen I, Fønnebø V. An exploratory retrospective study of people suffering from hypersensitivity illness who attend medical or classical homeopathic treatment. Homeopathy, 2006; 95: 73-80.

Oh dear. A retrospective study. So not a controlled trial at all, then? The results? "The two groups of patients were similar in respect of their health at the start of the treatment, 57% of the patients who consulted a CH experienced an improvement of their state of health compared to 24% of the GP patients." Well, that's all very well and good, but there is no blinding here whatsoever, and only 88 patients completed the study. It means nothing at all, except- as even the authors put it- as an exploratory study, maybe to find ways of conducting a more robust actual trial in the future.

That's it, down to the final trial. I'm expecting great things.  

  • Kim LS, Riedlinger JE, Baldwin CM, Hilli L, Khalsa SV, Messer SA, Waters RF. Treatment of Seasonal Allergic Rhinitis Using Homeopathic Preparation of Common Allergens in the Southwest Region of the US: A Randomized, Controlled Clinical Trial. Ann Pharmacother, 2005; 39(4): 617-24.

HURRAH!!! It's double-blind! We've gotten there! We've gotten some good, robust evidence tha- oh hang on, it's only got 40 participants in it. It's just a wee ickle study that's far too small to draw any conclusions from.
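For a rough sense of just how small 40 is, here's a back-of-envelope sample size calculation using the standard normal-approximation formula for comparing two proportions. The effect size is my own assumption, picked purely for illustration:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed per arm to detect a difference between
    proportions p1 and p2, at a two-sided alpha of 5% with ~80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Suppose 50% of placebo patients report improved symptoms anyway, and we
# want to detect a rise to 65% with the treatment (invented figures):
print(n_per_group(0.50, 0.65))  # well over a hundred per arm, not twenty
```

Even under generous assumptions like these, a 40-person trial has very little chance of reliably detecting a realistic effect.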

So there you have it. This page is still up there on their website, using crappy references that don't back up their claims. The Society of Homeopaths- and quacks in general- need to realise that, no matter how hard they try, just trying to shoehorn poor excuses for studies in wherever they like isn't good enough.

Here's how it should go: you look at the evidence, you evaluate the evidence, and you make your claim on the basis of that evidence. Not: "I shall claim this, then try desperately to find something that vaguely looks like it backs me up, and I'll just hope for the best that no-one else bothers reading it." It seems the Society of Homeopaths are going in for the latter, and good on the ASA for pulling them up on it.

"We told the Society of Homeopaths not to discourage essential treatment for conditions for which medical supervision should be sought, including offering specific advice on or treatment for such conditions. We also told them not to make health claims for homeopathy unless they held sufficiently robust evidence of efficacy." -ASA ruling