Something that I see a lot in on-line debates about alternative medicine is phrases like “I did my own research” or “people should be allowed to do their own research and make their own decisions”.
However, I don’t think that the vast majority of people are able to do their own research. Now, that’s probably a pretty unpopular opinion. It’s patronising, paternalistic, and it flies in the face of patient choice. Who am I to question the intelligence and abilities of other people? Why do I think I'm so clever compared to anyone else out there? Allow me to explain myself.
I've been a pharmacist for a very long time now. From uni, through pre-reg, to my own revision at work, I've been taught critical appraisal skills. Yet to this day, it’s something that I actually find really hard work. It’s a skill that requires continual honing, and every time I use it I feel like I am fighting with my brain.
Even in the last two weeks, I've been revisiting my critical appraisal skills to make sure they are up to date. I've done some in-house work, three on-line courses, and a one-to-one training session. Yet I still find myself sat here at my desk for several hours, if not days, looking over the same study with a furrowed brow, desperately trying to make the numbers and statistics tell me their story. If I find it so hard, then how on earth is someone without any medical background or critical appraisal training supposed to do any of it?
There are hazard ratios, odds ratios, confidence intervals, numbers needed to treat, event rates, absolute risks and other confuddling terms to deal with. I naturally struggle with numbers at the best of times; like most people, I much prefer narratives. That means that I have to constantly argue with myself to keep looking at the results page, rather than just flicking to the discussion. Because if I did that, I'd be relying on what the authors, with all of their possible biases and agendas, say their numbers say. Then, when I eventually manage to squeeze the swimming mass of figures into some sort of order in my head, I find out that these numbers aren't the full story, and I need to dig even deeper into other analyses of the same figures to find out what’s really going on.*
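To make a few of those terms a little more concrete, here’s a minimal sketch (in Python, with entirely invented numbers- this is not any real trial) of how some of them fall out of a simple two-arm trial:

```python
# Hypothetical trial: 1000 patients per arm. All numbers are invented.
events_treatment, n_treatment = 80, 1000   # e.g. strokes on the new drug
events_control, n_control = 100, 1000      # strokes on the comparator

# Event rates: the proportion in each arm who had the outcome
er_t = events_treatment / n_treatment      # 0.08
er_c = events_control / n_control          # 0.10

# Absolute risk reduction: the difference that actually matters to a patient
arr = er_c - er_t                          # 0.02, i.e. 2 percentage points

# Relative risk: sounds far more dramatic than the ARR
rr = er_t / er_c                           # 0.8, "a 20% relative reduction"

# Number needed to treat: how many patients for one extra good outcome
nnt = 1 / arr                              # about 50

# Odds ratio: close to the RR when events are rare, diverges when common
odds_t = events_treatment / (n_treatment - events_treatment)
odds_c = events_control / (n_control - events_control)
odds_ratio = odds_t / odds_c

print(f"ARR={arr:.3f}, RR={rr:.2f}, NNT={nnt:.0f}, OR={odds_ratio:.2f}")
```

Note that the same invented trial can honestly be described as “a 20% relative risk reduction” or as “two fewer strokes per hundred patients treated”- which is exactly why it matters who is doing the describing.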
It’s not a pleasant task by any stretch of the imagination. It really does feel like a mental marathon. I often question whether I am even up to the task- I can end up feeling stupid, and confused. But in order to really figure out whether or not a drug works I need to strip away all the layers of other people’s interpretation and start from scratch, with the cold, hard, impersonal numbers. That way I can build my own narrative, uninfluenced by what the study’s authors or sponsors want me to think, by what newspapers want me to believe, by what campaigners want me to know. The only way to know the truth is to start right at the bottom, in a dark, dank pit of statistics, then to slowly start building yourself a ladder until you emerge, blinking, into the pleasant knowledge that you've worked out what on earth is going on.
This sort of raw data is not only extremely hard to deal with once it’s in front of you, but it’s also pretty difficult to come by. Finding it in the first place involves searching multiple medical databases- and these things aren't just a quick free text search like you would do on Google. Constructing a search can in itself take an hour or so, and then you have to trawl through the results to decide which are relevant to what you are specifically looking for. For me, most of the time, a question is structured like this:
What is the evidence that [drug/ group of drugs] works for [disease] in [patient group]?
So, in my poorly drawn Venn diagram below, I need to find those holy grail papers that reside in the pink area:
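To give a flavour of why these searches aren't just Google queries, here’s a rough sketch (in Python, with hypothetical search terms) of the general shape a question like that gets turned into: synonyms for each concept joined with OR, then the concepts joined with AND. Real databases such as MEDLINE or Embase each have their own field tags and controlled vocabularies on top of this, so treat it as an illustration of the shape only:

```python
# Hypothetical search builder: OR within each concept, AND between concepts.
# Real databases add field tags and thesaurus terms; this is just the shape.
drug_terms = ["edoxaban", "factor Xa inhibitor"]
disease_terms = ["atrial fibrillation", "AF"]
population_terms = ["elderly", "aged"]

def or_block(terms):
    """Join the synonyms for one concept with OR, quoting each phrase."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(
    or_block(t) for t in (drug_terms, disease_terms, population_terms)
)
print(query)
# ("edoxaban" OR "factor Xa inhibitor") AND ("atrial fibrillation" OR "AF") AND ("elderly" OR "aged")
```

And that’s before you've decided which synonyms you've missed, which spellings vary between countries, and which of the hundreds of results are actually worth reading.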
Some of these papers might be pay-walled, so it’ll take me a week or so to get my hands on them. Some of them might initially look promising, but once you start to dig down into the figures you see that there might actually be problems with how they were undertaken or reported, or they might turn out to not quite fit in some way- perhaps the dose they used in the trial is different to the licensed dose in the UK, or the people enrolled into the trial don’t quite fit the population you want to know about, or perhaps the trial just didn't recruit enough people, so any results from it are unreliable.
I've been doing this job for years, and I really do still struggle with all of this stuff. That’s not because I'm poor at my job, or because I'm stupid, or because I haven’t put the effort in to understand it. It’s because, when it comes down to it, this stuff is really bloody hard. It’s time-consuming, boring, and unintuitive.
People might well feel like they've done their own research. They might spend several hours digging about on the internet and feel empowered by any decisions that they make. But what they don’t realise is that what they've been researching isn't just the information- it’s the information with many, many layers of interpretation (and therefore bias) added. For a choice to be truly informed, you need to go right back to the start, to those terrifying tables of numbers and statistics. That’s simply not realistic for the majority of people.
Far better, then, to learn how to decide whose interpretation you’re going to rely on. Will it be those who take the media reports at face value, or who have an agenda or a product to sell you? Or will you go with those who have years of training in how to pull apart complicated data and disseminate it in understandable ways?
*I thought I’d give you a quick real-life example here, but I thought it best to asterisk it because I've probably bored you enough already. I'm currently looking at a drug called edoxaban and its use in reducing the risk of stroke in patients with atrial fibrillation. It’s the newest in a series of novel oral anticoagulant drugs- they’re supposedly like warfarin, but less faffy. So I find and look at the main trial, and spend days unpicking the stats. It looks like both strengths used in the trial are no worse than warfarin, and the higher dose might even be a little better. Great, right?
Well, that’s not quite the end of the story. Because it turns out- and this isn't reported in the trial at all, but instead is contained in the FDA’s briefing document- that in people with fully working kidneys, edoxaban is actually worse than plain old warfarin. In people whose kidneys aren't quite at full capacity though, it might work better than warfarin. So the overall trial results are kind of skewed, and if we didn't dig deeper, we might have been giving a whole group of people a more expensive drug with worse outcomes than warfarin. Even the FDA findings are borderline- some of what they describe doesn't reach statistical significance.
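To show what “no worse than” and “doesn't reach statistical significance” actually mean in practice, here’s a small sketch (in Python, with invented numbers- emphatically NOT the real edoxaban figures) of how a hazard ratio and its confidence interval get read in a non-inferiority trial:

```python
# Invented numbers for illustration only - not the actual trial results.
# A hazard ratio (HR) below 1 favours the new drug. Non-inferiority trials
# pre-specify a margin, e.g. "the upper CI limit must stay below 1.38".
MARGIN = 1.38

def interpret(hr, ci_low, ci_high):
    """Read off the three claims a trialist might make from one HR + CI."""
    noninferior = ci_high < MARGIN                 # "no worse than" comparator
    superior = ci_high < 1.0                       # whole CI favours new drug
    significant = not (ci_low <= 1.0 <= ci_high)   # CI excludes "no difference"
    return noninferior, superior, significant

# Overall population: non-inferior, but not actually shown to be better
print(interpret(0.87, 0.73, 1.04))   # (True, False, False)

# A subgroup trending the wrong way: the point estimate is worse, but the
# CI straddles 1, so it doesn't reach statistical significance either
print(interpret(1.41, 0.97, 2.05))   # (False, False, False)
```

That second case is the borderline sort of finding I mean: the headline number looks alarming, but the confidence interval is too wide to be sure of anything- and you only find that out by going back to the figures yourself.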