What injecting adrenaline in fingers can teach us about medical dogma
Lessons about injecting stuff, medical myths and how medical knowledge spreads
Wait - inject what in what?
Let me be clear - this isn’t a complex metaphor. The first half of this text is simply a deep-dive into whether it’s good or bad to inject adrenaline into fingers.1 But this odd topic has several valuable lessons about how medical knowledge changes over time, and how to drive change in healthcare.2
So let’s start from the beginning: why would anyone inject adrenaline in someone’s finger?
A Stitch in Time, Pain Declines
People have been hurting themselves throughout human history.3 As long as there have been emergency departments, there have been lots of patients with cuts and wounds that need stitches. Stitching a wound was historically quite painful - at least until the early 1900s, when local anesthetics were invented.4
Before stitching a wound, doctors numb the area using a local anesthetic. If you have a wound on your finger or toe5, clinicians often perform a “digital nerve block” by injecting anesthetic at the base of the toe/finger/body part to numb it completely.6
When I practiced medicine I was taught (and for several years taught other physicians) a well-known dogma: you should never, ever add adrenaline (also called epinephrine) to the local anesthetic when anesthetizing fingers, ears or penises.
Why? Well, one of adrenaline’s many effects is that it contracts blood vessels. I learnt that injecting adrenaline in a finger would severely contract the blood vessels bringing blood to the finger. This would result in the finger losing circulation, tissue starting to die, and basically the patient losing their finger.
And to be honest, it made a lot of sense. Adrenaline does contract vessels. In the same way that the anesthetic shut off the nerve in the rest of the finger, adrenaline could reasonably also contract the vessels transporting blood to the finger.
Questioning dogma
However, one day I heard a doctor in a medical podcast state that the whole adrenaline-being-dangerous-in-fingers was a myth. I was dumbfounded. Could every single surgeon and emergency physician I’d ever met be mistaken? Could every guideline I had read about adrenaline in fingers be wrong?
It sounded extremely unlikely, but I decided to ask a simple question: is it actually dangerous to use adrenaline in a digital nerve block? And I did what any nerd with too much spare time would do - I dug deep into the literature.
What I found out
The belief that adrenaline was dangerous in fingers was unfounded. It seemed to have originated from a few old case reports where other factors were more likely to have caused the harm (e.g. extremely high drug concentrations, or concurrent infections). The authors had confused correlation with causation and drawn the erroneous conclusion that the adrenaline had caused the harm.
Many (i.e. 30+) studies had shown that using adrenaline in fingers is safe. As early as the 1970s, several studies (covering over 250 000 patients) showed that clinicians had used adrenaline in fingers and toes without a single complication.
However, it is very difficult to reverse a dogma once it has become established.
Falsehood flies, and the Truth comes limping after it7
Some 45 years after those studies, in 2015, I published my findings in a systematic review.8 Before publishing it I asked my nurse and physician colleagues what they believed - and everyone was still convinced that adrenaline should always be avoided.
What I realized
From a clinical perspective I realized that adrenaline could be slightly beneficial in anesthetizing fingers, as it can decrease the bleeding from a wound, and prolong the duration of the anesthetic. This is relevant: less blood makes suturing easier and longer duration means less pain for patients.
But that’s not all.
The bigger lesson
More importantly, I also ended up learning a lot about how medical knowledge updates itself.
Medical knowledge - a slow-moving network of constant change
I. Medical knowledge is sustained by social networks of beliefs. Medical research is an engine of progress - slowly but surely giving humanity an increasingly accurate understanding of how our bodies and treatments work.
But no clinician has time to read all the new literature in their field9. Many clinicians instead base their management of patients on international or national guidelines, or on local clinical decision supports. An equally important source of knowledge is what one’s peers do. The key concept here is trust. Despite not having read and critically assessed the original studies, physicians trust guidelines written by physicians they’ve never met. Despite not knowing exactly how their colleagues weigh conflicting studies against each other, they trust them to create local clinical decision supports that make sense. This social network of trust is to some extent inevitable, because there is simply too much data and research to assess.
However, research in itself isn’t enough. It doesn’t always matter whether there are 20 or 1000 studies showing that something is safe. If a senior physician says that adrenaline is dangerous, then people who trust that physician will be more likely to perpetuate that belief, which in turn spreads it further. What your colleagues believe matters as much as (or more than) what the books say. Books don’t raise their eyebrows when you tell everyone in a morning meeting that you injected adrenaline into someone’s finger. Books don’t judge.
This creates several interlinked networks of beliefs, with key nodes that affect how quickly or slowly new knowledge gets accepted. This belief network has many consequences. For example, it may disincentivize clinicians from actively finding and critically assessing new contradictory knowledge, because even if they find something to be true in the literature, it might not lead to a change in practice. Why bother reading the literature, if success is defined by adhering to guidelines?
At the same time, this inertia also protects patients from individual clinicians misinterpreting the literature, and from abandoning the status quo too easily. In other words, the inertia in changing medical knowledge is not only a negative bug, but also a positive feature.
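This trust-network inertia can be made concrete with a toy simulation. The sketch below is purely illustrative and assumes invented names, trust edges, and weights (none of which come from the essay): each clinician adopts the trust-weighted majority belief of the colleagues they listen to, and a heavily weighted senior node keeps the myth alive even after one junior doctor has read the literature.

```python
# Toy model of belief inertia in a clinical trust network.
# Illustrative sketch only -- the agents, edges, and weights
# below are invented assumptions, not data from the essay.

def step(beliefs, trust):
    """Each clinician adopts the trust-weighted majority belief
    of the colleagues they listen to."""
    new = {}
    for person, peers in trust.items():
        weight_for_myth = sum(w for p, w in peers if beliefs[p])
        weight_against = sum(w for p, w in peers if not beliefs[p])
        new[person] = weight_for_myth >= weight_against
    return new

# Everyone starts out believing the myth ("adrenaline is dangerous"),
# except one junior doctor who has read the literature.
beliefs = {"senior": True, "a": True, "b": True, "junior": False}

# Trust edges: (who they listen to, how much). The senior
# physician is a heavily weighted node for the others.
trust = {
    "senior": [("senior", 1)],            # listens only to themself
    "a": [("senior", 5), ("junior", 1)],
    "b": [("senior", 5), ("a", 1)],
    "junior": [("junior", 3), ("senior", 1)],
}

for _ in range(10):
    beliefs = step(beliefs, trust)

# As long as the key node still holds the myth, the rest of the
# network never updates, no matter how many rounds pass.
print(beliefs)
```

The point of the toy model is the one the essay makes: adding more evidence (more dissenting low-weight nodes) changes little as long as the high-trust node is unmoved, which is why targeting key nodes (podcasts, guidelines, senior physicians) matters more than publishing alone.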
II. Medical myths persist long after science has disproved them. There are still myths and misconceptions out there. Science isn’t perfect, errs along the way, and diffuses slowly across the nodes of knowledge in healthcare systems. Because these networks of knowledge, spread out among clinicians and guidelines, take time to change, there will always be medical myths kept alive by the medical profession at any given time.
I found this lesson valuable. I didn’t stop believing in guidelines or the existing knowledge base, but I became aware that what is seen as best practice today (in certain areas of knowledge) might actually be outdated in the near future (or already today).10
III. Finally, I realized that medical dogma must be questioned for knowledge to progress. This case proves the point. If everyone had accepted that adrenaline causes fingers to fall off after the first article in the late 19th century, no researcher would ever have questioned the dogma or performed the experiments that disproved the myth. It’s critical to support contrarians who challenge the consensus. This critical skepticism decreases the average life expectancy of medical myths, and can have a large impact on the wellbeing of humankind11.
How to speed up the updating
Is a literature review enough to change a dogma? I don’t think so. But it can be a start. When I check the Swedish guidelines today, some 8 years and 50 citations later, I don’t see the same caveats about adrenaline, and maybe this study helped a teeny tiny bit.
But I was impatient at the time. So to disprove the myth, and to address the social networks of knowledge in my clinical practice, I wanted to find a way to make the conclusion very clear. I ended up recording myself injecting my own finger with a local anesthetic containing adrenaline, and showing the video at a conference. Yep, the picture higher up in the essay is me, injecting my finger with adrenaline, and giving birth to the new, and so far not very successful, expression “putting my finger where my mouth is”.
Many people I worked with didn’t have the time or energy to read scientific papers. But everyone had time to watch a video of a foolish junior doctor injecting adrenaline into his finger.
Many clinicians follow medical podcasts or blogs that discuss new research or how to manage certain conditions. The video might have contributed to those kinds of knowledge nodes picking up my article, which in turn made the knowledge reach more people much more quickly.
A confession: Despite ploughing through the literature, I remember that I was nervous at the time. If I was wrong in my belief, and adrenaline actually was dangerous, it would be quite embarrassing to seek care. Would I forever be known as the 9-fingered doctor that didn’t understand guidelines? A cautionary tale for junior physicians who dare question guidelines?
Fortunately, the previous research was true, and I got to keep my finger.
On the topic of how knowledge is disseminated, I believe that large language models will act as central nodes in the process of translating research to clinical practice. So while writing this I asked ChatGPT:
The kicker? The source ChatGPT cited was my review from 2015.
Is a literature review enough to change a dogma?
I don’t think so. But it can be a start.
Emmetropes
Medical knowledge is sustained by social networks of knowledge. The resulting inertia protects patients from new ineffective or detrimental practices, but also delays implementation of useful knowledge.
Medical myths often persist long after science and our best available knowledge have debunked them.
Medical dogma must be questioned for knowledge to progress, and contrarian and seemingly odd hypotheses need to be supported. This has repeatedly been proven to be important for pivotal research.
Communication skills (especially nonconventional ones) may accelerate the diffusion of new knowledge through social nodes of knowledge.
So don’t try this at home. Unless your home is a hospital. And in that case you’re probably an overworked doctor or a nurse. And if so, you definitely need to stop working so many shifts. I digress.
I use the word odd on purpose 🤓 Odd comes from Middle English odde (“odd, leftover after division into pairs”), which comes from Old Norse oddi (“odd, third or additional number; triangle”), which comes from oddr (“point of a weapon”), which comes from Proto-Germanic *uzdaz (“point”), which in turn comes from Proto-Indo-European *wes- (“to stick, prick, pierce, sting”). And yes, this footnote was purposefully tangential, to make the other ones seem better by comparison! Thanks Wiktionary!
Fun fact #1: In the 15th century the Wound Man was born - a type of table of contents which guided readers to how to treat various types of wounds. Read more in this article by Jack Hartnell.
Fun fact #2: Albert Niemann is credited with being the first to isolate cocaine from coca leaves. One article states that he “noticed that when he placed the crystals on his tongue, it made his tongue feel numb”. Nowadays taking a little nibble of a newly synthesized substance isn’t really standard procedure. Medical research has really changed a lot in the past century. Summarized well by SMBC:
Fun fact #3: Also true for penile wounds. Fortunately it’s quite rare to have wounds on all those locations at the same time.
Why the base? Well, the nerves that transmit pain sensations from the top of your fingers run from the top of the fingertips, up your hand, to your arm and finally to your brain. So by blocking the nerves at the base of the finger you can block the nerve signals from travelling up to your arm/brain, thereby numbing the whole finger. The same principle applies for toes and 🍆s.
This is quite similar to Terry Pratchett’s “A lie can run round the world before the truth has got its boots on”. Turns out that (1) history, unsurprisingly and unfortunately, is filled with similar quotes and (2) this isn’t just a saying. Some empirical data supports that lies travel faster on digital channels such as Twitter / X.
That is unless they only read new articles in a very narrow field, but I believe that’s an exception rather than a rule.
Nuance is needed: there are some areas where our knowledge is robust and very unlikely to be disproven in the future. However, in general, new therapies often turn out to be less useful than originally believed, and so-called medical reversal is more common than you would think.
Helicobacter pylori is a bacterium that lives in the stomach and causes stomach ulcers and gastritis. The two researchers who discovered it (and eventually received the Nobel Prize) were initially ridiculed for their hypothesis. The common view was that stomachs were too acidic for any bacteria to survive. To quote Wikipedia, one of the researchers stated "everyone was against me, but I knew I was right." An early paper describing the bacterium was rejected by the journal, deemed among the worst 10% of the studies the journal received that year. It’s never easy to challenge the existing paradigm.
It takes a generation. https://www.perplexity.ai/search/Dr-semmelweiz-how-_ezD4jZjQu24ssvgR8hwMA