Entries Tagged as randomized controlled trial
New AAN Guideline Evaluates Bell Palsy Treatments
Read the new practice guideline
Oral steroids can improve the likelihood of full facial recovery in people with new-onset Bell palsy, according to “Evidence-based Guideline Update: Steroids and Antivirals for Bell Palsy,” which was published electronically ahead of print on November 7, 2012, and appears in the November 27, 2012, issue of Neurology®. The efficacy of oral steroids is supported by well-designed, high-quality studies.
Efficacy of Antiviral Therapy Questionable
Antiviral therapy alone has not been shown in well-designed studies to increase the likelihood of full facial recovery. Physicians might offer antiviral drugs in addition to oral steroid treatment, but they should inform their patients that a benefit from this combination has not been strongly demonstrated by well-designed studies. Patients should also be informed that any added benefit of combination therapy, even in severe cases, is likely to be marginal at best.
Read the guideline and access PDF summaries for clinicians and patients, a slide presentation, and a clinical example. For more information, contact Julie Cox at jcox@aan.com or (612) 928-6069.
Tags:
American Academy of Neurology · evidence-based medicine · guideline · neurology · randomized controlled trial · systematic review
Usefulness of Test Depends on Clinician’s Judgment of Probability of sCJD Before Testing
Testing for 14-3-3 protein in spinal fluid may support the clinical evaluation and other diagnostic tests used to diagnose sporadic Creutzfeldt-Jakob disease (sCJD) in patients who present with rapidly progressive dementia and are suspected of having sCJD. This is the primary finding in “Diagnostic Accuracy of CSF 14-3-3 Protein in Sporadic Creutzfeldt-Jakob Disease,” a new guideline from the AAN that was published electronically ahead of print on September 19, 2012, and appears in the October 2, 2012, print edition of Neurology®.
While the test may be helpful in cases where doctors suspect sCJD, it is not accurate enough either to diagnose the disease or to rule it out with absolute certainty.
The usefulness of the 14-3-3 test will largely depend on a clinician’s judgment of the pretest probability of sCJD for a given patient. Such judgments will reasonably consider the rarity of sCJD (incidence of 1 per million per year), the patient’s clinical presentation, and the results of ancillary tests already obtained, such as brain MRI. However, how the test should be used in conjunction with EEG and MRI findings suggestive of sCJD needs further investigation. The authors contend that only physicians experienced in diagnosing dementia should determine whether the 14-3-3 protein test is needed and how its results should be interpreted.
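To make the pretest-probability point concrete, here is a minimal sketch of the Bayesian arithmetic behind it, written in Python. The sensitivity, specificity, and pretest probabilities used below are illustrative assumptions chosen for the example, not figures taken from the guideline.

```python
def posttest_probability(pretest_prob, sensitivity, specificity, positive_result=True):
    """Convert a pretest probability to a post-test probability using likelihood ratios."""
    if positive_result:
        lr = sensitivity / (1 - specificity)      # LR+ for a positive test
    else:
        lr = (1 - sensitivity) / specificity      # LR- for a negative test
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Illustrative (assumed) test characteristics -- not values from the guideline.
sens, spec = 0.90, 0.80

# High pretest probability (strong clinical suspicion of sCJD):
print(round(posttest_probability(0.40, sens, spec), 2))   # ~0.75 after a positive test

# Low pretest probability (weak clinical suspicion; sCJD is rare):
print(round(posttest_probability(0.01, sens, spec), 2))   # ~0.04 after a positive test
```

With these assumed numbers, the same positive result raises a strong clinical suspicion to a probability high enough to support the diagnosis, yet barely moves a low pretest probability, which is why the guideline authors emphasize the clinician’s judgment before ordering the test.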
Read the guideline and access PDF summaries for clinicians and patients, a slide presentation, and a clinical example. For more information, contact Julie Cox at jcox@aan.com or (612) 928-6069.
Tags:
American Academy of Neurology · evidence-based medicine · guideline · neurology · randomized controlled trial · systematic review
Realizing that there is more than evidence supporting clinical decision making helps explain why randomized controlled trials (RCTs) are not necessary for every decision. Many decisions can be made based solely on the base pillar—principles. These are the axioms of medicine that have already been established. Principles are the facts we know—the anatomy of the medulla, for example. They are often quite useful and sometimes support a clinical decision in and of themselves. Thus, when we see a patient with ipsilateral miosis, ptosis, decreased facial thermal sensation, and contralateral decreased body thermal sensation, we know with a high degree of certainty that the patient has a lesion in the lateral medulla—even if an MRI doesn’t show it. We know things about anatomy, physiology, pathology, and much more—principles.
Can we make a decision regarding the efficacy of parachutes on the sole basis of principles? Of course we can. We know jumping from a great height without a parachute is almost always lethal. It is an established fact that jumping out of an airplane with a parachute greatly increases your chance of survival. An RCT is unnecessary and foolish in this situation because we already know the answer.
It is, of course, obvious in the parachute and lateral medullary syndrome scenarios that principles are sufficient to support our decisions. However, it is not always that clear. Would you, for example, think it necessary to perform an RCT in patients with acute appendicitis comparing the efficacy of surgical appendectomy with that of medical therapy (i.e., antibiotics)? Many would say no—that we already know on the basis of principles that surgical appendectomy is the preferred treatment.
When trying to determine whether a decision can be supported completely by established principles, it is helpful to ask two questions. First, is there a plausible alternative inference from first principles? In the appendectomy scenario, the inference that removing the infected appendix will make the patient better makes sense. Does it not also make sense that treating the infected appendix with antibiotics will work? It is difficult to develop a compelling principle-based argument that appendectomy would automatically be better than antibiotics. By contrast, I can come up with no alternative principle-based arguments favoring the choice not to use a parachute. I doubt anyone could.
The second question to ask in determining whether principles are enough is: do reasonable people disagree? When asked, most clinicians I know would not favor an RCT to answer the appendicitis question—but some think it a reasonable idea. There is genuine disagreement. An alternative way of determining whether reasonable people disagree is to ask whether an institutional review board (IRB) would approve an RCT to answer the question. If, in fact, you find that there is a trial under way, you have your answer—an IRB somewhere approved that study. Reasonable people disagree about the best course of action in many situations. In these circumstances, it is a safe bet that the answer cannot be inferred from principles alone as it can with parachutes. I can find no reasonable person who would volunteer for an RCT of parachutes.
So what about antibiotics for appendicitis? An RCT comparing appendectomy to antibiotics actually was conducted.1 Patients randomized to surgery and patients randomized to antibiotics both got better. Patients undergoing surgery had surgical complications that patients treated with antibiotics did not have. Patients receiving medical therapy were more likely to have a recurrent bout of appendicitis than patients undergoing surgical appendectomy. Which is better? It is not at all obvious from the evidence.
What about judgment—the other non-evidence-based pillar of clinical decision making? This is what we rely on when principles and evidence are not enough. Judgment involves making educated guesses. It is a large part of the art of medicine. When we do not know the answer, we have to make our best guess as to what is best for our patients. Guessing is necessary in many situations in medicine—some would say in most situations.
Knowing when we are guessing is a critical skill. It is not always easy to distinguish our seemingly clever, principle-based inferences from our informed opinions. I know some colleagues who genuinely believe that their opinions should be considered established principles of medicine. I suspect many readers of this entry know such colleagues too. The presence of alternative clever, principle-based inferences and genuine disagreements with other colleagues should tell them that they are guessing.
What of evidence? This is the middle, or central, pillar. It separates principles from judgment. It will be the major topic of many future entries in this blog. We will not consider it further for now except to say that, when discussing nuances of evidence such as bias and random error, it is easy to forget that clinical decision making is not just about evidence. We will strive not to become “radical protagonists” of “evidence-only” medicine. We will remember our parachutes.
1 Eriksson S, Granström L. Randomized controlled trial of appendicectomy versus antibiotic therapy for acute appendicitis. British Journal of Surgery 1995;82:166–169.
Tags:
evidence-based medicine · guideline · neurology · randomized controlled trial · systematic review

You may be familiar with an article published in 2003 in the British Medical Journal in which the authors conducted a systematic review searching for randomized controlled trials (RCTs) that tested the efficacy of parachute use in preventing death and major trauma related to gravitational challenge.1 Unsurprisingly, they failed to find a single RCT demonstrating the effectiveness of parachutes. On the basis of this absence of evidence, they made the following tongue-in-cheek recommendation:
We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
Critics of evidence-based medicine (EBM) frequently reference this article. They believe it highlights the absurdity of always requiring an RCT in order to conclude that any intervention works—they are right. Sometimes that requirement is absurd.
When is an RCT unnecessary? The answer to this question will dispel a pervasive myth regarding EBM. A myth espoused by some of its most “radical protagonists.” A myth reinforced by the parachute article. That myth is that “evidence-based medicine” is actually “evidence-only medicine.”
If EBM is not “evidence-only,” what else does it include? Two other things: principles and judgment. These two elements, taken together with the evidence, form three pillars that support clinical decision making, each stacked one atop the other, their relative importance dependent on the clinical circumstances.
The three pillars are more than just a useful metaphor. The concepts are fundamental. They map directly to identical concepts used in all disciplines employing the scientific method. Principles map to the established theoretical framework of what we know. Evidence maps to critical empirical inquiry. Judgment maps to hypothesis generation. Indeed, it is no exaggeration to state that EBM is the explicit application of the scientific method to clinical decision making.
1 Smith GCS, Pell JP. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ 2003;327:1459–1461.
Tags:
American Academy of Neurology · evidence-based medicine · neurology · randomized controlled trial