Reviewers: Alan M Batterham, Department of Sport and Exercise Science, University of Bath, Bath BA2 7AY, UK; Caroline Burge, School of Medicine, University of Queensland, Brisbane 4006, Australia; Keith Davids, Department of Exercise and Sport Science, Manchester Metropolitan University, Alsager, Cheshire ST7 2HL, UK; John A Hawley, Division of Exercise Sciences, RMIT University, Victoria 3083, Australia.

· Clinical vs Statistical Significance. Likelihoods of clinical or practical benefit and harm are superior to the p value that defines statistical significance.
· Qualitative vs Quantitative Research Designs. What's happened here vs what's happening generally.
· A Ban on Caffeine? Coffee, tea, Coke, and chocolate are off the athlete's menu in this unrealistic proposal.
· Editorial: Anti-Spamming Strategies. Guard your email address to reduce junk messages.
Clinical vs Statistical Significance

You have
spent many months and many thousands of dollars studying an effect. You have analyzed the data in a new manner
that takes into account clinical or practical significance. Here is the outcome of the analysis for the
average person in the population you studied:
an 80% chance the effect is clinically beneficial, a 15% chance that
it has only a clinically trivial effect, and a 5% chance that it is
clinically harmful. Should you publish the study? I think so.
The effect has a good chance of helping people. Indeed, it has 16 times more chance of
helping than of harming. If you think
that the 80% chance of helping is too low or that the 5% risk of harming is
too high (it will depend on the nature of the help and harm), you could get
more data before you publish. But if
there's no more money or time for the project, publish what you've got. Other researchers can do more work and
meta-analyze all the data to increase the disparity between the likelihoods
of help and harm. Will the
editor of a journal accept your data for publication? To make that decision, the editor will send
your article to one or more so-called peer reviewers, who are usually other
researchers active in your area. Most
reviewers base their decisions on statistical significance, which they know
has something to do with the effect being real. Statistical significance is defined by a
probability or p value. The smaller
the p value, the less likely the effect is just a fluke. When the p value is less than 0.05, you can
call the result statistically significant. Your article is much more likely
to be accepted when p=0.04 than when p=0.06. So what
is the p value for the above data?
Incredibly, it's 0.20. Check
for yourself on the spreadsheet
for confidence limits, which I have recently updated to include likelihoods
of clinically important and trivial effects for normally distributed outcome
statistics. To work out these
likelihoods, you need to include the smallest clinically important positive
and negative value of the effect you have been studying. In this example I chose ±1.0 units. I made the observed value of the effect 3.0
units–obviously clinically important as an observed value, but at
issue is the likelihood that the true value (the average value in the
population) is clinically important.
You will also have to include a number for degrees of freedom; I chose
38 (as in, for example, a randomized controlled trial with 20+20 subjects),
but the estimates of likelihood are insensitive to all but really small
degrees of freedom. Finally, of
course, you will need the p value, here 0.20.
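If you don't have the spreadsheet handy, the calculation is easy to reproduce. Here is a minimal sketch in Python (an illustration only, not the spreadsheet itself): it back-calculates the standard error of the effect from the observed value and its p value, then uses the t distribution to get the chances that the true value is clinically beneficial, trivial, or harmful.

```python
from scipy.stats import t as tdist

def clinical_chances(observed, p_value, df, smallest_important):
    """Chances that the true value of the effect is beneficial, trivial, or harmful.

    The standard error is back-calculated from the observed value and its
    two-tailed p value; the t distribution then gives the chance that the
    true value lies above, between, or below the smallest important values.
    """
    t_stat = tdist.ppf(1 - p_value / 2, df)   # |t| implied by the two-tailed p value
    se = observed / t_stat                    # standard error of the effect
    benefit = tdist.sf((smallest_important - observed) / se, df)
    harm = tdist.cdf((-smallest_important - observed) / se, df)
    trivial = 1.0 - benefit - harm
    return benefit, trivial, harm

# Worked example from the text: observed effect 3.0 units, smallest
# important value ±1.0 units, 38 degrees of freedom, p = 0.20.
benefit, trivial, harm = clinical_chances(3.0, 0.20, 38, 1.0)
print(f"benefit {benefit:.1%}, trivial {trivial:.1%}, harm {harm:.1%}")
# prints chances close to the 80%, 15%, and 5% quoted above
```

Changing the inputs lets you check how sensitive the chances are to the degrees of freedom or to the smallest clinically important value.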
You can get even more excitingly non-significant findings with smaller
p values. For example, changing p to
0.10 makes the likelihoods 87%, 12% and 2% for help, triviality, and harm
respectively. Yet even these data would be rejected by most reviewers and
editors, because p>0.05. Something
is clearly wrong somewhere. It's not
the spreadsheet; it's the requirement for p<0.05. Statistical significance does not do
justice to some clinically useful effects.
We should be reporting probabilities of clinical significance, not the
probability that defines statistical significance. Reviewers and editors would then make
better decisions. We still need to
report precision of estimation using likely (confidence) limits for the true
value of the effect, but 95% limits give an impression of too much uncertainty
for some clinically useful effects.
Even 90% might be too conservative in this respect, but there is
something appealing about limits that define the true value correctly 9 times
out of 10.
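The same back-calculated standard error gives the likely limits. A short sketch (again only an illustration in Python, using the same worked example) shows how much wider 95% limits are than 90% limits:

```python
from scipy.stats import t as tdist

def likely_limits(observed, p_value, df, coverage=0.90):
    """Likely (confidence) limits for the true value of the effect."""
    se = observed / tdist.ppf(1 - p_value / 2, df)    # standard error, as before
    half_width = tdist.ppf(0.5 + coverage / 2, df) * se
    return observed - half_width, observed + half_width

# Same worked example: observed 3.0 units, p = 0.20, 38 degrees of freedom.
print(likely_limits(3.0, 0.20, 38, coverage=0.90))    # roughly -0.9 to 6.9 units
print(likely_limits(3.0, 0.20, 38, coverage=0.95))    # roughly -1.7 to 7.7 units
```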
Qualitative vs Quantitative Research Designs

This year
I gave a series of talks in several places on exercise and sport
research. I used simple PowerPoint
slides to act as a stimulus for informal discussion. Most of the material is
already at this site in one form or another, but I sometimes added new stuff
that might be useful for people giving or taking courses in our discipline.
To download the slides for the talk I gave on research design, click on this link. Other talks
will follow in future issues of Sportscience.
Most of
these slides represent a summary of an
article on quantitative research published here last year, but I have now
included an overview of qualitative research.
My neo-positivistic perspective will outrage radical post-modernists,
but it's probably a fair representation of the world that the moderates
inhabit. I used to
be critical of my story-teller colleagues, until I realized that qualitative
research in its purest form is the science of single case studies, rather
like the quest for truth in a court case.
You should employ a qualitative researcher anytime you want an answer
to a question of the form what's happened here. For example: why is our team underperforming,
why can't we swim as well as the Australians, how should we reorganize our
sports institute, and what can we learn from attitudes to sport in the
1930s? Qualitative researchers also
engage in action research: an intervention to change the world at the
single-case level. A suitably
qualified qualitative researcher might be able to make your team perform
better. On the
other hand, a quantitative researcher has the skills to find out what's
happening generally. For example,
what’s the effect of strength training on rowing economy, what predicts
individual responses to the effect of exercise on blood lipids, what are the
main causes of acute and chronic injuries in triathletes, and why do kids
choose to play particular sports? Quantitative researchers indulge in
observational (descriptive) studies to quantify associations between
variables, but they sort out cause and effect with experimental studies (interventions). Qualitative
researchers usually gather data by observing and interviewing, whereas quantitative
researchers usually test and measure.
But I don't think these methodologies should define the two paradigms.
What matters is the scope of your inferences: a conclusion about a single
case is qualitative research; a
generalization from two or more cases is quantitative.

A Ban on Caffeine?

For most
endurance athletes, a couple of 100-mg caffeine pills taken an hour or so
before a race will increase power output by a few percent. The International Olympic Committee
therefore lists caffeine as a banned substance, but the caffeine in such
everyday foods as coffee, tea, chocolate, and Coca-Cola has made enforcement
of the ban impractical. The IOC has therefore somewhat ambiguously made
caffeine also a restricted substance by setting an upper limit on the amount
athletes can have in a urine sample. A 70-kg athlete would probably exceed
the limit by drinking more than 5 cups of strong coffee or 5 liters of
Coke. Now
there's been a call to enforce the absolute ban (Graham, 2001). The reason?
Caffeine use is unethical, because caffeine is not a "traditional
nutrient", and because some athletes take caffeine "for the express
purpose of gaining an advantage". The sentiment is well-intentioned, but
the reasoning is illogical.
Traditional foods contain caffeine, so caffeine is a traditional
nutrient. Athletes train hard, eat
well, and buy expensive equipment to gain an advantage, but we aren't about
to ban those practices. Sure, there's
a sense in which caffeine is a drug, and there's a sense in which use of any
drug is unethical, even when there is no known health risk. But when the drug is part of normal food,
an absolute ban would be more than a great inconvenience: in my view it is
unethical to make athletes change customary dietary behaviors for the sake of
sport. It would
be appropriate to ban deliberate use of pure caffeine, but it’s unlikely
anyone can develop a urine or blood test that would distinguish between the
synthetic caffeine in capsules and the natural caffeine in the normal diet.
The caffeine in drinks containing extracts of guarana berries would also be a
problem. These drinks probably work
better than coffee, which contains something that partly counteracts the
ergogenic effect of caffeine. Guarana
drinks are nevertheless natural, if not traditional, fare that should not be
banned.

Graham TE (2001). Caffeine and exercise. Sports Medicine 31, 785-807

Editorial: Anti-Spamming Strategies

Spam is
unsolicited junk email inviting you to part with your money in various
annoying and often offensive ways. Spammers now get email addresses off Web
pages using automated search engines. To offer some interim protection to
authors of articles at this site, I have now replaced the "@" sign
in all email addresses with something that should put the spammers' search
engines off the scent. When you click
on an email link, you will have to change the address manually to make it
work. At the moment I have only edited
the html pages in this manner; doc and pdf files are unchanged. I have also uploaded a large number of
false email addresses to a hidden html page, to give the search engines
something to find.
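For webmasters who want to try something similar, here is a sketch of one way to do it in Python. It is an illustration only: the "=AT=" stand-in is an arbitrary choice for this sketch, not necessarily what appears in the addresses on this site, and "site" is just a placeholder folder name.

```python
import pathlib
import re

# "=AT=" is an arbitrary stand-in chosen for this sketch; readers still have
# to change it back to "@" by hand before an address will work.
AT_STAND_IN = "=AT="

def obfuscate_addresses(folder):
    """Replace the @ in email addresses in every .html file under folder."""
    pattern = re.compile(r"([\w.\-]+)@([\w.\-]+\.\w+)")
    for page in pathlib.Path(folder).rglob("*.html"):
        text = page.read_text(encoding="utf-8", errors="ignore")
        new_text = pattern.sub(rf"\1{AT_STAND_IN}\2", text)
        if new_text != text:
            page.write_text(new_text, encoding="utf-8")

obfuscate_addresses("site")   # placeholder for the web root folder
```

Anyone who harvests or copies a visible address gets a dud until the stand-in is changed back to "@", which is the manual step mentioned above.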
I have visited several anti-spamming sites to see what others are doing. I could find no convincing proactive strategy,
but all offer advice for avoiding spam.
Here's an edited version of spamrecycle.com's contribution…
• Never respond to spam.
• Never buy anything advertised in spam.
• Don’t put your address on any website.
• Use a second free email address in newsgroups, and change it frequently.
• Don’t give your email address without knowing how it will be used.
• Use a spam filter or other anti-spam email software.
The last
point sounds good, but it may be impractical to keep updating filters. All
sites I visited were short on specifics of how to do it. And you still get
the spam, even if you don't see it. You
should also make sure that the address list of any mailing list you are on is
not publicly accessible. People on the
Sportscience mailing list
and people mailing to the list are safe in this respect. Here are
a few more anti-spam sites, courtesy of Caroline Burge:
http://www.arachnoid.com/lutusp/antispam.html
http://www.sendmail.org/antispam.html
http://www.elsop.com/wrc/nospam.htm
http://tucows.myriad.net/spam95.html