
Helmet propaganda debunked



 
 
  #211  
Old May 28th 05, 03:48 PM
Just zis Guy, you know?

On Sat, 28 May 2005 01:14:39 GMT, (Bill Z.)
wrote in message :

You said "This "confounding factor" red herring you guys are now
touting is just that." They said: it exists.


"They" did not say it existed in what you posted.


Actually they did. Especially the quote from Spaite, which regards it
as a most significant finding.

All they really
said is that you have to compare similar populations or similar
accidents (which is kind of basic.)


How does that square with tables 3 and 4 of the 1989 Seattle study, to
name but one?

It is not like there is some
subtle effect where wearing a helmet interacts with your brain in
some way that changes the types of crashes you get into (that could
be a "confounding factor" if it actually happened, but it doesn't.)


So you say. The helmet researchers say otherwise.

usual "Guy" verbage snipped - he's just ranting.
Oh, and I'll dump your other post as well - you are just babbling
again and making stuff up to try to keep the "discussion" going.


Translation "Tra la la la la, I'm not listening".

Nice use of irony, though, accusing me of trying to keep the
discussion going by posting citations and discussions of evidence when
you spent literally weeks defending your rubbishing of a paper you
acknowledged you hadn't even read properly for reasons of personal
prejudice.

But I bet nobody was fooled.

Guy
--
May contain traces of irony. Contents liable to settle after posting.
http://www.chapmancentral.co.uk

85% of helmet statistics are made up, 69% of them at CHS, Puget Sound
  #213  
Old May 29th 05, 02:41 PM
Just zis Guy, you know?

On Sat, 28 May 2005 23:54:30 GMT, (Bill Z.)
wrote in message :

You said "This "confounding factor" red herring you guys are now
touting is just that." They said: it exists.


"They" did not say it existed in what you posted.


Actually they did. Especially the quote from Spaite, which regards it
as a most significant finding.


They did not say it existed in what you posted. Do you understand
"what you posted" means? You know, the quote you actually provided?
Are you now saying you quote one thing and then proceed to talk about
something else?


Yes, Bill, I understand what "what I posted" means. Moreover, I
actually understand what I posted, which evidently you do not.

"We conclude that helmet nonuse is strongly associated with severe
injuries in this study population. This is true even when the
patients without major head injuries are analyzed as a group" - A
prospective analysis of injury severity among helmeted and non
helmeted bicyclists involved in collisions with motor vehicles, Spaite
DW, Murphy M, Criss EA, Valenzuela TD, Meislin HW, 1991. Journal of
Trauma: 1991 Nov;31(11):1510-6

So: there is a difference between the accident involvement profiles of
the two groups - in other words the confounding "red herring" exists.
Have you read the paper in question? It is quite clear in context.

"We cannot completely rule out the possibility that more cautious
cyclists may have chosen to wear helmets and also had less severe
accidents" - A case control study of the effectiveness of bicycle
safety helmets, Thompson RS, Rivara FP, Thompson DC. 1989. New England
Journal of Medicine: 1989 v320 n21 p1361-7

So: a group with a significant pro-helmet bias nonetheless acknowledge
that the confounding "red herring" exists. Have you read the paper in
question? It is very evident in their discussion.

"[...] studies exclude bicyclists who had a head impact but no injury
and include bicyclists who did not have a head impact but who suffered
a non-head injury. Hence, these studies can either over-estimate or
under-estimate the protective effect of helmets. [those which adjust
for factors such as age, sex, riding conditions, speed, road surface
and collision with motor vehicles may partly compensate for the
inadequate design, [...] In this way they are more desirable than
studies of Design 3 which make no effort to adjust for these
potentially confounding factors." - Bicycle helmets - a review of
their effectiveness: a critical review of the literature, Towner E,
Dowswell T, Burkes M, Dickinson H, Towner J, Hayes M. 2002. Department
for Transport: Road Safety Research Report 30

So: a review of various studies found that it was essential for the
confounding "red herring" to be adequately controlled for, and noted
that some studies fail even to try. Have you read the paper in
question? It is even clearer when read in context.

It seems to me that you are simply in denial. The existence of the
confounding factor is documented in every recent paper on the subject
that I can think of, and many of the older ones.

All they really said is that you have to compare similar
populations or similar accidents (which is kind of basic.)


How does that square with tables 3 and 4 of the 1989 Seattle study, to
name but one?


Now you really are babbling (unless you really are incredibly dumb, I
suspect you are either trolling or simply replying without reading
what I actually say.)


No, Bill, I read what you said. You said (rightly) that it is
important to compare similar populations. How does that square, in
your opinion, with the data in tables 3 and 4 of the 1989 Seattle
study?

It is not like there is some
subtle effect where wearing a helmet interacts with your brain in
some way that changes the types of crashes you get into (that could
be a "confounding factor" if it actually happened, but it doesn't.)


So you say. The helmet researchers say otherwise.


What you say they say and what they really say are two different things.
I've seen you misquote or misrepresent things before.


So you say. Perhaps you could cite some studies which you say deny
the existence of confounding? I'm having trouble thinking of any
which explicitly deny it, and not many which ignore its existence, but
I've only got a few hundred of them in my library.

Guy
--
May contain traces of irony. Contents liable to settle after posting.
http://www.chapmancentral.co.uk

85% of helmet statistics are made up, 69% of them at CHS, Puget Sound
  #214  
Old May 30th 05, 08:14 PM
Bill Z.

"Just zis Guy, you know?" writes:

On Sat, 28 May 2005 23:54:30 GMT, (Bill Z.)
wrote in message :

You said "This "confounding factor" red herring you guys are now
touting is just that." They said: it exists.


"They" did not say it existed in what you posted.


Actually they did. Especially the quote from Spaite, which regards it
as a most significant finding.


They did not say it existed in what you posted. Do you understand
"what you posted" means? You know, the quote you actually provided?
Are you now saying you quote one thing and then proceed to talk about
something else?


Yes, Bill, I understand what "what I posted" means. Moreover, I
actually understand what I posted, which evidently you do not.


You may understand the sentence "what I posted" means, but what you said
in "message 1" had nothing to do with your claim in "message 2" and
further attempts to pretend otherwise are silly.

"We conclude that helmet nonuse is strongly associated with severe
injuries in this study population. This is true even when the
patients without major head injuries are analyzed as a group" - A
prospective analysis of injury severity among helmeted and non
helmeted bicyclists involved in collisions with motor vehicles, Spaite
DW, Murphy M, Criss EA, Valenzuela TD, Meislin HW, 1991. Journal of
Trauma: 1991 Nov;31(11):1510-6

So: there is a difference between the accident involvement profiles of
the two groups - in other words the confounding "red herring" exists.
Have you read the paper in question? It is quite clear in context.


Yawn. Keeping all other variables constant is "how to do research 101".
It is quite different from the "confounding factor" red herring you
brought up by citing medical research on heart disease, where there
are complex interactions between various proteins, enzymes, etc.

BTW, previously you anti-helmet people argued the exact opposite -
that wearing helmets made people take higher risks (look up
threads on "risk compensation".)

Also, the date on the study you quote is 1991 - with the accidents
being studied occurring earlier. Gee. You pick a time period where
helmet was predominant mostly among "serious" cyclists and not more
casual cyclists and you find that the casual cyclists make more
serious mistakes and have worse crashes as a result? And you think
this is surprising?

Give us a break, Guy ... you are ranting about trivial nonsense
and making a fool of yourself.


--
My real name backwards: nemuaZ lliB
  #215  
Old May 30th 05, 08:45 PM
Just zis Guy, you know?

On Mon, 30 May 2005 19:14:16 GMT, (Bill Z.)
wrote in message :

Yes, Bill, I understand what "what I posted" means. Moreover, I
actually understand what I posted, which evidently you do not.


You may understand the sentence "what I posted" means, but what you said
in "message 1" had nothing to do with your claim in "message 2" and
further attempts to pretend otherwise are silly.


So you say. And yet you advance no credible reason to justify your
refusal to believe what is written in the reports themselves, to wit:
that the groups of helmeted and unhelmeted riders differed in more
ways than just their helmet wearing rates.

"We conclude that helmet nonuse is strongly associated with severe
injuries in this study population. This is true even when the
patients without major head injuries are analyzed as a group" - A
prospective analysis of injury severity among helmeted and non
helmeted bicyclists involved in collisions with motor vehicles, Spaite
DW, Murphy M, Criss EA, Valenzuela TD, Meislin HW, 1991. Journal of
Trauma: 1991 Nov;31(11):1510-6


So: there is a difference between the accident involvement profiles of
the two groups - in other words the confounding "red herring" exists.
Have you read the paper in question? It is quite clear in context.


Yawn. Keeping all other variables constant is "how to do research 101".
It is quite different from the "confounding factor" red herring you
brought up by citing medical research on heart disease, where there
are complex interactions between various proteins, enzymes, etc.


Ah, so you /don't/ understand. Well why didn't you say? Oh but of
course you don't: you flushed the posts where I explained it. So, to
repeat the explanation I posted in terms you should be able to
understand:

Two interventions exist, I1 and I2, which are supposed to affect two
health effects H1 and H2.

Case-control studies S1 predict that intervention I1 will lead to
reductions in adverse health effect H1 in the ratio R1.

Case-control studies S2 predict that intervention I2 will lead to
reductions in adverse health effect H2 in the ratio R2.

Which is which?

Time-series data T1 show that the rates of H1 do not track changes in
intervention I1.

Time-series data T2 show that the rates of H2 do not track changes in
intervention I2.

Which is which?

In analysing the difference in benefit between T1 and S1, some
researchers noted that voluntary uptake of I1 was strongly
socio-economically stratified, and that those selecting for I1 were
inherently less likely to exhibit H1 in the first place.

In analysing the difference in benefit between T2 and S2, some
researchers noted that voluntary uptake of I2 was strongly
socio-economically stratified, and that those selecting for I2 were
inherently less likely to exhibit H2 in the first place.

Which is which? One of the interventions is combined HRT, the other
cycle helmets. One of the health effects is CHD, the other head
injuries. The two ratios R both vary considerably between studies.
Using your skill and judgment, define which set of statements refers
to HRT/CHD and which to helmets/HI. You have a 50% chance of being
right.

Oh, wait, one of them might be multivitamins reducing cancer. Or was
it cannabis increasing schizophrenia? No, wait, maybe it was early
use of antibiotics causing asthma in later life. Or was it MMR and
autism? So many studies, all so similar.

Finally, I1 (or is it I2?) is amenable to randomised controlled
trials. RCT show that the ratio R1 (or was it R2?) predicted by case
control studies is wrong as to both magnitude and direction. Lessons
are drawn as to the advisability of believing case-control studies
when they are contradicted by the other evidence.
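
The self-selection mechanism is easy to demonstrate with a toy
simulation. The sketch below is purely illustrative - every number in
it is invented rather than taken from any of the studies under
discussion - but it shows how a latent "risk-averse" trait, which
drives both uptake of an intervention and a lower baseline risk, makes
a useless intervention look protective in an observational comparison,
while random assignment shows no effect:

import random

random.seed(1)
N = 200_000

def apparent_risk_ratio(randomised):
    # Latent confounder: "cautious" riders are both more likely to adopt the
    # intervention and less likely to suffer the bad outcome in the first place.
    exposed = [0, 0]      # [outcomes, riders] among intervention users
    unexposed = [0, 0]    # [outcomes, riders] among non-users
    for _ in range(N):
        cautious = random.random() < 0.5
        if randomised:
            uses = random.random() < 0.5                          # assigned at random
        else:
            uses = random.random() < (0.7 if cautious else 0.2)   # self-selected
        # The outcome depends only on the confounder, NOT on the intervention.
        outcome = random.random() < (0.02 if cautious else 0.06)
        group = exposed if uses else unexposed
        group[0] += outcome
        group[1] += 1
    return (exposed[0] / exposed[1]) / (unexposed[0] / unexposed[1])

print("observational (self-selected):", round(apparent_risk_ratio(False), 2))  # ~0.6: looks protective
print("randomised assignment:        ", round(apparent_risk_ratio(True), 2))   # ~1.0: no effect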

BTW, previously you anti-helmet people argued the exact opposite -
that wearing helmets made people take higher risks (look up
threads on "risk compensation".)


Anti-helmet? Who? Where?

And yes, the risk compensation effect is also documented. No wonder
the small-scale studies have such absurd confidence intervals and fail
to agree on the scale of the supposed benefit!

Also, the date on the study you quote is 1991 - with the accidents
being studied occurring earlier. Gee. You pick a time period where
helmet was predominant mostly among "serious" cyclists and not more
casual cyclists and you find that the casual cyclists make more
serious mistakes and have worse crashes as a result? And you think
this is surprising?


And the cited text from Towner et al. is dated 2002. But Bill - do
you not understand that you have just conceded the point? Yes, the
groups of helmeted and unhelmeted cyclists are different. That's what
I said - that is what confounding means. For example, those from
lower socio-economic groups are more likely to suffer head injury (and
road traffic injury) than those from higher groups, independent of
cycling. They are also less likely to wear helmets. Both injury and
helmet use are strongly socio-economically stratified - and this is a
confounding factor. So is the fact that risk-averse cyclists are more
likely to wear helmets voluntarily. So is the fact that putting a
helmet on a cyclist changes their risk-taking by some unmeasurable
amount. These are all confounding factors!

You say that the studies compare serious cyclists with non-serious -
precisely! And to bring us back to the subject of the thread, the
above is obvious from tables 3 and 4 in the 1989 Seattle study, but
the original authors still cited that study *unmodified* in their 2002
Cochrane review (in which 70% of the cases reviewed were their own
work - so much for independence!). If helmets prevent 85% of head
injuries, as they still claimed in 2002, they also prevent 72% of
broken legs, along with black skin, low income, riding on roads,
riding alone rather than with families, female gender and being hit by
cars. Or perhaps, as Frank and I have said all along, there are
confounding factors in play.
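
For anyone who wants to see how the arithmetic falls out, here is a
minimal sketch of the crude case-control calculation. The 2x2 counts
below are invented purely for illustration - they are not the figures
from tables 3 and 4 of the Seattle study or from any other paper:

def protective_effect(cases_helmeted, cases_unhelmeted,
                      controls_helmeted, controls_unhelmeted):
    """Crude case-control 'effectiveness': one minus the odds ratio of helmet use."""
    odds_in_cases = cases_helmeted / cases_unhelmeted
    odds_in_controls = controls_helmeted / controls_unhelmeted
    return 1 - odds_in_cases / odds_in_controls

# Hypothetical control group with much higher helmet use than either case group.
controls = dict(controls_helmeted=120, controls_unhelmeted=380)

head = protective_effect(7, 93, **controls)    # cases: cyclists with head injuries
legs = protective_effect(12, 88, **controls)   # cases: cyclists with broken legs

print(f"apparent protection against head injury: {head:.0%}")   # large
print(f"apparent protection against broken legs: {legs:.0%}")   # also large!

A helmet cannot plausibly protect legs, so whenever the second figure
comes out large it is a signal that the case and control populations
differ in more ways than helmet use - which is exactly the confounding
at issue.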

Confounding - self-selection bias to give it another common name - is
inherent in observational case-control studies. Which is why Petitti
recommends the following approach to such data:

o Do not turn a blind eye to contradictory evidence (such as the lack
of response in head injury rates when helmet wearing increased by
40-50 percentage points following legislation)
o Do not be seduced by mechanism. Even where a plausible mechanism
exists, do not assume that we know everything about that mechanism
and how it might interact with other factors.
o Suspend belief. Of researchers defending observational studies,
Petitti says this: "Belief caused them to be unstrenuous in
considering confounding as an explanation for the studies".
o Maintain scepticism.

See that last point? That's Frank and me, that is: we're sceptics.
To a true believer a sceptic looks like an atheist, but that's the
true believer's problem, not the sceptic's.

Give us a break, Guy ... you are ranting about trivial nonsense
and making a fool of yourself.


So you say. Usual challenge: which text is sufficiently immoderate to
be characterised as a rant? I anticipate the usual null response.

It's funny, though, that you accuse me of making a fool of myself by
posting quotes and citations from source data, when you are merely
arm-waving and (it turns out) admitting the point while pretending to
rebut it!

Guy
--
May contain traces of irony. Contents liable to settle after posting.
http://www.chapmancentral.co.uk

85% of helmet statistics are made up, 69% of them at CHS, Puget Sound
  #216  
Old June 1st 05, 06:39 AM
Bill Z.

"Just zis Guy, you know?" writes:

Before replying, I'll note that Guy is now trying to make a big deal
about his "counfounding factors" (when it is really just a poor choice
of samples) and specifically complaining about differences in helmeted
versus non-helmeted cyclists. So, I'm going to give Guy a chance to
prove his integrity. In a previous incarnation of this dicussion Tom
Kunnich heaped accolades on a study by Paul Scuffham calling it a
"watermark study" (see Message ID ).
This study did not track which cyclists who were in accidents wore
helmets and which did not (the data was simply not available from the
sources Scuffham used).

When such shortcomings were pointed out, the anti-helmet group went
into overdrive denying it and calling me the usual assortment of
names.

Well, Guy, now is your chance. By your own argument, you should be
willing to state that the praise heaped on this study was not
in the least bit warranted, and that Tom Kunich, Frank Krygowski,
and company were completely wrong. Will you do that? Or are you
just arguing to push an agenda? Inquiring minds want to know.

On Mon, 30 May 2005 19:14:16 GMT, (Bill Z.)
wrote in message :

Yes, Bill, I understand what "what I posted" means. Moreover, I
actually understand what I posted, which evidently you do not.


You may understand the sentence "what I posted" means, but what you said
in "message 1" had nothing to do with your claim in "message 2" and
further attempts to pretend otherwise are silly.


So you say. And yet you advance no credible reason to justify your
refusal to believe what is written in the reports themselves, to wit:
that the groups of helmeted and unhelmeted riders differed in more
ways than just their helmet wearing rates.


Sigh. As I pointed out, to study helmet effectiveness, you *have*
to pick two groups that differ only in their helmet-wearing rates.
This is "How To Do An Experiment 101"."

Yawn. Keeping all other variables constant is "how to do research 101".
It is quite different from the "confounding factor" red herring you
brought up by citing medical research on heart disease, where there
are complex interactions between various proteins, enzymes, etc.


Ah, so you /don't/ understand. Well why didn't you say? Oh but of
course you don't: you flushed the posts where I explained it. So, to
repeat the explanation I posted in terms you should be able to
understand:

Two interventions exist, I1 and I2, which are supposed to affect two
health effects H1 and H2.

snip

Irrelevant. The term "confounding factors" (I even gave you a URL
defining the term for you) applies to systematic errors.


BTW, previously you anti-helmet people argued the exact opposite -
that wearing helmets made people take higher risks (look up
threads on "risk compensation".)


Anti-helmet? Who? Where?


As I said, look up the thread on risk compensation in the 1990s.

And yes, the risk compensation effect is also documented. No wonder
the small-scale studies have such absurd confidence intervals and fail
to agree on the scale of the supposed benefit!


Well, as I said, you were just arguing the opposite.

Also, the date on the study you quote is 1991 - with the accidents
being studied occurring earlier. Gee. You pick a time period where
helmet was predominant mostly among "serious" cyclists and not more
casual cyclists and you find that the casual cyclists make more
serious mistakes and have worse crashes as a result? And you think
this is surprising?


And the cited text from Towner et. al. is dated 2002. But Bill - do
you not understand that you have just conceded the point?


No, I didn't concede the point. They simply failed to pick a representative
sample and found out after they had done a lot of work, and figured
they'd rather try to publish than perish. At least, it served as a
warning about a factor you need to control for.

Yes, the
groups of helmeted and unhelmeted cyclists are different. That's what
I said - that is what confounding means.


No, it doesn't mean that. There is no causal effect that makes
non-helmeted riders more accident prone than helmeted riders. If
(before helmets became popular) you took a sufficiently
large random sample of equally skilled cyclists, and gave a
reasonable fraction of them helmets, your "difference" would go
away.


For example, those from
lower socio-economic groups are more likely to suffer head injury (and
road traffic injury) than those from higher groups, independent of
cycling. They are also less likely to wear helmets.


So what? You just don't compare unskilled cyclists riding on quiet
residential streets wearing helmets to similarly unskilled cyclists
riding on busy urban streets in low-income areas where unlicensed
drivers and cars with ill-maintained brakes are far more common.


You say that the studies compare serious cyclists with non-serious -
precisely!


I said those studies are inherently unreliable.

Confounding - self-selection bias to give it another common name - is
inherent in observational case-control studies. Which is why Petitti
recommends the following approach to such data:


I posted a definition for you and you are ignoring it. Here it is
again - http://www.sysurvey.com/tips/statistics/confounding.htm.
Or to quote,

"In a well designed psychology experiment an investigator will
randomly assign subjects to two or more groups and except for
differences in the experimental procedure applied to each
group, the groups will be treated exactly alike. Under these
circumstances any differences between the groups that are
statistically significant are attributed to differences in the
treatment conditions. This, of course assumes that except for
the various treatment conditions the groups were, in fact,
treated exactly alike. Unfortunately, however, It is always
possible that despite an experimenters best intentions there
was some unsuspected systematic differences in the way the
groups were treated in addition to the intended treatment
conditions. Statisticians describe systematic differences of
this sort as confounding factors or confounding variables."

Note the first sentence. If you are not doing that, you don't have
a well-designed experiment in the first place. You should expect
bogus results, and those bogus results are not the result of
"confounding factors" but a poorly designed experiment.


rest of Guy's rant snipped

--
My real name backwards: nemuaZ lliB
  #217  
Old June 1st 05, 11:20 AM
Just zis Guy, you know?

On Wed, 01 Jun 2005 05:39:39 GMT, (Bill Z.)
wrote:

Before replying, I'll note that Guy is now trying to make a big deal
about his "counfounding factors" (when it is really just a poor choice
of samples) and specifically complaining about differences in helmeted
versus non-helmeted cyclists.


Who would have thought it? When discussing confounding factors, I
make a big deal about confounding factors! Amazing.

No, Bill, it's not "poor choice of sample", it's confounding. The
profiles of the helmeted and unhelmeted riders in the studies are
different: both helmet use and injury are socio-economically
stratified (there is plenty of evidence for that), so the difference
is not down to poor choice of samples, it is inherent in the study
populations.

There is poor choice of samples, too, of course. For example, the
1989 Seattle study (still the most widely-quoted study) compared a
"case" group which was more likely to be poor, male, black or
Hispanic, riding unaccompanied on city streets, with a negligible
helmet wearing rate; with a "control" group which was predominantly
white, mostly female, middle-class, riding on off-road bike trails,
with a helmet wearing rate in the 20s percent - and attributed all the
difference in injury profile to helmet use.

You can play all sorts of games with the data from this study - you
can show from the same data that the helmeted riders were many times
more likely to crash, and that the protective effect against broken
legs was about the same as for head injuries, or you can substitute
the helmet wearing rate measured by co-author Rivara in
contemporaneous street counts and see the benefit vanish. Whatever,
it's all evidence of confounding factors.

So, I'm going to give Guy a chance to
prove his integrity. In a previous incarnation of this discussion Tom
Kunich heaped accolades on a study by Paul Scuffham calling it a
"watermark study" (see Message ID ).
This study did not track which cyclists who were in accidents wore
helmets and which did not (the data was simply not available from the
sources Scuffham used).


Keep beating, Bill, there's still a vestige of the bloody smear where
that dead horse used to be!

When such shortcomings were pointed out, the anti-helmet group went
into overdrive denying it and calling me the usual assortment of
names.


What anti-helmet people? Name them.

Well, Guy, now is your chance. By our own argument, you should be
willing to state that the praise heaped on this study was not
in the least bit warranted, and that Tom Kunich, Frank Krygowski,
and company were completely wrong. Will you do that? Or are you
just arguing to push an agenda? Inquiring minds want to know.


Hmmmm. Tricky.

Scuffham in 1997: "Results revealed that the increased helmet wearing
percentages has had little association with serious head injuries to
cyclists as a percentage of all serious injuries to cyclists for all
three groups, with no apparent difference between bicycle only and all
cycle crashes."

Scuffham in 2000: "We conclude that the helmet law has been an
effective road safety intervention that has lead to a 19% (90% CI:
14%, 23%) reduction in head injury to cyclists over its first three
years."

How to account for the difference? Aha! Figure 3 in the 2000 study
shows the problem. Looking at figure 3 we see a steadily (and
uniformly) declining trend over time running from 1988 to the most
current data used in the study, 1997. There is a lot of noise due to
the very small sample sizes used, but the trend is very clear and can
be accounted for using statistical regression techniques. What
happens to the 19% figure when the regression techniques are applied?
It becomes zero within the limits of sampling error. The 19% figure
is clearly the result of poor selection of data points.

Mystery solved.
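
To see why the regression matters, here is a toy reconstruction. The
annual figures below are made up - they are not the New Zealand data -
but they show how a series that declines steadily throughout, with no
effect at all from a law introduced part-way through, still produces a
double-digit "before versus after" drop until the underlying trend is
modelled:

import numpy as np

years = np.arange(1988, 1998)
law_year = 1994
rng = np.random.default_rng(0)
# Steady decline of 3 per year plus some year-to-year noise; no law effect built in.
injuries = 100 - 3.0 * (years - 1988) + rng.normal(0, 1.5, size=years.size)

before = injuries[years < law_year].mean()
after = injuries[years >= law_year].mean()
print(f"naive before/after reduction: {1 - after / before:.0%}")

# Model the trend plus a post-law step; the step is what the law supposedly added.
X = np.column_stack([np.ones_like(years, dtype=float),
                     (years - 1988).astype(float),
                     (years >= law_year).astype(float)])
intercept, slope, step = np.linalg.lstsq(X, injuries, rcond=None)[0]
print(f"post-law step once the trend is modelled: {step:+.1f}")
# The fitted step is small compared with the ~15-point naive drop: the apparent
# "effect" is mostly the pre-existing downward trend.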

So, Bill, having read both studies (and two other studies with
Scuffham as co-author discussing the same subject), I fully understand
what is going on, and I am not going to play your silly game.

This study is, of course, independent of the discussion of confounding
factors inherent in observational case-control studies, as documented
in the International Journal of Epidemiology. But it does make a
striking comment relevant to that discussion: it notes that the
predictions from such studies are enormously higher than predicted by
the case-control studies. Guess which figure it uses for the
predicted reduction of brain injury? 88%! The 1989 Seattle study, by
then widely criticised in press for its unrepresentative "control"
group and failure to adequately account for confounding. Well, well.

you advance no credible reason to justify your
refusal to believe what is written in the reports themselves, to wit:
that the groups of helmeted and unhelmeted riders differed in more
ways than just their helmet wearing rates.


Sigh. As I pointed out, to study helmet effectiveness, you *have*
to pick two groups that differ only in their helmet-wearing rates.
This is "How To Do An Experiment 101"."


And is impossible to achieve in practice. Why do you have such
difficulty accepting the study authors' own statement that there are
differences between the helmeted and unhelmeted populations?

Two interventions exist, I1 and I2, which are supposed to affect two
health effects H1 and H2.


Irrelevant. The term "confounding factors" (I even gave you a URL
defining the term for you) applies to systematic errors.


That is *a* definition, not *the* definition. As Petitti says of
those who believed another set of case-control studies with similar
socio-economic differences between case and control groups: "belief
caused them to be unstrenuous in considering confounding as an
explanation for the studies".

You are playing word games, Bill, and badly at that.

BTW, previously you anti-helmet people argued the exact opposite -
that wearing helmets made people take higher risks (look up
threads on "risk compensation".)


Anti-helmet? Who? Where?


As I said, look up the thread on risk compensation in the 1990s.


Been there, done that. Again, name names. Who do you consider to be
anti-helmet? It is not obvious to me from that context.

And yes, the risk compensation effect is also documented. No wonder
the small-scale studies have such absurd confidence intervals and fail
to agree on the scale of the supposed benefit!


Well, as I said, you were just arguing the opposite.


No, I said that both factors come into play and both are pretty much
unmeasurable. The 2003 study by Towner et al. discusses both
phenomena.

You seem to be trying to claim that only one factor can influence
cyclists at a time.

Also, the date on the study you quote is 1991

And the cited text from Towner et al. is dated 2002. But Bill - do
you not understand that you have just conceded the point?


No, I didn't concede the point. They simply failed to pick a representative
sample and found out after they had done a lot of work, and figured
they'd rather try to publish than perish. At least, it served as a
warning about a factor you need to control for.


No, Bill, what you said was: "You pick a time period where helmet was
predominant mostly among "serious" cyclists and not more casual
cyclists and you find that the casual cyclists make more serious
mistakes and have worse crashes as a result"

That is precisely what was being documented! Can you think of a
case-control study which does not cover a period where helmet use was
more prevalent among serious cyclists and less prevalent among leisure
cyclists? It's true for all of them!

Yes, the
groups of helmeted and unhelmeted cyclists are different. That's what
I said - that is what confounding means.


No, it doesn't mean that.


You are wrong. This definition of confounding is used in the IJE
review of observational case-control studies.

There is no causal effect that makes
non-helmeted riders more accident prone than helmeted riders. If
(before helmets became popular) you took a sufficiently
large random sample of equally skilled cyclists, and gave a
reasonable fraction of them helmets, your "difference" would go
away.


There doesn't need to be a causal relationship, all it requires is
that unhelmeted riders are less risk-averse than helmeted riders.
Both head injury and helmet use are socio-economically stratified. In
the one case the relationship may be causal (helmets cost money), in
the other the two factors are likely common effects of a joint cause.
It doesn't matter: the effect exists nonetheless.

Where case-control studies have recorded the socio-economic background
of riders, they have documented this. Spaite regards the differing
injury profiles of helmeted and unhelmeted riders as one of his most
significant findings.

Any way you slice it, in a population where helmet use is voluntary,
the helmeted and unhelmeted communities are likely to be different in
more ways than just helmet use. And in a population where it is
compulsory, we find that there is no effective difference between
helmet use and nonuse in those who do not wear them voluntarily.

For example, those from
lower socio-economic groups are more likely to suffer head injury (and
road traffic injury) than those from higher groups, independent of
cycling. They are also less likely to wear helmets.


So what? You just don't compare unskilled cyclists riding on quiet
residential streets wearing helmets to similarly unskilled cyclists
riding on busy urban streets in low-income areas where unlicensed
drivers and cars with ill-maintained brakes are far more common.


You're right to say you /shouldn't/ compare these groups, but the most
widely quoted study /does/ and so do others.

You say that the studies compare serious cyclists with non-serious -
precisely!


I said those studies are inherently unreliable.


OK, so you consider observational case-control studies to be
unreliable due to underlying differences between the populations, but
you don't want to call it confounding, and you don't want to say that
it is the differences between populations which make them unreliable,
and you're not going to let anyone say that their conclusions are
wrong. That's pretty mixed up, in my view.

Confounding - self-selection bias to give it another common name - is
inherent in observational case-control studies. Which is why Petitti
recommends the following approach to such data:


I posted a definition for you and you are ignoring it. Here it is
again


Yes, you posted *a* definition. It is not the only one.

It is always
possible that despite an experimenters best intentions there
was some unsuspected systematic differences in the way the
groups were treated in addition to the intended treatment
conditions. Statisticians describe systematic differences of
this sort as confounding factors or confounding variables."


The confounding, you see, could be in the composition of the groups
just as well as in the treatment of them. Simple, isn't it?

The operative usage of "confounding" is the primary one in my
dictionary, meaning to cause confusion.

Here's another definition for you:

1. A situation in which the effects of two or more processes
are not separated; the distortion of the apparent effect of
an exposure on risk, brought about by the association with
other factors that can influence the outcome.

2. A relationship between the effects of two or more causal
factors observed in a set of data, such that it is not
logically possible to separate the contribution of any single
causal factor to the observed effects.

Describes it perfectly. Taken from
http://cancerweb.ncl.ac.uk/cgi-bin/o...ion=Search+OMD

Other definitions exist too.

Note the first sentence. If you are not doing that, you don't have
a well-designed experiment in the first place.


Correct. Oh, so very correct. And yet you persist in believing the
conclusions of those poorly-designed experiments and repudiating any
data which disputes them! A mystery.

rest of Guy's rant snipped


Usual challenge: detail the text sufficiently immoderate to be
characterised as a rant.

I wonder who you think you are fooling?

Guy
--
May contain traces of irony. Contents liable to settle after posting.
http://www.chapmancentral.co.uk

88% of helmet statistics are made up, 65% of them at CHS, Puget Sound
  #218  
Old June 1st 05, 11:32 AM
Just zis Guy, you know?

On Wed, 01 Jun 2005 11:20:37 +0100, "Just zis Guy, you know?"
wrote:

D'oh! Timelapse due to checking sources.

it notes that the
predictions from such studies are enormously higher than predicted by
the case-control studies.



should read: it notes that the predictions from such studies are
enormously lower than those found in the sample population.

But this is obvious from context, and you wouldn't be arguing the toss
without your copies of the two studies in front of you, would you?

Guy
--
May contain traces of irony. Contents liable to settle after posting.
http://www.chapmancentral.co.uk

88% of helmet statistics are made up, 65% of them at CHS, Puget Sound
  #219  
Old June 2nd 05, 04:35 AM
Bill Z.

"Just zis Guy, you know?" writes:

On Wed, 01 Jun 2005 05:39:39 GMT, (Bill Z.)
wrote:

Before replying, I'll note that Guy is now trying to make a big deal
about his "counfounding factors" (when it is really just a poor choice
of samples) and specifically complaining about differences in helmeted
versus non-helmeted cyclists.


Who would have thought it? When discussing confounding factors, I
make a big deal about confounding factors! Amazing.

No, Bill, it's not "poor choice of sample", it's confounding.


No, it is a poor choice of samples. If you want to study the
effects of helmets in such studies, you want the helmeted versus
non-helmeted cyclists to be the same in all other respects.

The profiles of the helmeted and unhelmeted riders in the studies
are different: both helmet use and injury are socio-economically
stratified (there is plenty of evidence for that), so the difference
is not down to poor choice of samples, it is inherent in the study
populations.


No it isn't inherent.
red herring snipped


So, I'm going to give Guy a chance to
prove his integrity. In a previous incarnation of this discussion Tom
Kunich heaped accolades on a study by Paul Scuffham calling it a
"watermark study" (see Message ID ).
This study did not track which cyclists who were in accidents wore
helmets and which did not (the data was simply not available from the
sources Scuffham used).


Keep beating, Bill, there's still a vestige of the bloody smear where
that dead horse used to be!


Uh huh. In other words, you refuse to apply the same standard to
all studies.


When such shortcomings were pointed out, the anti-helmet group went
into overdrive denying it and calling me the usual assortment of
names.



Well, Guy, now is your chance. By your own argument, you should be
willing to state that the praise heaped on this study was not
in the least bit warranted, and that Tom Kunich, Frank Krygowski,
and company were completely wrong. Will you do that? Or are you
just arguing to push an agenda? Inquiring minds want to know.


Hmmmm. Tricky.

Scuffham in 1997: "Results revealed that the increased helmet wearing
percentages has had little association with serious head injuries to
cyclists as a percentage of all serious injuries to cyclists for all
three groups, with no apparent difference between bicycle only and all
cycle crashes."

Scuffham in 2000: "We conclude that the helmet law has been an
effective road safety intervention that has lead to a 19% (90% CI:
14%, 23%) reduction in head injury to cyclists over its first three
years."

How to account for the difference? Aha! Figure 3 in the 2000 study
shows the problem.


Except Tom's comments were about a study performed before 1997, much
less before 2001. Now, what is your opinion of the level of praise
heaped on Scuffham's earlier work?

Looking at figure 3 we see a steadily (and uniformly) declining
trend over time running from 1988 to the most current data used in
the study, 1997. There is a lot of noise due to the very small
sample sizes used, but the trend is very clear and can be accounted
for using statistical regression techniques. What happens to the
19% figure when the regression techniques are applied? It becomes
zero within the limits of sampling error. The 19% figure is clearly
the result of poor selection of data points.



Gee. When I pointed out the small sample size previously, the others
whined loudly. Now's your chance to criticize Tom and Frank. Will
you?

Mystery solved.


Nope. You are weaseling out of the question. Will you now criticize
the unwarranted praise Tom, Frank, and others heaped on Scuffham's
earlier study, particularly the way these anti-helmet usenet posters
tried to claim this study "proved" that helmets don't work?

I repeatedly pointed out that a null result due to a small sample
says nothing about helmets and they howled and howled about that.
you advance no credible reason to justify your
refusal to believe what is written in the reports themselves, to wit:
that the groups of helmeted and unhelmeted riders differed in more
ways than just their helmet wearing rates.


Sigh. As I pointed out, to study helmet effectiveness, you *have*
to pick two groups that differ only in their helmet-wearing rates.
This is "How To Do An Experiment 101"."


And is impossible to achieve in practice. Why do you have such
difficulty accepting the study authors' own statement that there are
differences between the helmeted and unhelmeted populations?


I'm not objecting to the authors saying that. I'm objecting to your
spin on it.

lines and lines of repetitive garbage snipped.

--
My real name backwards: nemuaZ lliB
  #220  
Old June 2nd 05, 06:06 AM



Bill Z. wrote:
"Just zis Guy, you know?" writes:

Looking at figure 3 we see a steadily (and uniformly) declining
trend over time running from 1988 to the most current data used in
the study, 1997. There is a lot of noise due to the very small
sample sizes used, but the trend is very clear and can be accounted
for using statistical regression techniques. What happens to the
19% figure when the regression techniques are applied? It becomes
zero within the limits of sampling error. The 19% figure is clearly
the result of poor selection of data points.



Gee. When I pointed out the small sample size previously, the others
whined loudly. Now's your chance to criticize Tom and Frank. Will
you?


Such dishonesty!

You had claimed the entire population of New Zealand was too small to
show any benefit for helmets. That's not at all what Guy is saying
above.

Guy is pointing out that there is noise, but he is OBVIOUSLY not saying
that nothing can be determined, as you claimed. Note the words "the
trend is very clear and can be accounted for using linear regression
techniques." That means you can tell what's going on, despite the
noise.

Feel free to be rude, Bill - it's your nature. But don't be dishonest.
When you stoop to that, you're no longer fun to read! And I'm sure
you don't want to deprive others of the entertainment you generously
provide! ;-)

- Frank Krygowski

 



