Monday, May 9, 2016
How not to report science?
Not sure if you saw John Oliver last night, but his weekly rant was about problems with how scientific experiments are conducted (too few replication studies to confirm results, since replication isn't newsworthy and goes underfunded; sample sizes that are too small; over-reliance on mouse studies), how findings are reported in press releases (oversimplified or dramatized), and how they're chewed over by popular journalists (no examination of the soundness of the methodology, just running with whatever tidbit will generate eyeballs).
I can personally relate to some of what Oliver points to (my thesis advisor wanted me to delve into my data to find some stray way of reducing the error term and boosting the statistical significance of my results from 90%, good enough for my degree, to 95%, good enough for publication, ooh baby). But I'd also love to know what proportion of published results really deserve his mockery. I'm sure the number is high, but the number of journal studies and research experiments where this doesn't happen is probably also very high.
From my own, singular experience (singular meaning I have zero degrees of freedom, that being n minus one, where n equals one), we did not gussy up the findings of our study of truck drivers and harassment one bit. Not only did we fail to reject the null hypothesis (rejecting it would have meant we'd found a difference; we said we didn't find one), we also did what we could to make sure a difference wasn't being hidden by other factors.
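For readers who haven't sat through a stats course: "failing to reject the null hypothesis" just means your test statistic never cleared the bar. Here's a minimal sketch of that logic as a one-sample t-test, using only the standard library; the sample values and the hypothesized mean are entirely made up for illustration, and the critical values come from a standard t-table.

```python
import math
from statistics import mean, stdev

sample = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1]  # hypothetical measurements
mu0 = 2.0  # null hypothesis: the true mean is 2.0 (no real effect)

n = len(sample)
df = n - 1  # degrees of freedom: n minus one
t = (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))

# Two-tailed critical values for df = 7, from a t-table:
crit_90 = 1.895  # alpha = 0.10 (the "90%, good enough for my degree" bar)
crit_95 = 2.365  # alpha = 0.05 (the "95%, good enough for publication" bar)

print(f"t = {t:.3f} with {df} degrees of freedom")
print("significant at 90%?", abs(t) > crit_90)  # here: False
print("significant at 95%?", abs(t) > crit_95)  # here: False
```

With these made-up numbers the statistic comes out around 1.41, short of even the 90% bar, so the honest report is "we found no difference" rather than dredging the data until something crosses a threshold.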
Further, the trucking-industry publications reported our non-results dutifully, directly quoting our major conclusion that we didn't see anything substantial. None of them tried to contradict what we'd written, even though the report included a section on the study's limitations.
10:00 PM (DISCLOSURE: I work for Abt SRBI. My company does polling. My opinions should not be construed as representing those of my employer.)