Kevin Hill @cyborgtribe writes...
Posted by a guest, Jul 14th, 2016
- Hey, I don't know if you take comments, but as an ex-fMRI researcher I clicked on the '40k fMRI experiments may be invalid' link as soon as I saw it.
- Take-home picture: like a lot of mainstream press about fMRI, they get almost everything wrong, though the underlying study is still probably useful to those in the field.
- If you really want to know the ugly details, read on, but I'd forgive you for just ending it here and taking my comments with a grain of salt =)
- It has long been known that fMRI has a statistical issue. We want to know where in the brain activation occurs, and to get high-resolution images we need to measure activity at many locations. To determine whether each location is 'active', and by how much, we usually run a multiple linear regression on the data from each location. So, with lots of locations and lots of statistical tests, we are bound to get some false positives. The question is how many.
- There are lots of different ways of correcting for these multiple comparisons. Almost all modern papers use one method or another to deal with this issue; some methods are more conservative and some are more permissive. The research paper discussed in the article you cite (direct link: http://www.pnas.org/content/113/28/7900.full) looks at just one of them, the 'cluster defining threshold' (CDT). None of my papers (http://www.ncbi.nlm.nih.gov/pubmed/?term=Hill+KT%2C+Miller+LM) report results using CDT, and at least in my experience the voxel-wise methods (FWE or FDR correction) are a bit better accepted.
- On top of that, the paper referenced in the article actually finds that the default CDT parameters in SPM are pretty good, and do indeed limit false detection to ~5%, though it indicates that one might want an even slightly lower CDT parameter.
- So, in summary, this will be a great paper for experts in the field to read. It will help them both evaluate current controversies in the field and improve their own methods. But if you aren't a currently practicing neuroscientist, your reaction should probably be something along the lines of "looks like science is doing its slow, imperfect job of self-improvement."
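- The multiple-comparisons point above can be seen in a toy simulation. This is a minimal sketch, not fMRI software: it treats each "voxel" as a single pure-noise z-statistic (no spatial structure, no regression), and the voxel count and thresholds are illustrative. It just shows how an uncorrected per-test threshold of 0.05 produces hundreds of false positives across many tests, while a family-wise (Bonferroni) correction and a Benjamini-Hochberg FDR step nearly eliminate them under the null.

```python
import math
import random

random.seed(0)

def p_value(z):
    # Two-sided p-value for a z statistic under a standard normal null.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def benjamini_hochberg(pvals, q):
    # Benjamini-Hochberg step-up procedure: returns indices of rejected tests.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= (rank / m) * q:
            k = rank  # largest rank whose p-value clears the BH line
    return set(order[:k])

n_voxels = 10_000  # illustrative; real whole-brain analyses have ~10^5 voxels
alpha = 0.05

# Null data: every "voxel" is pure noise, so every detection is a false positive.
z_stats = [random.gauss(0.0, 1.0) for _ in range(n_voxels)]
p_vals = [p_value(z) for z in z_stats]

uncorrected = sum(p < alpha for p in p_vals)            # expect ~alpha * n_voxels
bonferroni = sum(p < alpha / n_voxels for p in p_vals)  # FWE-style, expect ~0
bh_rejected = benjamini_hochberg(p_vals, alpha)         # FDR-style, expect ~0 under null

print(f"uncorrected detections: {uncorrected}")
print(f"Bonferroni (FWE-style) detections: {bonferroni}")
print(f"Benjamini-Hochberg (FDR) detections: {len(bh_rejected)}")
```

Bonferroni controls the chance of *any* false positive and is the bluntest voxel-wise correction; FDR instead bounds the expected *fraction* of detections that are false, which is why it is more permissive when real signal is present. Cluster-based methods like the CDT approach studied in the paper are a third family, which leans on the spatial smoothness of fMRI data that this toy simulation deliberately omits.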