Perhaps the most cutting criticism of brain imaging studies is the one with the dead fish.
The voxels representing the area where the salmon’s tiny brain sat showed evidence of activity. In the fMRI scan, it looked like the dead salmon was actually thinking about the pictures it had been shown.
Which is to say, with 130k datapoints per scan, some of them are going to be garbage, and it’s important to make some effort to filter them out.
Yes. Thank you.
I work with fMRI data; it’s a significant part of my job. Everyone who works with our fMRI scanner knows the Salmon Paper – we have it printed out and pinned up on one of the noticeboards. It’s the source of much appreciation and affection. (I’m pretty sure one of the guys in the lab came to a Hallowe’en party dressed as said salmon.)
It’s also the paper everyone, of late, seems to be dragging out to say “fMRI studies are rubbish! They’re useless! Never trust fMRI data!” without considering that, actually, that’s not what this paper intended to do.
fMRI studies have their limitations, yes. It’s correlational data; it has superb spatial resolution but utterly terrible temporal resolution; it provides enough evidence to say something like “we observed [process] happening in [region of the brain],” but is not usually strong enough to say “[process] happens solely and definitively in [region]”. It 100% has its issues, and you always gotta handle it with care.
But what the Salmon Paper was looking to highlight was not the inherent flaws in the methodology, which are pretty well understood, but the flaws in the statistics that people add in afterwards.
Work with any kind of statistics for any length of time, and there’s one lesson you learn very early on: correct for multiple comparisons. To make a long story almost insultingly brief, when you run lots and lots of statistical tests, you inflate the chance of getting a false positive, to the extent that you’re practically guaranteeing at least a few of them. It’s just in the nature of the game. So what you have to do is add in some extra stuff – the Bonferroni correction, for example. What this does is take into account the fact that you’re running a bunch of stats, and by being mega-strict about the threshold it reduces the occurrence of false positives. I’ve taught first-year undergrads this; it’s pretty integral if you’re working with t-tests or ANOVAs.
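If you want to see it happen, here’s a minimal sketch in Python (numpy and scipy, with made-up numbers rather than anything from the paper): run a thousand t-tests on pure noise, count how many come back “significant” at p < .05, then apply a Bonferroni correction and count again.

```python
# A minimal sketch of the multiple-comparisons problem (illustrative numbers,
# not anything from the Salmon Paper): run 1,000 one-sample t-tests on pure
# noise and count the "significant" results with and without Bonferroni.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 1_000      # number of independent tests (assumed for illustration)
alpha = 0.05         # the usual significance threshold

# Each "test" is 20 samples of Gaussian noise, so the true effect is always zero.
noise = rng.normal(size=(n_tests, 20))
_, p_values = stats.ttest_1samp(noise, popmean=0.0, axis=1)

uncorrected = np.sum(p_values < alpha)
bonferroni = np.sum(p_values < alpha / n_tests)   # Bonferroni: divide alpha by the number of tests

print(f"'Significant' results, uncorrected: {uncorrected} (expect about {alpha * n_tests:.0f})")
print(f"'Significant' results, Bonferroni:  {bonferroni}")
```

Uncorrected, you’ll typically see somewhere around fifty false positives out of a thousand tests of nothing at all; with the Bonferroni threshold you’ll almost always see none.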
fMRI data, incidentally? A lot of the time, when analysed, it’s a whole bunch of t-tests. Like, in the quadruple digits. That’s a lot of chances for a false positive to crop up, and oh boy, do they.
That’s what you’re seeing in the Salmon Paper. Those areas of “activation” are where so many tests have been run that a few voxels light up at random. The authors deliberately ran their stats without correcting for multiple comparisons, and this is what you get – one surprisingly thinky dead salmon.
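Scale the earlier sketch up to fMRI-sized numbers (again purely illustrative: the voxel count, volume counts, and the test itself are assumptions, not the paper’s actual protocol) and you get the salmon effect: test every voxel of pure noise for a “task vs rest” difference with no correction, and thousands of voxels “activate” entirely by chance.

```python
# A toy version of the salmon situation (made-up numbers, not the paper's
# actual protocol): one t-test per voxel, comparing "task" volumes against
# "rest" volumes, where every voxel is pure noise because the fish is dead.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels = 130_000        # roughly the per-scan datapoint count mentioned above
n_task, n_rest = 15, 15   # assumed numbers of task and rest volumes

task = rng.normal(size=(n_voxels, n_task))
rest = rng.normal(size=(n_voxels, n_rest))

# One independent-samples t-test per voxel.
_, p_values = stats.ttest_ind(task, rest, axis=1)

alpha = 0.05
print("Voxels 'activated', uncorrected:", np.sum(p_values < alpha))             # thousands, by chance alone
print("Voxels 'activated', Bonferroni: ", np.sum(p_values < alpha / n_voxels))  # almost certainly zero
```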
Correcting for multiple comparisons is exactly the filtering mechanism used to try to separate the garbage from the interesting neuroscientific data, and this paper is a nice little warning that it isn’t optional.