My goal is to perform a whole-brain analysis, so I went to the MVPA literature, and it looks like the best bet is a variant of MVPA based on feature reduction. There are alternatives: analyses based on predefined anatomical regions of interest (ROIs), true whole-brain analysis, or a multi-voxel searchlight method. However, each of those alternatives misses the target by a little bit. The ROI-based approach is susceptible to anatomical differences between individuals and their sub-structures; ultimately you are making a guess about what part of the brain does something, and excluding the alternatives with that guess. The whole-brain approach gives the machine learning algorithm too many inputs: most learning algorithms will overfit the noise and subsequently be very poor at predicting data they have not been trained on, which is the whole goal of this family of analyses. Finally, the searchlight method sweeps a sphere of pre-specified volume around the brain, looking for the location that best predicts the input; but there is no a priori reason to expect a sphere, cube, or pyramid of any particular size to be the active area. The searchlight fails critically when regions on opposite sides of the brain (like the bilaterally distributed language areas) are jointly predictive, since those areas are too far apart for one sphere to include.
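To see why the pure whole-brain approach overfits, here is a toy sketch (not fMRI code, just an illustration with made-up noise data): a 1-nearest-neighbour classifier given far more features than samples scores perfectly on its own training data and only around chance on fresh data, because it has memorised the noise.

```python
import random

random.seed(0)

def make_noise_data(n_samples, n_features):
    # pure noise: features and labels are statistically independent
    X = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_samples)]
    y = [random.randint(0, 1) for _ in range(n_samples)]
    return X, y

def nn_predict(X_train, y_train, x):
    # 1-nearest-neighbour: return the label of the closest training point
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in X_train]
    return y_train[dists.index(min(dists))]

def accuracy(X_train, y_train, X_eval, y_eval):
    hits = sum(nn_predict(X_train, y_train, x) == label
               for x, label in zip(X_eval, y_eval))
    return hits / len(y_eval)

# many more "voxels" (500) than scans (30)
X_train, y_train = make_noise_data(30, 500)
X_test, y_test = make_noise_data(30, 500)

# each training point is its own nearest neighbour, so training accuracy is 1.0,
# while accuracy on unseen noise hovers around chance (0.5)
train_acc = accuracy(X_train, y_train, X_train, y_train)
test_acc = accuracy(X_train, y_train, X_test, y_test)
print(train_acc, test_acc)
```

The real situation is less extreme than pure noise, but the geometry is the same: with tens of thousands of voxels and a few dozen scans, an unconstrained classifier can always find some combination of voxels that separates the training set.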
The approach I wish to adopt recursively shrinks the number of voxels included in the analysis until reducing the number no longer improves the result on training data. It then tests its predictive accuracy on a held-out set of test data, to see how well the chosen brain areas can predict the stimulus that the person saw or heard. It is an ideal type of classification analysis because it can be more sensitive than univariate statistical methods, by some accounts, and unlike other classification methods it does not require pre-specifying where to search, or even how many areas to look at simultaneously. Recursive Feature Elimination is the goal.
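A minimal sketch of the recursive-elimination idea, in plain Python on hypothetical toy data. This is not PyMVPA's implementation: a nearest-centroid classifier stands in for the linear classifiers typically used, and the absolute centroid difference stands in for classifier weights as the feature ranking. The loop repeatedly drops the weakest half of the voxels and keeps whichever feature set scored best on held-out data.

```python
import random

random.seed(1)

def make_data(n, n_noise, n_signal):
    # hypothetical data: a few informative "voxels" shift their mean with
    # the stimulus class; the rest are pure noise
    X, y = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        signal = [random.gauss(1.5 * label, 1) for _ in range(n_signal)]
        noise = [random.gauss(0, 1) for _ in range(n_noise)]
        X.append(signal + noise)
        y.append(label)
    return X, y

def centroids(X, y, feats):
    # per-class mean of the selected features
    out = []
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        out.append([sum(r[f] for r in rows) / len(rows) for f in feats])
    return out

def acc(X, y, feats, cents):
    # nearest-centroid classification accuracy on the selected features
    hits = 0
    for x, label in zip(X, y):
        d = [sum((x[f] - cent[i]) ** 2 for i, f in enumerate(feats))
             for cent in cents]
        hits += (d.index(min(d)) == label)
    return hits / len(y)

def rfe(X_train, y_train, X_val, y_val):
    feats = list(range(len(X_train[0])))
    best_feats, best_acc = feats, 0.0
    while len(feats) > 1:
        c0, c1 = centroids(X_train, y_train, feats)
        a = acc(X_val, y_val, feats, [c0, c1])
        if a > best_acc:
            best_acc, best_feats = a, feats
        # rank features by |centroid difference| and drop the weakest half
        order = sorted(range(len(feats)), key=lambda i: abs(c1[i] - c0[i]))
        feats = [feats[i] for i in order[len(feats) // 2:]]
    return best_feats, best_acc

X_train, y_train = make_data(60, 200, 5)  # 5 signal voxels among 200 noise
X_val, y_val = make_data(60, 200, 5)
feats, a = rfe(X_train, y_train, X_val, y_val)
```

A real analysis would rank voxels by the trained classifier's weights and use proper cross-validation with a final untouched test set, but the shrinking loop is the same shape.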
I was going with the Princeton toolbox, but came across a rather convincing paper that argues many advantages of PyMVPA. Among them: it is "easy to install", it can integrate many established, existing data-mining approaches without hard-coding them, it is free and open-source, and it works on any platform. The PyMVPA manual has a picture of a dude performing pattern classification on fMRI data with his freaking cellphone. Awesome.
If you can do it on a cellphone, then I'm set. I have a computer (a Mac).
Hours later, I'm wrestling with Mac OS X (Leopard). I've gone through a million blogs and help files online, and the missing step in the PyMVPA instructions is to install Xcode first. If you want to install PyMVPA, you have to put the Xcode tools on the computer so it can compile and make files the way that MacPorts wants it to. It took me a couple of hours to figure that out, so I'm writing it down here. Xcode is a free download from developer.apple.com: register for a free account, download the latest version, and it installs very quickly.
With that done, installing MacPorts is tricky too. For me, it involved hunting down non-existent files with names like .bash_profile and csh.cshrc using the locate command in Terminal (e.g., > locate .profile). Finally, I used nano to create a file called .profile, in which I defined the PATH variables as given by the PyMVPA website, saved, closed Terminal, reopened Terminal ... and ... when I told port to install, it worked this time! Boo ya!
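For anyone stuck on the same step, a .profile along these lines covers the standard MacPorts defaults (double-check the PyMVPA site for the exact variables it wants; /opt/local is just the default install prefix):

```shell
# ~/.profile -- make the MacPorts tree visible to the shell
# /opt/local is the MacPorts default prefix; adjust if yours differs
export PATH=/opt/local/bin:/opt/local/sbin:$PATH
export MANPATH=/opt/local/share/man:$MANPATH
```

Remember that the file only takes effect in a new Terminal session, which is why closing and reopening Terminal mattered.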
In summary: figure out how much troubleshooting you like to do. PyMVPA is allegedly very easy to use once installed, and the Mac install is supposed to update itself cleanly, which makes up for some of the trouble of installing it. Princeton's MVPA toolbox is trivially easy to install by comparison, but it may not be as transparent and is definitely less extensible. Maybe in a month I'll update this thing to say whether Princeton or Python is the better MVPA tool. Although I am not trying AFNI's MVPA tool, it is worth mentioning that AFNI also has a toolbox for pattern-classification analyses.
It's still installing, so I'm going to go for a little walk. Later.