Technical Program: Event Detail

Monday, September 13, 12:00 - 15:00 EDT (UTC -4)
Voxelwise Modeling: a powerful framework for recovering functional maps from fMRI data
Fatma Deniz, Jack Gallant, Matteo Visconti di Oleggio Castello, Tom Dupre la Tour, Mark Lescroart

Voxelwise Modeling (VM) is a neuroimaging analysis framework that is used to create detailed functional maps of the human brain. VM creates explicit encoding models of sensory and cognitive processes that map experimental stimuli and tasks onto voxel activity measured by functional magnetic resonance imaging (fMRI). Several of the design characteristics of VM make it more powerful than classical fMRI data analysis methods. VM requires fewer assumptions than most other methods because model parameters are learned directly from the data. Voxelwise models can be fit to brain activity in response to naturalistic stimuli, increasing ecological validity compared to classical experiments. A wide range of stimulus properties can be extracted from the same naturalistic stimulus and studied in the same dataset, making VM more efficient and cost-effective for hypothesis testing. Model generalization is tested directly on a held-out dataset, reducing the problems of null-hypothesis statistical testing. Finally, the entire analysis can be performed within individual subjects, allowing replication in each individual and preserving subject-specific anatomical and functional differences. In this talk, I will provide an overview of VM and its application in many domains of cognitive neuroscience, and make comparisons to more classical fMRI data analysis methods. The tutorials will provide a detailed understanding of the methodology with hands-on examples created in himalaya, a new VM Python package that we developed. The talk and tutorials will teach participants how to apply the voxelwise modeling framework start-to-finish, from experimental design to data visualization.
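To make the pipeline concrete, here is a minimal sketch of a voxelwise modeling fit, assuming himalaya's scikit-learn-style RidgeCV estimator; the data, array shapes, and variable names are simulated purely for illustration.

```python
# Minimal VM sketch: one regularized regression model per voxel,
# evaluated on held-out data. Data here are simulated.
import numpy as np
from himalaya.ridge import RidgeCV

# X: stimulus features (time points x features);
# Y: voxel responses (time points x voxels).
n_train, n_test, n_features, n_voxels = 3000, 300, 200, 1000
rng = np.random.default_rng(0)
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
weights = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ weights + rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ weights + rng.standard_normal((n_test, n_voxels))

# Select the ridge penalty by cross-validation within the training set.
model = RidgeCV(alphas=np.logspace(-2, 5, 15))
model.fit(X_train, Y_train)

# Model generalization is tested directly on held-out data:
# one R^2 score per voxel.
scores = model.score(X_test, Y_test)
print(scores.shape)  # (n_voxels,)
```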

Questions/Comments from Live Session
Q-1: Thanks for the great talk! Sorry I missed how the feature regressors are created and how they account for BOLD response delay. Could you explain?

A-1: The features are implemented as a finite impulse response (FIR) filter, so the delays are dealt with during estimation. This will be covered in the tutorial.
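As an illustration of the idea (not the exact tutorial code), FIR delays can be implemented by concatenating time-shifted copies of the feature matrix, so the regression learns one weight per feature and delay and can absorb the BOLD lag:

```python
# FIR-style delays: duplicate each feature at several time lags.
import numpy as np

def delay_features(X, delays=(1, 2, 3, 4)):
    """Concatenate time-shifted copies of X (time x features)."""
    n_samples, n_features = X.shape
    delayed = []
    for d in delays:
        Xd = np.zeros_like(X)
        Xd[d:] = X[:n_samples - d]  # shift forward in time, pad with zeros
        delayed.append(Xd)
    return np.hstack(delayed)  # (time, n_features * len(delays))

X = np.random.randn(100, 5)
X_delayed = delay_features(X)
print(X_delayed.shape)  # (100, 20)
```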

***

Q-2: Such a great method - nice talk! Can you talk more about the ways that you relate the results across participants, beyond qualitative similarity?

A-2.1: Comparing results across subjects without data loss is a hard problem. The standard way to do it is to project results into a common space, but that projection is lossy; we sometimes do that too. Another way is to build an explicit model that links the subjects to the group. Our 2016 paper did this, but it was too complicated, so we are trying to come up with simpler methods.

A-2.2: It’s also possible to compute summary metrics for the same ROIs across subjects. Basically, all the methods that are used to compare results across subjects in other frameworks can be used in voxelwise modeling as well.
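For instance, one such summary metric could be the mean prediction accuracy within a named ROI, computed separately in each subject's own voxel grid. The sketch below is hypothetical; the subject names, masks, and scores are made up.

```python
# Per-subject ROI summary: each subject keeps its own voxel grid,
# and only the scalar summary is compared across subjects.
import numpy as np

def roi_summary(scores_per_subject, roi_masks):
    """scores_per_subject: dict subject -> (n_voxels,) R^2 scores;
    roi_masks: dict subject -> boolean mask for the same named ROI."""
    return {subj: float(np.mean(scores[roi_masks[subj]]))
            for subj, scores in scores_per_subject.items()}

rng = np.random.default_rng(0)
scores = {"sub-01": rng.uniform(0, 0.5, 1000),
          "sub-02": rng.uniform(0, 0.5, 900)}
masks = {s: rng.random(v.shape[0]) > 0.8 for s, v in scores.items()}
print(roi_summary(scores, masks))
```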

***

Q-3: To follow up - there is great power to find fine distinctions in areas that are really heterogeneous across participants (i.e., PFC), so I’m wondering whether there is a way to keep this information.

A-3: Individual differences actually account for the greatest part of the variance; the group model accounts for only a minority of it. Matteo is almost finished with a great paper on this that illustrates the problem and how to deal with it.

***

Q-4: What gets appended at the start of the time series with the delayer?

A-4: We append zeros. We could also remove the first time samples instead, but it usually does not change the results much.
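A toy illustration of the two options (hypothetical, numpy-only):

```python
import numpy as np

x = np.arange(1, 6, dtype=float)  # one feature over 5 time points
delay = 2

# Option 1: shift and pad the start with zeros (what the delayer does).
x_padded = np.concatenate([np.zeros(delay), x[:-delay]])
print(x_padded)  # [0. 0. 1. 2. 3.]

# Option 2: drop the first `delay` samples instead of padding,
# trimming features and responses so they stay aligned; in practice
# the results barely change.
x_trimmed = x[:-delay]   # features shifted...
# y_trimmed = y[delay:]  # ...and responses aligned accordingly
```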

***

Q-5: How do you define/design the features you use as regressors? I can imagine this being a crucial step between finding something and finding nothing. Also, how do you interpret voxels with low predictability? Low information content? Wrong feature design? Something else?

A-5: Great question, somewhat long answer: yes, feature definition is crucial. The choice of features is precisely how hypotheses enter this framework: each set of features embodies a different hypothesis, and hypotheses are adjudicated by comparing the models fit to each feature set. Features are defined differently for each experiment; they can be hand-labeled, computed with computer vision or natural language processing algorithms, or derived directly from the parameters used to generate the stimulus (if, as in my experiments, there are generative parameters or metadata for the stimuli).
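As a hypothetical illustration of such model comparison, one could fit a separate model per feature set and compare held-out prediction accuracy per voxel. The feature-space names ("motion_energy", "semantics") and the data here are made up, and himalaya's scikit-learn-style RidgeCV is assumed.

```python
# Model comparison: each feature set embodies a hypothesis; compare
# their held-out R^2 scores per voxel. Data are simulated.
import numpy as np
from himalaya.ridge import RidgeCV

rng = np.random.default_rng(0)
n_train, n_test, n_voxels = 2000, 200, 500
Y_train = rng.standard_normal((n_train, n_voxels))
Y_test = rng.standard_normal((n_test, n_voxels))

feature_sets = {
    "motion_energy": (rng.standard_normal((n_train, 100)),
                      rng.standard_normal((n_test, 100))),
    "semantics": (rng.standard_normal((n_train, 300)),
                  rng.standard_normal((n_test, 300))),
}

scores = {}
for name, (X_train, X_test) in feature_sets.items():
    model = RidgeCV(alphas=np.logspace(-2, 5, 15))
    model.fit(X_train, Y_train)
    scores[name] = model.score(X_test, Y_test)  # R^2 per voxel

# Voxels better predicted by one feature set than the other support
# the corresponding hypothesis for that voxel.
better = scores["semantics"] > scores["motion_energy"]
```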

Low predictability can result from a bad signal or a bad model. In the tutorials we will describe how we attempt to adjudicate between these by computing a noise ceiling (we generally exclude from analysis voxels with very low noise ceilings).
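One common way to estimate a noise ceiling, sketched here under the assumption that the same test stimulus was presented several times (this may differ from the exact method used in the tutorials), is a split-half correlation across repeats:

```python
# Split-half noise ceiling: the correlation between repeated
# presentations bounds how much variance any model could explain.
import numpy as np

def split_half_ceiling(repeats):
    """repeats: (n_repeats, n_timepoints, n_voxels) responses to the
    same stimulus. Returns one noise-ceiling estimate per voxel."""
    n_repeats = repeats.shape[0]
    half1 = repeats[: n_repeats // 2].mean(axis=0)
    half2 = repeats[n_repeats // 2 :].mean(axis=0)
    # Pearson correlation between split halves, per voxel.
    h1 = half1 - half1.mean(axis=0)
    h2 = half2 - half2.mean(axis=0)
    return (h1 * h2).sum(axis=0) / np.sqrt(
        (h1 ** 2).sum(axis=0) * (h2 ** 2).sum(axis=0))

repeats = np.random.randn(10, 300, 500)  # toy data
ceiling = split_half_ceiling(repeats)
low = ceiling < 0.1  # e.g., exclude voxels below some threshold
```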