Technical Program: Event Detail

Thursday, September 9, 09:00 - 12:00 EDT (UTC -4)
How does visual experience shape representations and transformations along the ventral stream?
Maria Bedny, Nancy Kanwisher, Olivier Collignon, Ilker Yildirim, Elizabeth Saccone, Apurva Ratan Murty, Stefania Mattioni


Scientific question: A key puzzle in cognitive neuroscience concerns the contribution of innate predispositions versus lifetime experience to cortical function. Addressing this puzzle has far-reaching implications, from the plasticity of the neural hardware [1] to representations in the mind and their developmental trajectory [2], and even to building artificially intelligent systems [3,4]. Yet this is a notoriously difficult topic to study empirically. We propose to tackle this issue in the context of high-level 'visual' representations through neural, behavioral, and computational studies of individuals who are sighted and congenitally blind. Congenital blindness represents a uniquely tractable and rich model to study how innate substrate and atypical experience interact to shape the functional tuning of the brain. This work aspires to reveal the origins, including the representational and computational basis, of high-level visual representations by addressing the following questions: How does visual experience impact representations and transformations along the ventral stream? How broad is the human brain’s capacity to ‘retool’ in the face of ‘atypical’ experience?

Questions/Comments from Live Session
Q-1: Perhaps this is going to come up, but do we have studies of blind people’s FFA while they’re experiencing other people’s faces by touching with their hands?

A-1: Great question! No, we do not. I’m going to talk more about what we do know about blind people identifying other people’s faces by touch later on though :)

Q-2: Were those 4 types of sounds also presented to sighted participants?

A-2: [Answered Live]

Q-3: For Apurva: I wonder how to best account for selection bias in the definition of the face-selective ROI. Using independent data removes any bias due to ROI overfitting to noise. But there will still be bias due to ROI overfitting to the modality of the mapping stimuli. This seems to be a conceptual, not just a technical challenge. Can all modalities be treated symmetrically somehow (e.g. by using mapping rather than ROIs or a different ROI for each modality)? Would be great to hear your thoughts on this.

A-3: We used the GSS parcel approach (https://web.mit.edu/bcs/nklab/GSS_index.shtml) to address that exact issue. The basic idea is to use a probabilistic map defined on sighted participants to define the fROI. This can be mapped onto each individual participant's brain, so we can still leverage the fROI approach (without needing any specific 'localizer' in blind subjects). Plus, we also ran some control analyses (included in the Supplementary of our PNAS paper) to address this issue.
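The parcel-based fROI selection described above can be sketched in a few lines. This is an illustrative toy on synthetic arrays, not the published pipeline: the volume shape, the cubic "parcel", and the top-10% threshold are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic volume: a group-level probabilistic parcel (here a boolean
# mask) and one participant's contrast map (e.g., faces > objects t-values).
shape = (20, 20, 20)
parcel = np.zeros(shape, dtype=bool)
parcel[8:14, 8:14, 8:14] = True           # hypothetical fusiform parcel
contrast = rng.normal(size=shape)         # participant's statistical map

# GSS-style fROI: within the group parcel, keep this participant's
# top 10% most responsive voxels (threshold chosen per individual).
cutoff = np.quantile(contrast[parcel], 0.9)
froi = parcel & (contrast >= cutoff)
```

Because the parcel constrains *where* to look while the individual map decides *which* voxels count, no modality-specific localizer is needed in the blind group.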

Q-4: To what extent is it useful to think about functional connectivity as a cause of the function or activation of a region, versus an emergent property of this function/activation? This may be in contrast to structural connectivity, which could be easier to posit simply as the cause of the activation/function. Perhaps in a model, making a distinction between these two directions would feel quite natural?

A-4: This is a good point. For what it's worth, the correlation fingerprints were derived during rest (a resting-state fMRI condition). But your point is valid, and more work needs to be done to really address this question.
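The correlation fingerprints mentioned above can be illustrated with a minimal synthetic example: each target voxel's resting-state time course is correlated with a set of seed regions, and the resulting vector of correlations is that voxel's fingerprint. The dimensions and the seed/voxel mixing are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic resting-state time courses: 5 seed regions and 100
# ventral-stream voxels, 200 time points each.
T, n_seeds, n_voxels = 200, 5, 100
seeds = rng.normal(size=(T, n_seeds))
weights = rng.normal(size=(n_seeds, n_voxels))
voxels = seeds @ weights + rng.normal(size=(T, n_voxels))  # voxels mix the seeds

def zscore(x):
    return (x - x.mean(0)) / x.std(0)

# Connectivity fingerprint: Pearson correlation of each voxel with
# every seed, giving one (n_seeds,) fingerprint vector per voxel.
fingerprints = zscore(seeds).T @ zscore(voxels) / T   # shape (n_seeds, n_voxels)
```

Comparing such fingerprint vectors across participants (or predicting response profiles from them) is the kind of connectivity-based relationship the answer refers to.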

Q-5: For Apurva: is it a fair interpretation of your connectivity-based study to say that the face area is more a multimodal area that is driven by non-visual as well as visual inputs, hence the face area activation in blind subjects?

A-5: I think the haptic and auditory activations more naturally lend themselves to that interpretation, but the comparison with sighted participants is more clearly presented in the original van der Hurk study.

Q-6: Do we have a theory to what extent other domains take over the space of visual function in blind people?

A-6: This is an interesting question. What would constitute such a theory? Predicting which functions take over? The best we have, I think, is a connectivity-based relationship.