Technical Program

Keynotes & Tutorials (K&T) consist of a keynote lecture, followed by an interactive tutorial on key research methods.

Generative Adversarial Collaborations (GAC) identify, debate, and make concrete plans to address the most challenging, controversial, and exciting theoretical and empirical questions in our field.

All events will be hosted on Zoom, and all times are in Eastern Daylight Time (EDT). Zoom links will be available a few minutes prior to each event. If you need technical help, please contact info@ccneuro.org.

Tuesday, September 7
Tue, 7 Sep, 12:00 - 15:00 EDT (UTC -4)
ALG
Results from the Algonauts 2021 competition

This year's installment of the Algonauts Project Challenge, "How the Human Brain Makes Sense of a World in Motion," is now closed. In this session we will introduce, discuss, and evaluate the outcome of the challenge. First, we will give a short introduction to the context and gist of the challenge and summarize the outcome. Second, we will provide a hands-on tutorial based on the development kit, showing the nuts and bolts of the challenge. Third, the top-performing teams will present their solutions in short talks. We will close with an open discussion and the announcement of the Algonauts Project Challenge 2022.

Thursday, September 9
Thu, 9 Sep, 09:00 - 12:00 EDT (UTC -4)
GAC
How does visual experience shape representations and transformations along the ventral stream?
Maria Bedny, Nancy Kanwisher, Olivier Collignon, Ilker Yildirim, Elizabeth Saccone, Apurva Ratan Murty, Stefania Mattioni

Scientific question: A key puzzle in cognitive neuroscience concerns the contribution of innate predispositions versus lifetime experience to cortical function. Addressing this puzzle has far-reaching implications, from the plasticity of the neural hardware [1] to representations in the mind and their developmental trajectory [2], and even to building artificially intelligent systems [3,4]. Yet this is a notoriously difficult topic to study empirically. We propose to tackle this issue in the context of high-level 'visual' representations through neural, behavioral, and computational studies of individuals who are sighted or congenitally blind. Congenital blindness represents a uniquely tractable and rich model for studying how innate substrate and atypical experience interact to shape the functional tuning of the brain. This work aspires to reveal the origins, including the representational and computational basis, of high-level visual representations by addressing the following questions: How does visual experience impact representations and transformations along the ventral stream? How broad is the human brain's capacity to 'retool' in the face of 'atypical' experience?

Monday, September 13
Mon, 13 Sep, 12:00 - 15:00 EDT (UTC -4)
K&T
Voxelwise Modeling: a powerful framework for recovering functional maps from fMRI data
Fatma Deniz, Jack Gallant, Matteo Visconti di Oleggio Castello, Tom Dupre la Tour, Mark Lescroart

Voxelwise Modeling (VM) is a neuroimaging analysis framework used to create detailed functional maps of the human brain. VM creates explicit encoding models of sensory and cognitive processes that map experimental stimuli and tasks onto voxel activity measured by functional magnetic resonance imaging (fMRI). Several design characteristics make VM more powerful than classical fMRI analysis methods. VM requires fewer assumptions than most other methods because model parameters are learned directly from the data. Voxelwise models can be fit to brain activity in response to naturalistic stimuli, increasing ecological validity compared to classical experiments. A wide range of stimulus properties can be extracted from the same naturalistic stimulus and studied in the same dataset, making VM more efficient and cost-effective for hypothesis testing. Model generalization is tested directly on a held-out dataset, reducing the problems of null-hypothesis statistical testing. Finally, the entire analysis can be performed within individual subjects, allowing replication in each individual and preserving subject-specific anatomical and functional differences. In this talk, I will provide an overview of VM and its applications across many domains of cognitive neuroscience, and compare it to more classical fMRI analysis methods. The tutorials will provide a detailed understanding of the methodology, with hands-on examples created in himalaya, a new Python package for VM that we developed. The talk and tutorials will teach participants how to apply the voxelwise modeling framework start to finish, from experimental design to data visualization.
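The core VM pipeline (extract stimulus features, fit a regularized linear model per voxel, score predictions on held-out data) can be sketched in a few lines. The sketch below is a minimal illustration using plain NumPy ridge regression on synthetic data, not the himalaya API; all names, dimensions, and parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: stimulus features (as would be extracted from a
# naturalistic stimulus) and fMRI responses for a handful of voxels.
n_train, n_test, n_features, n_voxels = 200, 50, 10, 5
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
true_w = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_w + 0.5 * rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ true_w + 0.5 * rng.standard_normal((n_test, n_voxels))

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: one weight vector per voxel."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ Y)

W = fit_ridge(X_train, Y_train)

# Generalization is tested directly on held-out data: correlate predicted
# and measured responses separately for each voxel.
Y_pred = X_test @ W
scores = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
          for v in range(n_voxels)]
print(["%.2f" % s for s in scores])
```

In a real VM analysis the features would come from explicit stimulus models, the regularization strength would be cross-validated per voxel, and the per-voxel prediction scores would be projected onto the cortical surface to form functional maps.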

Tuesday, September 14
Tue, 14 Sep, 09:00 - 12:00 EDT (UTC -4)
K&T
Controversial stimuli: Optimizing experiments to adjudicate among computational hypotheses
Nikolaus Kriegeskorte, Tal Golan, Wenxuan Guo

Deep neural network models (DNNs) are central to cognitive computational neuroscience because they link cognition to its implementation in brains. DNNs promise to provide a language for expressing biologically plausible hypotheses about brain computation. A peril, however, is that high-parametric models are universal approximators, making it difficult to adjudicate among alternative models meant to express distinct computational hypotheses. On the one hand, modeling intelligent behavior requires high parametric capacity. On the other hand, it is unclear how we can glean theoretical insights from overly flexible high-parametric models. Here we present one approach toward a solution to this conundrum: the method of controversial stimuli. Synthetic controversial stimuli are stimuli (e.g. images, sounds, sentences) optimized to elicit distinct predictions from different models. Because synthetic controversial stimuli provide severe tests of out-of-distribution generalization, they reveal high-parametric models’ distinct inductive biases.

Controversial stimuli can be used in experiments measuring behavior or brain activity. In either case, we must first define a controversiality objective that reflects the power afforded by different stimulus sets to adjudicate among our set of DNN models. Ideally, the objective should quantify the expected reduction in our uncertainty about which model is correct (i.e. the entropy reduction of the posterior). In practice, however, heuristic approximations to this objective may be preferable. If the models are differentiable, then gradient descent can be used to efficiently generate controversial stimuli; otherwise gradient-free optimization methods must be used.
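As a toy illustration of this optimization, the sketch below pits two hypothetical differentiable "models" (simple logistic classifiers, standing in for DNNs) against each other and ascends a crude heuristic controversiality score by numerical gradient. The models, the score, and all hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "models": logistic classifiers with different random
# weights, standing in for the DNNs whose predictions we want to pry apart.
w_a = rng.standard_normal(8)
w_b = rng.standard_normal(8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def controversiality(x):
    # High when model A confidently assigns class 1 AND model B confidently
    # assigns class 0: a crude heuristic stand-in for the posterior
    # entropy-reduction objective described above.
    return sigmoid(w_a @ x) * (1.0 - sigmoid(w_b @ x))

def grad(x, eps=1e-5):
    # Central-difference gradient; with differentiable models one would
    # use automatic differentiation instead.
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (controversiality(x + d) - controversiality(x - d)) / (2 * eps)
    return g

x = rng.standard_normal(8)        # initial "stimulus"
score_init = controversiality(x)
for _ in range(500):              # gradient ascent on controversiality
    x = x + 0.1 * grad(x)
score_final = controversiality(x)
print(score_init, score_final)    # disagreement grows during optimization
```

With image-classifying DNNs the "stimulus" vector would be an image, and the optimization would run over pixels under the same logic.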

We demonstrate the method in the context of a wide range of visual recognition models, including feedforward and recurrent, discriminative and generative, conventionally and adversarially trained models (Golan, Raju & Kriegeskorte, 2020). A stimulus was defined as controversial between two models if it was classified with high confidence as belonging to one category by one of the models and as belonging to a different category by the other model. Our results suggest that models with generative components best account for human visual recognition in the context of handwritten digits (MNIST) and small natural images (CIFAR-10). We will also share new results from applications of controversial stimuli in different domains and discuss the relationship of the method of controversial stimuli to adversarial examples, metamers, and maximally exciting stimuli, which are other types of synthetic stimuli that can reveal failure modes of models.

Controversial stimuli greatly improve our power to adjudicate among models. In addition, they provide out-of-distribution probes that reveal the inductive biases implicit to the architecture, objective function, and learning rule that defines each model. The method can drive theoretical insight because it enables us to distinguish computational hypotheses implemented in models that are sufficiently high-parametric to capture the knowledge needed for intelligent behavior.

Thursday, September 16
Thu, 16 Sep, 11:00 - 14:00 EDT (UTC -4)
GAC
Progress and process summary for GAC 2020

During this session, the six 2020 Generative Adversarial Collaboration (GAC) teams will present progress made during the past year. What scientific progress has been made? How did their collaboration lead to changes in their thinking and advances for the field? Were controversies resolved or deepened, and why? What are the plans for the future?

In the second part of this session, we will have a panel discussion about the lessons learned and the challenges of working in a collaboration where, by design, not everybody agrees from the outset. The GAC members will discuss with each other -- and with the audience -- how to further improve the process to maximize research outcomes and advance science.

Link to the GAC 2020 proposals.

Friday, September 17
Fri, 17 Sep, 11:00 - 14:00 EDT (UTC -4)
GAC
What constitutes understanding of ventral pathway function?
Charles Connor, Gabriel Kreiman, Carlos R Ponce, Carl Craver, Margaret Livingstone, Martin Schrimpf, Binxu Wang

Introduction

At the onset of visual neuroscience, first there was light, and then came explanations. Working to stimulate neurons in primary visual cortex (V1) in 1958, Hubel and Wiesel projected white light onto cat retinas using a modified ophthalmoscope and a slide projector. They had glass and brass slides with drawings and cutouts, which they used to shape light into simple geometric patterns. Among their many findings, they established that V1 neurons respond most strongly to specifically placed line segments: lines optimized in their location, length/width, color, and rotation. The simplicity of these stimuli allowed for straightforward interpretations, specifically that V1 neurons signal contour orientation.

Observations vs. interpretations

There were five components that made these experiments canonical, and these have been included in most subsequent studies in visual neuroscience:

  1. A physical stimulus (e.g., light patterns on a projection screen/computer monitor).
  2. A generative method for producing the physical stimuli (e.g., lines drawn manually on slides, variables for a computer graphics library, vectors in a generative adversarial network).
  3. An experimenter-labeled stimulus space (e.g., orientation, categories) with a metric to order/cluster the physical stimuli (e.g., angular distance, perceptual similarity).
  4. Neuronal activity associated with each stimulus (e.g., spike rates).
  5. A potential mechanism suggesting how those tuning functions could arise from earlier inputs (e.g., spatially aligned projections from neurons in the thalamus [lateral geniculate nucleus]).

The first and fourth components are observables (we refer to these as pixels and spikes). The second and third components are fundamentally entangled with the experimenter's theories and interpretations (we refer to these as methods and spaces). The linchpin observation is that in this experimental design, the relationship between pixels and spikes is causal, but the relationship between spaces and spikes is correlational. Theoretically, there can be alternative explanations implicit in any given stimulus space which also affect neuronal activity; in an experiment, the subject's brain only has access to the physical stimuli, not to the meaning attached to them.

Thursday, September 23
Thu, 23 Sep, 10:00 - 13:00 EDT (UTC -4)
GAC
How does the brain combine generative models and direct discriminative computations in high-level vision?
James J. DiCarlo, Ralf Haefner, Leyla Isik, Talia Konkle, Nikolaus Kriegeskorte, Benjamin Peters, Nicole Rust, Kim Stachenfeld, Joshua B. Tenenbaum, Doris Tsao, Ilker Yildirim

Our question is how the primate brain combines generative models and direct discriminative computations in high-level vision. Both approaches aim at inferring behaviorally relevant latent variables y from visual data x. In a probabilistic setting, the inference of the posterior p(y|x) is known as discriminative inference. The two approaches differ in how discriminative inference is implemented. In the generative approach, a model of the joint distribution p(y,x) of the latent variables and the visual input is employed. This model captures information about the processes in the world that give rise to the sensory data. Approximate inference algorithms are then used to infer the posterior over the latents given an image by estimating p(y|x) = p(y,x)/p(x). In the direct discriminative approach, a direct mapping from the sensory data to the posterior over the latents p(y|x) is learned without the use of an explicit generative model. The generative approach enables unsupervised learning of the structure of the world and promises better generalization to novel situations (statistical efficiency). Direct discriminative computations promise faster inferences (computational efficiency) that are accurate for new samples from the distribution experienced in training. In practice, inference of the full posterior may not be realistic and the visual system may settle for point estimates in certain cases.
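The distinction can be made concrete with a toy discrete example: computing p(y|x) exactly from a generative model of the joint, versus estimating the same posterior directly from samples without an explicit joint model. Everything here (the distributions, the sample size) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: binary latent y, observation x in {0, 1, 2}.
p_y = np.array([0.6, 0.4])                  # prior p(y)
p_x_given_y = np.array([[0.7, 0.2, 0.1],    # p(x | y=0)
                        [0.1, 0.3, 0.6]])   # p(x | y=1)

# Generative approach: model the joint p(y, x) = p(y) p(x|y) and apply
# Bayes' rule, p(y|x) = p(y, x) / p(x).
joint = p_y[:, None] * p_x_given_y          # shape (y, x)
posterior_gen = joint / joint.sum(axis=0)   # p(y|x), one column per x

# Direct discriminative approach: learn the mapping x -> p(y|x) from
# samples alone (here by counting), without an explicit joint model.
n = 100_000
y = rng.choice(2, size=n, p=p_y)
u = rng.random(n)
x = (u[:, None] > p_x_given_y.cumsum(axis=1)[y]).sum(axis=1)
counts = np.zeros((2, 3))
np.add.at(counts, (y, x), 1)
posterior_disc = counts / counts.sum(axis=0)

print(np.round(posterior_gen, 3))
print(np.round(posterior_disc, 3))   # approaches the exact posterior
```

The trade-off described above shows up even here: the counting estimator is fast to query but only covers x values seen in training, while the generative route can, in principle, answer queries about novel configurations of the model's variables.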

Friday, September 24
Fri, 24 Sep, 11:00 - 14:00 EDT (UTC -4)
K&T
Flexible identification of population dynamics from neural activity recordings
Mikhail Genkin and Tatiana Engel

Behaviorally relevant signals are often represented in neural population dynamics, which evolve on a low-dimensional manifold embedded into a high-dimensional space of neural responses. Revealing population dynamics from spikes is challenging because the dynamics and embedding are nonlinear and obscured by diverse and noisy responses of individual neurons. For example, the decision-related activity of single neurons was hypothesized to arise from either gradual ramping or abrupt stepping dynamics on single trials, but selection between these alternatives is brittle due to the diversity of neural responses. Moreover, ramping and stepping are impoverished hypotheses for heterogeneous decision-related neural populations. We need frameworks that can flexibly identify neural dynamics from data.

We developed a flexible framework for inferring neural population dynamics from spikes. In our framework, latent population dynamics are controlled by a potential function that can take arbitrary shape, spanning a continuous space of hypotheses. The activity of individual neurons is related to the population dynamics through unique firing-rate functions, which account for the heterogeneity of neural responses. The potential and firing-rate functions are inferred from data. On simulated neurons, our framework correctly recovered the ramping and stepping models, which correspond to linear and three-well potentials, respectively. We applied the framework to neural activity recorded from the macaque dorsal premotor cortex (PMd) during a decision-making task. The inferred potential revealed dynamics that evolve gradually towards the correct choice but have to overcome a potential barrier towards the incorrect choice, inconsistent with the simple hypotheses proposed previously. Our results demonstrate that a flexible approach can discover new hypotheses about population dynamics from data.
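The correspondence between potential shape and dynamics (a linear potential yields gradual ramping; a multi-well potential yields dwelling in metastable states with abrupt transitions) can be illustrated by simulating the latent dynamics directly. The sketch below is a hypothetical Euler-Maruyama simulation in plain NumPy; it does not use the NeuralFlow package or its inference machinery, and the potentials and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical potential gradients U'(x) over a 1-D latent variable.
def dU_linear(x):
    return -1.0                  # constant drift: gradual ramping

def dU_double_well(x):
    return 4 * x**3 - 4 * x      # wells at x = -1 and x = +1: stepping-like

def simulate(dU, x0=-1.0, dt=0.01, n_steps=2000, noise=0.5):
    """Euler-Maruyama simulation of dx = -U'(x) dt + noise dW."""
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        x[t] = (x[t-1] - dU(x[t-1]) * dt
                + noise * np.sqrt(dt) * rng.standard_normal())
    return x

ramp = simulate(dU_linear)       # drifts steadily away from the start
step = simulate(dU_double_well)  # dwells near a metastable well
print(ramp[-1], step[-1])
```

In the framework described above the direction of inference is reversed: the potential and the per-neuron firing-rate functions are unknowns fitted to recorded spikes, rather than specified in advance as here.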

The tutorial will provide hands-on experience with optimization and model selection using synthetic spike data. We will offer 3-6 exercises in Google Colab (no installation required) using our Python package NeuralFlow, available on GitHub (https://github.com/engellab/neuralflow).

Friday, October 1
Fri, 1 Oct, 12:00 - 16:00 EDT (UTC -4)
GAC
What makes representations "useful"?
Ben Baker, Richard Lange, Alessandro Achille, Rosa Cao, Nikolaus Kriegeskorte, Odelia Schwartz, Xaq Pitkow

Scientific question: Internal representations play a central role in the study of both biological and artificial intelligence, as well as in philosophy of mind, but what precisely defines a representation is challenging to pin down. Across disciplines, one common thread is that representations are typically “useful” in some sense. Centering around this concept of usefulness, we propose a cross-disciplinary GAC to share ideas and develop more precise answers to the following questions:

  1. What makes representations “useful,” both in terms of their content and their form?
  2. How does the use or downstream causal effect of a representation contribute to its meaning?

We will simplify the scope by primarily discussing these questions in the context of visual perception.