Invited speakers

Chris Baker

National Institute of Mental Health, USA

Embracing the complexity of visual understanding

Light falling on the retina triggers neural activity that is propagated along sub-cortical and cortical pathways to ultimately elicit the perceptual experience of a meaningful world that is full of people, places, and things. The standard view of visual processing posits a hierarchical, feedforward system in the service of recognition or classification, with cortical regions specialized for processing specific categories of visual stimuli. Here, I will argue that we need to move beyond this simple model to consider the complexity of (i) the visual input, (ii) the underlying anatomy, and (iii) the range of behaviors that are supported by processing in visual cortex. In particular, I will focus on recent work in which we employed dense sampling of perceptual spaces combined with data-driven computational approaches to elucidate the relationship between neural and behavioral representations. First, using a large-scale database of images from over 1500 object concepts (“THINGS”) and behavioral responses across nearly 4.7 million trials, we identified 66 highly reproducible and meaningful object dimensions that we linked to spatiotemporal neural dynamics with MEG and fMRI. Second, by densely sampling colors (360 hues, 3 levels of luminance), we reconstructed color space in individual observers directly from neural signals measured with MEG and linked those temporal dynamics with behavioral color judgments. Collectively, this work highlights how we can embrace the complexity of visual processing, prioritizing behavior and accepting the underlying heterarchical nature of the visual system.

Paolo Bartolomeo

Paris Brain Institute (ICM), France

Vision in the mind’s eye

The dominant neurocognitive model of visual mental imagery stipulates that mental images are projected onto retinotopically mapped regions, including the early visual cortex (Kosslyn et al., 1995; Pearson, 2019). However, detailed case reports of neurological patients with perceptual deficits such as cortical blindness (Goldenberg et al., 1995) or visual agnosia (Bartolomeo et al., 1998; Behrmann et al., 1992; Servos & Goodale, 1995) and intact visual mental imagery abilities challenge the perception/imagery equivalence model (Bartolomeo et al., 2020).
Nevertheless, these dissociations of performance could depend at least in part on the use of strategies other than visualization: for example, to complete the task at hand (e.g., state from memory whether an uppercase A is larger at the top or at the bottom), patients can use spatiomotor, nonvisual strategies (Bartolomeo et al., 2002). Importantly, however, visual knowledge of colors can hardly benefit from such strategies, because color is a typically nonspatial domain. Preserved color imagery in patients with acquired cerebral achromatopsia (Bartolomeo et al., 1997; Shuren et al., 1996) thus provides particularly compelling evidence against the claim that perception and imagery mechanisms completely overlap in the brain (Bartolomeo et al., 2023). On the other hand, perusal of the literature revealed that impaired visual mental imagery does not typically arise as a result of occipital lesions (Bartolomeo, 2002), but rather follows extensive damage to the temporal lobe of the left hemisphere (Bartolomeo, 2008).
Neuroimaging evidence helped specify these findings. A meta-analysis of 27 fMRI studies of visual mental imagery (Spagna et al., 2021) unveiled imagery-related activity not only in frontoparietal areas, where it had been detected by earlier studies (Mechelli et al., 2004), but also in a previously undescribed functional region of the left Ventral Temporal Cortex (VTC), which we called the Fusiform Imagery Node (FIN). FIN activity was observed across all imagery tests, regardless of the specific domain of imagery (colors, faces, etc.). Bayesian analysis determined that there was no evidence whatsoever of imagery-related activity in the early visual cortices. While challenging the prevailing model of visual mental imagery, this pattern of brain activity was clearly consistent with the causal evidence from neuropsychology. Furthermore, it pinpointed the FIN as a plausible critical region within the left temporal lobe, the impairment or disconnection of which could underlie domain-general imagery deficits in neurological patients (Bartolomeo, 2021).
To specify the brain networks of visual mental imagery, we used 7T ultra-high-field fMRI (Liu, Zhan, et al., 2023), together with a behavioral imagery/perception test battery (Liu & Bartolomeo, 2023). We observed the activation of (1) the relevant domain-preferring VTC areas, (2) the domain-general FIN, and (3) frontoparietal networks important for working memory and attention (Seidel Malkinson et al., 2024), whose activity was temporally correlated with that of the FIN. Color perception nicely replicated the posterior, central, and anterior color-biased patches identified by Lafer-Sousa et al. (2016). Importantly, however, color imagery exclusively engaged the anterior patch, in conjunction with the FIN. Once again, visual mental imagery was shown to only partially overlap with visual perception, with the overlap occurring primarily within the high-level visual cortex of the VTC. These findings provide converging evidence that visual experience in both perception (Liu, Bayle, et al., 2023; Spagna et al., 2023) and mental imagery relies on the coordinated activity of high-level visual cortex and frontoparietal networks.
Thus, causal evidence derived from individual cases of brain-damaged patients continues to be a vital element in cognitive neuroscience research. It serves as both an inspiration for neuroimaging studies and a constraint for theoretical modeling.

Picture Copyright: Astrid di Crollalanza

Jody Culham

Western University, Canada

Immersive Neuroscience: Bringing Cognitive Neuroscience Closer to the Real World

As a relatively young field, human cognitive neuroscience has relied heavily on reductionist approaches to understanding human brain function. Compared to the natural environment, neuroscience experiments typically utilize simplified stimuli (often images), tasks (often stimulus-response tasks), and experimental situations aimed at isolating putative cognitive processes. Ultimately, however, neuroscience and its applications must understand and enhance brain function in the real world.

This talk proposes a new approach called Immersive Neuroscience, which examines brain function in the real world and in compelling simulations, such as simulated 3D, virtual reality (VR), and gaming environments. The goal of Immersive Neuroscience is to maximize the realism of the stimuli presented, the tasks that participants perform, and the interplay of cognitive processes.

The talk will provide a theoretical context for why realness matters and how neuroscience can incorporate and validate potential experimental proxies for reality. I will also review a series of experiments that demonstrate that realness matters for the choice of stimuli, tasks, and experimental complexity. I will end with a general discussion of the opportunities and challenges provided by an immersive neuroscience approach.

Roland Fleming

Giessen University, Germany

Grasping

The human hand is extraordinarily dexterous. In everyday life, we effortlessly pick up and use objects without a second thought. Yet even state-of-the-art robots struggle to achieve anything close to the robustness, flexibility, or precision of human grasps. Successful grasping requires selecting—out of all possible points of contact between the hand and object—the tiny subset that yields effective outcomes. The pose of the hand must obey certain constraints to enable a grasp that is both comfortable and stable. I will present theoretical and empirical studies on how humans select both precision-grip and whole-hand grasps, using a combination of motion capture (MoCap) and deep learning to reconstruct the regions of contact between the hand and objects. The findings show how participants exploit the biomechanics of the soft tissues of the hand, and how grasping is modulated by the task to be performed.

image credit: Lina Klein

Angelika Lingnau

University of Regensburg, Germany

Hans Op de Beeck

KU Leuven, Belgium

Visual categories in the brain are alive and kicking: Bodies and faces rule object space

Decades ago, the first category-selective region in the human brain was described: the fusiform face area. Since then, neuroimaging studies have uncovered a complex landscape of focal and distributed selectivity in lateral occipital and ventral occipitotemporal cortex, with some apparent discrepancies in terms of what the primary organizational principles are. I will discuss how we can explain the organization of this large brain region through the interplay of multiple factors, such as visual statistics, computational constraints, and behavioral goals. Together, these factors once again point to visual categories as a main organizational principle in the human brain. Empirical studies that dissociate category membership from other relevant dimensions support this primary role of category selectivity.

Liuba Papeo

Institute of Cognitive Sciences Marc Jeannerod, France

Beyond animate-inanimate: social-nonsocial in human vision

The distinction between animate and inanimate objects is fundamental to the human conceptual framework. This distinction is also a main principle for the organization of information in human visual cortex, allowing for the efficient detection and recognition of animates, or animals. However, humans' natural habitat is not the animal kingdom but the social world. And, in the social world, the relevant entities are not just animals, or animates, but social beings. Thus, human life, and human nature itself, requires more than the ability to distinguish animate from inanimate: it requires the ability to distinguish social from nonsocial. I will present behavioral and neuroimaging research showing that the social-nonsocial distinction is a principle of organization of information in visual perception (and in visual cortex), which reveals itself early in the behavior of humans, as well as of other socially gregarious species such as monkeys and chicks.