Invited speakers
University Medical Center Hamburg-Eppendorf, Germany
Minds in motion
Traditionally, it has been assumed that we possess privileged access to the states internal to our own bodies—an access that cannot be extended to others. Yet, observing other bodies in motion offers a powerful window into their cognitive states. In this talk, I will introduce kinematic coding, an experimental and computational framework developed in my lab to quantify how cognitive state information is both encoded in and read out from movement kinematics. I will explore the potential of this approach for capturing the social transmission of information during naturalistic behavior and its relevance for action prediction.
University of Pittsburgh, USA
The development, hemispheric organization, and plasticity of high-level vision
Adults recognize complex visual inputs, such as faces and words, with remarkable speed, accuracy, and ease, but a full understanding of these abilities is still lacking. Much prior research has favored a binary separation of faces and words, with the right hemisphere specialized for the representation of faces and the left hemisphere specialized for the representation of words. Close scrutiny of the data, however, suggests a more graded and distributed hemispheric organization, as well as differing hemispheric profiles across individuals. Combining detailed behavioral data with structural and functional imaging data and intracranial recordings reveals how the distribution of function both within and between the two cerebral hemispheres emerges over the course of development, and a computational account of this mature organization is offered and tested. Provocatively, this mature profile is more malleable than previously thought, and cross-sectional and longitudinal data acquired from individuals with hemispherectomy reveal how a single hemisphere can subserve both visual classes. Together, the findings support a view of cortical visual organization (and perhaps the organization of other functions too) as plastic and dynamic, both within and between hemispheres.
Giessen University, Germany
Exploring and perceiving material qualities
Material qualities, like stickiness, softness, or glossiness, play an important role in everyday decisions about how to interact with objects. For example, we would certainly settle into an upholstered plush armchair differently than into one carved out of marble; similarly, we would grasp a wet coffee mug differently than a dry one. Past research has shown that humans are amazingly good at estimating optical and mechanical material qualities from images alone, and one of the important questions in material perception is: how do we do this? Past research, including my own, has focused on identifying image regularities (visual cues) that may signal the presence and markedness of a particular material quality. For example, we and others have shown that image motion can be quite a powerful source of information that conveys material qualities, such as shininess, wobbliness, or brittleness. Such visual cues may emerge by simply observing statistical regularities in the environment (e.g., the presence of highlights on perfectly smooth surfaces) and also by actively seeking out and producing visual information that helps attain a specific goal (e.g., moving the object, or moving around the object to see highlights and to produce highlight motion). In this talk I will discuss our recent work that focuses on the role that interactions might play in forming visual representations of material qualities, and show how exploratory hand movements can yield visual information about the material of an object.
University of Pennsylvania, USA
Cognitive maps, navigational strategies, and the human brain
Every day, we navigate between places in our extended environment—our home, the office, a café, the corner store. To do this successfully, we must be able to recall where these places are in the world, and we must have a strategy for using this spatial knowledge to choose efficient routes. How are these core elements of spatial navigation implemented by the human mind/brain? In this talk, I will describe recent studies from my lab that attempt to answer this question. To understand the organization of spatial memory (“cognitive maps”), we performed studies in which participants were familiarized with virtual environments containing several objects and then underwent fMRI scanning while recalling the spatial relationships between the objects. Behavioral and neuroimaging results revealed that participants’ cognitive maps were not simple Euclidean reference frames as classical theories would suggest; rather, they exhibited an articulated structure that reflected the segmented and hierarchical organization of the environment. To understand the strategies that people use to choose a route to a destination, we scanned participants with fMRI while they performed a “taxi-cab” task that required them to search for passengers and then deliver them to several possible goal locations within a virtual city. We found that participants’ behavioral decisions could be explained by a predictive coding (successor representation) model; moreover, fMRI signals in the hippocampus and neocortical spatial memory regions tracked the computational components of the model. Together these findings illuminate the representations that people form of environmental spaces and the algorithms they use to navigate to goal locations within these spaces.
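The successor representation (SR) model invoked above can be illustrated with a minimal sketch. Everything here (the toy 4-state linear track, the discount factor, the reward placement) is an illustrative assumption, not the speaker's actual model or task:

```python
import numpy as np

gamma = 0.9  # temporal discount factor (assumed value)

# Transition matrix T for a 4-state linear track in which the agent
# always steps one state to the right; the final state is absorbing.
T = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
], dtype=float)

# The SR matrix M[i, j] gives the expected discounted future occupancy
# of state j when starting from state i: M = (I - gamma * T)^(-1).
M = np.linalg.inv(np.eye(4) - gamma * T)

# Value of each state given a reward at the goal (state 3).
r = np.array([0.0, 0.0, 0.0, 1.0])
V = M @ r  # values rise monotonically toward the goal
```

The appeal of the SR for navigation is that it caches predictions about *where the agent will be*, separately from *where reward is*: when the goal moves, only `r` changes, and new values follow from a single matrix-vector product.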
University of Lyon, France
Growing technological opacity and the social brain
Thanks to our remarkable ability to transmit technical content, our technologies have become more sophisticated. Intuitively, one might assume that this evolution has imposed greater demands on the technical brain. However, recent neuroscientific research suggests that this evolution has also increasingly engaged the social brain to address the opacity it has generated in making, transactive, and use processes. This talk builds on these findings to design a neurocognitive framework that outlines the role of the social brain in (1) facilitating the transmission of making processes, (2) relying on human experts as extensions of our technical cognition, and (3) engaging with certain technologies as if they were intentional biological agents.
University of Torino, Italy
Vision without awareness: from multiple pathways to global brain reorganisation
Damage to the primary visual cortex (V1) leads to clinical blindness, but several functions may persist without awareness – a condition known as blindsight. Because blindsight has been documented across primates, it offers a comparative window onto the V1-independent pathways that are also present in the intact brain and onto the plastic reorganisation that follows injury.
After a brief historical sketch, I will present converging behavioural, diffusion-tractography and fMRI evidence that residual non-conscious vision for action and emotion is mediated by transcallosal recruitment of the intact hemisphere.
Leveraging resting-state fMRI and an information-theoretic measure of joint entropy in humans and macaques, I will present recent evidence showing that blindsight-positive participants preserve the brain's canonical sensory-to-association hierarchy of functional connectivity. Conversely, subjects without residual vision display a flattened profile marked by heightened entropy in visual–somatosensory networks centred on the pulvinar. The integrity of this hierarchy tracks non-conscious detection accuracy, indicating that preserved, hierarchically organised connectivity, not lesion extent per se, determines whether visual information can be exploited non-consciously to guide behaviour.
These findings position blindsight as a valuable model for distributed visual processing, thalamo-cortical interplay and large-scale cortical plasticity.
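The joint-entropy measure mentioned above can be sketched in a few lines. The histogram-based estimator and all parameters here are illustrative assumptions, not the study's actual analysis pipeline:

```python
import numpy as np

def joint_entropy(x, y, bins=8):
    """Estimate the joint Shannon entropy H(X, Y), in bits, of two
    signals after discretizing them into a 2-D histogram."""
    hist, _, _ = np.histogram2d(x, y, bins=bins)
    p = hist / hist.sum()   # normalize counts to a joint probability mass
    p = p[p > 0]            # drop empty bins (0 * log 0 is taken as 0)
    return float(-np.sum(p * np.log2(p)))
```

Tightly coupled signals concentrate probability mass in few bins and therefore yield lower joint entropy than independent signals, which is why elevated entropy between visual and somatosensory networks indexes a less differentiated, "flattened" connectivity profile.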
KU Leuven, Belgium
Mesoscale functional organization of the macaque cortex
Recent advancements in high-resolution non-invasive imaging in humans have confirmed the presence of columnar structures that were previously identified in monkeys. These fMRI studies have sparked optimism in uncovering mesoscale functional structures in largely uncharted areas of the cortex. Using implanted phased-array coils and contrast-agent-enhanced fMRI, we began mapping the rhesus monkey cortex with 0.6 mm isotropic voxels, covering the entire brain and employing a wide range of visual stimuli. In this presentation, I will showcase novel, unpublished data revealing highly consistent mesoscopic functional units (MFUs) across the occipital, parietal, temporal, and frontal cortices. I will argue that the fine-scale functional organization of the primate neocortex is far more intricate than what is suggested by current parcellation models, which rely on electrophysiology, tractography, and low-resolution fMRI data. Identifying and functionally characterizing the complete MFU "alphabet" will be crucial for understanding vision and cognition.