Invited speakers

Marina Bedny

Johns Hopkins University, USA

Jeffrey Bowers

University of Bristol, UK

Current research practices in NeuroAI do not support the many strong claims of ANN-Human Alignment

Artificial neural networks (ANNs) developed in computer science are successful in a range of vision, language, and reasoning tasks. They can also predict behavioural responses and brain activations of humans better than alternative models. This has led to the common claim that ANNs are the best models of biological intelligence. However, most prediction studies are correlational and accordingly do not support causal conclusions. Furthermore, researchers are incentivized to identify ANN-human similarities, as reviewers and editors are more likely to publish studies that report similarities rather than differences. Accordingly, researchers rarely carry out "severe" tests of their claims that are more likely to falsify their conclusions (if indeed the conclusions are false). I show that, when the relevant experiments are carried out, ANNs do a poor job of explaining human intelligence. The field of NeuroAI needs to change its methods to better characterize ANN-human alignment and build better models of minds.

Tim Kietzmann

University of Osnabrück, Germany

Roberta Klatzky

Carnegie Mellon University, USA

Kami Koldewyn

Bangor University, UK

Marius Peelen

Radboud University, Netherlands

Seeing and thinking: Interplay between externally and internally generated neural representations

Elizabeth Tibbetts

University of Michigan, USA

Wasps know each other's faces: The development and evolution of face recognition in Polistes