Object recognition and conceptual information about objects
Independence and interdependency
Our sensory systems are constantly stimulated by the surrounding environment. At any given moment, a visually perceived scene triggers a series of cascading processes that analyze the incoming input. Many different types of information are processed in parallel and, after a couple of hundred milliseconds, become available for scrutiny. This (seeming) information overload has to be addressed by the cognitive system as it performs its high-level cognitive tasks. A central question in the cognitive sciences, then, concerns the role of these different types of information in object recognition and their status as part of the conceptual representation of a particular object. How do these different types of information feature in object recognition? And how are they integrated, if at all, in the conceptual representation of the visually inspected object? Here we addressed these questions by manipulating different kinds of object-related information, and the time allotted to their processing, in the context of experimental procedures that tap into object recognition processes. We primarily used the category of manipulable objects, and the knowledge types associated with this domain of objects, in our experimental procedures. We did so because these items afford a large number of different types of information, because the neural underpinnings of the processing of these items have been widely studied and are relatively well known, and because these neural substrates point to specific interactions between anatomical regions and knowledge types.
