Speakers
Asifa Majid / Oxford University
Cross-linguistic studies show substantial differences in how languages package meaning into words and grammar. Notably, this is also true in the domain of perception. A large-scale study of 20 diverse cultures has shown that even simple sensory experiences of colours, smells, and tactile textures are expressed differently across languages. This linguistic variation raises the question of whether the underlying cognition of people is also variable across cultures or whether diverse languages interface with a universal bedrock of cognition instead. Recent data suggests the answer may vary across domains, such that some aspects of cognition are more malleable to language effects than others. I will show how language shapes some aspects of auditory cognition, while leaving aspects of olfactory perception untouched, for example, and explore why these differences may exist.
Judith Burkart / University of Zurich
During joint action, adults often co-represent the partner's task and actions. To investigate the evolutionary and ontogenetic origins of co-representation, we used the joint Simon task and compared four primate species: marmoset monkeys, who, like humans, are cooperative breeders and therefore cooperate systematically in their everyday lives; capuchin monkeys and Tonkean macaques, who are less cooperative; and human children (2-5 years). We found i) co-representation in all primates, including 2-year-old humans, revealing that high levels of inhibitory control or Theory of Mind are not a prerequisite for it; ii) that the more cooperative primates (i.e. children and marmosets) were better at preventing co-representation from negatively affecting their cooperation success; and iii) that only marmosets and toddlers relied on mutual gaze to solve the conflict between self-other integration and distinction. I will end by highlighting the implications of these results for the evolution of co-representation and human hypercooperation.
Leyla Isik / Johns Hopkins University
Humans perceive the world in rich social detail. We effortlessly recognize not only objects and faces in our environment, but also other people's social interactions. The ability to perceive others' social interactions is critical for social trait judgement and ultimately guides how humans act in the social world. Using controlled experiments with simple stimuli, we recently identified a region in the posterior superior temporal sulcus (pSTS) that selectively represents others' social interactions. However, it is unclear how social interactions are processed in the real world, where they co-vary with many other sensory and social features. In this talk I will discuss new work using naturalistic video paradigms and novel machine learning analyses to understand how humans process social interactions in natural settings. Finally, I will discuss the computational implications of humans' social interaction selectivity and how we can develop artificial systems that share this core human ability.
Majid Beni / METU
A cognitive approach: scientific activity is model-based, meaning that scientists use (mathematical, physical, phenomenological, etc.) models to represent features of their target systems. This raises a philosophical question: what grounds do we have to assume that scientific models truthfully/veridically represent the structure of the world? The talk offers a possible reply by developing a cognitive account of scientific model-making under the rubric of the Free Energy Principle (FEP), which lies at the centre of a flourishing research programme in computational neuroscience.
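For readers unfamiliar with the FEP, the quantity at its core can be sketched as the standard variational free energy of active-inference accounts (a textbook formulation, not a claim about this particular talk): for observations $o$, hidden states $s$, a generative model $p(s, o)$, and an approximate posterior $q(s)$,

$$
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(s, o)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o).
$$

Because the KL divergence is non-negative, $F$ upper-bounds surprise $-\ln p(o)$; minimising $F$ therefore both improves the approximate posterior and makes observations less surprising under the model, which is the sense in which FEP-style model-making can be said to track the structure of the world.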