The natural environment of human communication is face-to-face interaction. It is a multimodal phenomenon: across all cultures and ages, speech is accompanied by visual bodily signals such as gestures, facial expressions, and eye gaze. Moreover, a successful conversation requires recognizing what social action (or ‘speech act’) an utterance is performing (e.g., is it an offer, a question, a compliment?). The goal of my project is to explore the visual signals that contribute to social action attribution in face-to-face interaction, and the extent to which they form multimodal ‘gestalts’ (i.e., stable combinations with other signals), using behavioural and neurocognitive measures in virtual reality (VR).
This project is part of the ERC Consolidator grant “Communication in Action (CoAct)” awarded to Dr. Judith Holler.