Inter-ACT: An Affective and Contextually Rich Multimodal Video Corpus for Studying Interaction with Robots

Title: Inter-ACT: An Affective and Contextually Rich Multimodal Video Corpus for Studying Interaction with Robots
Publication Type: Conference Paper
Year of Publication: 2010
Authors: Castellano G, Leite I, Pereira A, Martinho C, Paiva A, McOwan PW
Refereed Designation: Refereed
Conference Name: ACM International Conference on Multimedia
Publisher: ACM
Conference Location: Florence, Italy
Keywords: affect recognition, application-dependent design, context awareness, human-robot interaction, lirec, multimodal video corpus
Abstract: The Inter-ACT (INTEracting with Robots - Affect Context Task) corpus is an affective and contextually rich multimodal video corpus containing affective expressions of children playing chess with an iCat robot. It contains videos that capture the interaction from different perspectives and includes synchronised contextual information about the game and the behaviour displayed by the robot. The Inter-ACT corpus is mainly intended as a comprehensive repository of naturalistic, contextualised, task-dependent data for training and evaluating an affect recognition system in an educational game scenario. The richness of the contextual data, which captures the whole human-robot interaction cycle, together with the fact that the corpus was collected in the same interaction scenario as the target application, makes the Inter-ACT corpus unique in its genre.
URL: http://dl.lirec.org/papers/CastellanoEtAl_ACMMultimedia_2010.pdf
Posted by Ginevra on Wednesday, 16 February, 2011
