Towards IMACA: Intelligent multimodal affective conversational agent

Amir Hussain*, Erik Cambria, Thomas Mazzocco, Marco Grassi, Qiu Feng Wang, Tariq Durrani

*Corresponding author for this work

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

2 Citations (Scopus)

Abstract

A key aspect of achieving natural interaction in machines is multimodality. Besides verbal communication, humans also interact through many other channels, e.g., facial expressions, gestures, eye contact, posture, and voice tone. Such channels convey not only semantics but also emotional cues that are essential for interpreting the transmitted message. The importance of this affective information, and the capability to manage it properly, is increasingly recognised as fundamental for developing a new generation of emotion-aware applications in scenarios such as e-learning, e-health, and human-computer interaction. To this end, this work investigates the adoption of different paradigms for text, vocal, and video analysis, in order to lay the basis for the development of an intelligent multimodal affective conversational agent.
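As a rough illustration of the kind of decision-level fusion such an agent might perform, the sketch below combines per-modality affect scores from text, voice, and video into a single estimate. The function name, weighting scheme, and score ranges are assumptions for illustration only and are not drawn from the paper.

# Minimal, hypothetical sketch of decision-level (late) fusion of affect
# estimates from three modalities. Weights, value ranges, and names are
# illustrative assumptions, not taken from the paper.

def fuse_affect(text_score, voice_score, face_score,
                weights=(0.4, 0.3, 0.3)):
    """Combine per-modality valence scores in [-1, 1] into one estimate.

    text_score  -- e.g. output of a concept-level sentiment analyser
    voice_score -- e.g. valence inferred from prosodic features
    face_score  -- e.g. valence inferred from facial-expression analysis
    """
    scores = (text_score, voice_score, face_score)
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

if __name__ == "__main__":
    # A mildly positive utterance, neutral voice tone, clearly positive face.
    print(fuse_affect(0.3, 0.0, 0.8))  # -> 0.36

In practice each modality-specific analyser would produce these scores, and the weights could be learned rather than fixed; this example only shows the fusion step itself.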

Original language: English
Title of host publication: Neural Information Processing - 19th International Conference, ICONIP 2012, Proceedings
Pages: 656-663
Number of pages: 8
Edition: PART 1
DOIs
Publication status: Published - 2012
Externally published: Yes
Event: 19th International Conference on Neural Information Processing, ICONIP 2012 - Doha, Qatar
Duration: 12 Nov 2012 - 15 Nov 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 7663 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 19th International Conference on Neural Information Processing, ICONIP 2012
Country/Territory: Qatar
City: Doha
Period: 12/11/12 - 15/11/12

Keywords

  • AI
  • HCI
  • Multimodal Sentiment Analysis
