Title:
Coverbal synchrony in human-machine interaction
Publication Information:
Boca Raton, FL: CRC Press, Taylor & Francis Group, 2014
Physical Description:
xiv, 420 pages : illustrations (some color) ; 24 cm
ISBN:
9781466598256

Availability:

Item Barcode: 30000010338260
Call Number: QA76.9.H85 C68 2014
Material Type: Open Access Book
Item Category: Book
Status: On Order

Summary

Embodied conversational agents (ECAs) and speech-based human-machine interfaces can together enable more advanced and more natural human-machine interaction. Fusing the two is a challenging agenda in both research and production. An important goal of human-machine interfaces is to provide content or functionality through dialog that resembles face-to-face conversation. All natural interfaces strive to exploit different communication strategies that add meaning to the content, whether they are human-machine interfaces for controlling an application or ECA-based interfaces that directly simulate face-to-face conversation.

Coverbal Synchrony in Human-Machine Interaction presents state-of-the-art concepts of advanced, environment-independent multimodal human-machine interfaces that can be used in different contexts, ranging from simple multimodal web browsers (for example, a multimodal content reader) to more complex multimodal interfaces for ambient intelligent environments (such as supportive environments for the elderly and agent-guided household environments). They can also be used in different computing environments, from pervasive computing to desktop environments. Within these concepts, the contributors discuss several communication strategies used to provide different aspects of human-machine interaction.


Author Notes

Matej Rojc and Nick Campbell


Table of Contents

Preface p. v
List of Contributors p. xi
1 Speech Technology and Conversational Activity in Human-Machine Interaction (Nick Campbell) p. 1
2 A Framework for Studying Human Multimodal Communication (Jens Allwood) p. 17
3 Giving Computers Personality? Personality in Computers is in the Eye of the User (Jörg Frommer, Dietmar Rösner, Julia Lange and Matthias Haase) p. 41
4 Multi-Modal Classifier-Fusion for the Recognition of Emotions (Martin Schels, Michael Glodek, Sascha Meudt, Stefan Scherer, Miriam Schmidt, Georg Layher, Stephan Tschechne, Tobias Brosch, David Hrabal, Steffen Walter, Harold C. Traue, Günther Palm, Heiko Neumann and Friedhelm Schwenker) p. 73
5 A Framework for Emotions and Dispositions in Man-Companion Interaction (Harald C. Traue, Frank Ohl, André Brechmann, Friedhelm Schwenker, Henrik Kessler, Kerstin Limbrecht, Holger Hoffmann, Stefan Scherer, Michael Kotzyba, Andreas Scheck and Steffen Walter) p. 99
6 French Face-to-Face Interaction: Repetition as a Multimodal Resource (Roxane Bertrand, Gaëlle Ferré and Mathilde Guardiola) p. 141
7 The Situated Multimodal Facets of Human Communication (Anna Esposito) p. 173
8 From Annotation to Multimodal Behavior (Kristiina Jokinen and Catherine Pelachaud) p. 203
9 Co-speech Gesture Generation for Embodied Agents and its Effects on User Evaluation (Kirsten Bergmann) p. 223
10 A Survey of Listener Behavior and Listener Models for Embodied Conversational Agents (Elisabetta Bevacqua) p. 243
11 Human and Virtual Agent Expressive Gesture Quality Analysis and Synthesis (Radoslaw Niewiadomski, Maurizio Mancini and Stefano Piana) p. 269
12 A Distributed Architecture for Real-time Dialogue and On-task Learning of Efficient Co-operative Turn-taking (Gudny Ragna Jonsdottir and Kristinn R. Thórisson) p. 293
13 TTS-driven Synthetic Behavior Generation Model for Embodied Conversational Agents (Izidor Mlakar, Zdravko Kacic and Matej Rojc) p. 325
14 Modeling Human Communication Dynamics for Virtual Humans (Louis-Philippe Morency, Ari Shapiro and Stacy Marsella) p. 361
15 Multimodal Fusion in Human-Agent Dialogue (Elisabeth André, Jean-Claude Martin, Florian Lingenfelser and Johannes Wagner) p. 387
Index p. 411
Color Plate Section p. 415