Availability

Item Barcode | Call Number | Material Type | Item Category |
---|---|---|---|
30000010338260 | QA76.9.H85 C68 2014 | Open Access Book | Book |

On Order
Summary
Embodied conversational agents (ECAs) and speech-based human-machine interfaces can together enable more advanced and more natural human-machine interaction. The fusion of the two is a challenging agenda in both research and production. An important goal of human-machine interfaces is to provide content or functionality in the form of a dialog resembling face-to-face conversation. All natural interfaces strive to exploit different communication strategies that provide additional meaning to the content, whether they are human-machine interfaces for controlling an application or ECA-based interfaces that directly simulate face-to-face conversation.
Coverbal Synchrony in Human-Machine Interaction presents state-of-the-art concepts of advanced, environment-independent multimodal human-machine interfaces that can be used in different contexts, ranging from simple multimodal web browsers (for example, a multimodal content reader) to more complex multimodal interfaces for ambient intelligent environments (such as supportive environments for the elderly and agent-guided household environments). They can also be used in different computing environments, from pervasive computing to the desktop. Within these concepts, the contributors discuss several communication strategies used to provide different aspects of human-machine interaction.
Author Notes
Matej Rojc, Nick Campbell
Table of Contents
Preface | p. v |
List of Contributors | p. xi |
1 Speech Technology and Conversational Activity in Human-Machine Interaction | p. 1 |
2 A Framework for Studying Human Multimodal Communication | p. 17 |
3 Giving Computers Personality? Personality in Computers is in the Eye of the User | p. 41 |
4 Multi-Modal Classifier-Fusion for the Recognition of Emotions | p. 73 |
5 A Framework for Emotions and Dispositions in Man-Companion Interaction | p. 99 |
6 French Face-to-Face Interaction: Repetition as a Multimodal Resource | p. 141 |
7 The Situated Multimodal Facets of Human Communication | p. 173 |
8 From Annotation to Multimodal Behavior | p. 203 |
9 Co-speech Gesture Generation for Embodied Agents and its Effects on User Evaluation | p. 223 |
10 A Survey of Listener Behavior and Listener Models for Embodied Conversational Agents | p. 243 |
11 Human and Virtual Agent Expressive Gesture Quality Analysis and Synthesis | p. 269 |
12 A Distributed Architecture for Real-time Dialogue and On-task Learning of Efficient Co-operative Turn-taking | p. 293 |
13 TTS-driven Synthetic Behavior Generation Model for Embodied Conversational Agents | p. 325 |
14 Modeling Human Communication Dynamics for Virtual Human | p. 361 |
15 Multimodal Fusion in Human-Agent Dialogue | p. 387 |
Index | p. 411 |
Color Plate Section | p. 415 |