TRR 318 - Monitoring the understanding of explanations (Subproject A02)

Overview

When something is being explained, the explainee signals their understanding – or lack thereof – to the explainer through verbal expressions and non-verbal means of communication, such as gestures and facial expressions. By nodding, for example, the explainee can signal that they have understood. A nod, however, can also be meant as a request to continue with the explanation; which of these is intended has to be determined from the context of the conversation. In Project A02, linguists and computational linguists are investigating how people (and later, artificial agents) recognize whether the person they are explaining something to understands – or not. To this end, the research team is examining 80 dialogues in which one person explains a social game to another, looking for communicative feedback signals that indicate varying degrees of comprehension during the process of understanding. The findings from these analyses will be incorporated into an intelligent system that can detect feedback signals such as head nods and interpret them in terms of the signaled level of understanding.
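The context dependence of a nod described above can be illustrated with a minimal, purely hypothetical sketch – the class, function, and rules below are illustrative assumptions for exposition, not the project's actual model:

```python
# Hypothetical sketch: the same head nod can carry different feedback
# functions depending on the dialogue context in which it occurs.
from dataclasses import dataclass

@dataclass
class DialogueContext:
    explainer_asked_check: bool  # explainer just asked e.g. "Does that make sense?"
    mid_explanation: bool        # explainer is still in the middle of a segment

def interpret_nod(ctx: DialogueContext) -> str:
    """Map a detected nod to a feedback function, given its context."""
    if ctx.explainer_asked_check:
        # A nod after a comprehension check answers the question.
        return "signal-understanding"
    if ctx.mid_explanation:
        # A nod during an ongoing explanation acts as a backchannel: "go on".
        return "request-continuation"
    return "ambiguous"

print(interpret_nod(DialogueContext(explainer_asked_check=True, mid_explanation=False)))
print(interpret_nod(DialogueContext(explainer_asked_check=False, mid_explanation=True)))
```

A real system would of course ground such decisions in learned multimodal models rather than two hand-written rules; the sketch only shows why the signal alone is not enough and context must be consulted.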

Key Facts

Project type: Research
Project duration: 07/2021 – 06/2025
Funded by: DFG
Website: Homepage

More Information

Principal Investigators


Dr. Angela Grimminger

Germanistische und Allgemeine Sprachwissenschaft


Hendrik Buschmeier

Universität Bielefeld


Petra Wagner

Universität Bielefeld


Project Team


Stefan Lazarov, M.A.

Transregional Collaborative Research Centre 318


Olcay Türk

Universität Bielefeld


Yu Wang

Universität Bielefeld

Cooperating Institutions

Universität Bielefeld


Publications

Towards a Computational Architecture for Co-Constructive Explainable Systems
M. Booshehri, H. Buschmeier, P. Cimiano, S. Kopp, J. Kornowicz, O. Lammert, M. Matarese, D. Mindlin, A.S. Robrecht, A.-L. Vollmer, P. Wagner, B. Wrede, in: Proceedings of the 2024 365足彩投注_365体育投注@ on Explainability Engineering, ACM, 2024, pp. 20–25.
Predictability of understanding in explanatory interactions based on multimodal cues
O. Türk, S. Lazarov, Y. Wang, H. Buschmeier, A. Grimminger, P. Wagner, in: Proceedings of the 26th ACM International Conference on Multimodal Interaction, San José, Costa Rica, n.d.
How much does nonverbal communication conform to entropy rate constancy?: A case study on listener gaze in interaction
Y. Wang, Y. Xu, G. Skantze, H. Buschmeier, in: Findings of the Association for Computational Linguistics ACL 2024, Bangkok, Thailand, 2024, pp. 3533–3545.
A model of factors contributing to the success of dialogical explanations
M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 26th ACM International Conference on Multimodal Interaction, ACM, San José, Costa Rica, n.d.
Towards a BFO-based ontology of understanding in explanatory interactions
M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 4th International 365足彩投注_365体育投注@ on Data Meets Applied Ontologies in Explainable AI (DAO-XAI), Santiago de Compostela, Spain, n.d.