TRR 318 - Subproject B6 - Ethics and normativity of explainable AI

Overview

Research on explaining and explainability needs ethical reflection, because explanations can be used to manipulate users or to create acceptance for a technology that is ethically or legally unacceptable. Moreover, designing explainability to meet ethical demands (e.g., justifiability, accountability, autonomy) does not necessarily align with meeting users’ interests. The ethical reflection proposed in our project encompasses three intertwined lines of investigation: First, we shall systematically classify the different purposes, needs, and requirements of explanations. Second, the project will reflect ethically on the technological development within TRR 318. Third, we shall combine both lines of research to pursue a theoretical and a practical goal: On the one hand, we want to extend the TRR 318’s model of explaining as a social practice into an ethical framework of explaining AI. On the other hand, we shall apply this framework to concrete projects within TRR 318 to (a) explicate how current design choices reflect ethical considerations and users’ demands, following the methodological steps of value sensitive design (VSD), and (b) formulate concrete design recommendations from these insights to inform further development within TRR 318. In the long run, B06 will also enhance the TRR’s consideration of social contexts, because it identifies issues that cannot be fixed technically but need to be addressed at a social or legal level.

Key Facts

Grant Number:
438445824
Project duration:
07/2023 - 12/2025
Funded by:
DFG
Website:
Homepage

More Information

Principal Investigators


Prof. Dr. Tobias Matzner

Transregional Collaborative Research Centre 318


Jun.-Prof. Dr. Suzana Alpsancar

Applied ethics with a focus on technology ethics in the digital world


Publications

Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen
S. Alpsancar, in: R. Adolphi, S. Alpsancar, S. Hahn, M. Kettner (Eds.), Philosophische Digitalisierungsforschung? Verantwortung, Verständigung, Vernunft, Macht, transcript, Bielefeld, 2024, pp. 55–113.
AI explainability, temporality, and civic virtue
W. Reijers, T. Matzner, S. Alpsancar, M. Philippi, in: Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024, 21st International Conference on the Ethical and Social Impacts of ICT, Universidad de La Rioja, Logroño, 2024.
Explanation needs and ethical demands: unpacking the instrumental value of XAI
S. Alpsancar, H.M. Buhl, T. Matzner, I. Scharlau, AI and Ethics (2024).
Unpacking the purposes of explainable AI
S. Alpsancar, T. Matzner, M. Philippi, in: Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024, 21st International Conference on the Ethical and Social Impacts of ICT, Universidad de La Rioja, 2024, pp. 31–35.
What is AI Ethics? Ethics as means of self-regulation and the need for critical reflection
S. Alpsancar, in: International Conference on Computer Ethics 2023, 2023, pp. 1–17.