
Ethical and epistemic challenges of humanization of healthcare conversational agents

Researchers Involved

Jana Sedlakova

Dr. Marcia Nissen

Prof. Dr. med. Dr. phil. Nikola Biller-Andorno

Manuel Trachsel, MD, PhD

Prof. Viktor von Wyl

Research Areas

AI Ethics
Digital Ethics
Digital Support
Human-Computer Interaction
Intelligent Machines
Technology & Society

Timeframe

2020–2024

Conversational agents such as chatbots are increasingly used in healthcare. They simulate human interaction and are developed with strongly human-like features, in part to form (therapeutic) relationships with users. However, little is known about how these chatbots should interact with users and what norms and standards should guide such interactions and relationships.

Background

The potential of conversational agents lies in increasing patients’ engagement and access to healthcare, particularly for underserved groups (e.g., patients who fear stigmatization or who live in rural areas). The quality of healthcare can be improved through patient empowerment, personalization, and the early identification of health problems enabled by digital phenotyping. Furthermore, the adoption of conversational agents can automate healthcare processes, creating efficiencies and lessening physicians’ burden. The development of conversational agents aims to simulate humans and human interaction. However, such simulations may not have the same effects as genuine human interaction, both in terms of users’ experience and in the quality of data gathered for patient-reported outcomes. Furthermore, there is a lack of norms and standards guiding such interactions.

Empirical Study

Humanization of Healthcare Chatbots

Aim

With this study, we aim to investigate the impact of different design choices concerning the humanization of chatbots used to ask sensitive questions about mental health. To this end, we will develop six chatbot personas that combine three social roles with two levels of humanization (mild and strong). We seek to understand how these design variations affect users’ overall experience, their willingness to disclose information, and the quality of the data obtained.

Are you interested?

Participate here:

https://www.soscisurvey.de/mentalhealthcarechatbots/

 

Subproject 1: Normative and Theoretical Analysis

Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?

The project began with an initial analysis locating conversational agents on the spectrum between a tool and an agent, focusing on the concepts of agency, self-knowledge, and normative requirements.

For more information:

Jana Sedlakova & Manuel Trachsel (2022). Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? The American Journal of Bioethics. DOI: 10.1080/15265161.2022.2048739

Subproject 2: Empirical Research

In this project, we aim to develop six different chatbot personas. The chatbots will take on three different social roles, each at two different levels of humanization, as sketched below.
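For illustration, the six personas follow from a 3 × 2 factorial design: three social roles crossed with two humanization levels. The Python sketch below is a minimal enumeration of that design; the role labels are hypothetical placeholders, since the project description does not name the three social roles.

from itertools import product

# Hypothetical placeholder roles; the actual social roles are not
# named in the project description.
social_roles = ["assistant", "coach", "peer"]
# The two humanization levels named in the study design.
humanization_levels = ["mild", "strong"]

# Each persona is one role/level combination: 3 x 2 = 6 personas.
personas = [
    {"role": role, "humanization": level}
    for role, level in product(social_roles, humanization_levels)
]

for persona in personas:
    print(persona)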

Main research question:

How are users’ experience, information disclosure, and data quality affected by chatbots’ different levels of human-like features and social roles?

Timeline:

February 2023 – April 2024

Subproject 3: Ethics of Humanization

The simulation of human-like features, human interaction, and human relationships requires an ethics of humanization to guide this emerging space. During my research stay at the Ethox Centre at the University of Oxford, I am analyzing normative questions about epistemic trust, epistemic authority, and epistemic injustice regarding conversational agents used in mental healthcare.

Ethical & Normative Questions

  • What standards and norms should guide interactions and relationships with conversational agents?
  • How should conversational agents interact with users, and what status should they have?
  • When does a simulation of human-like features, human interaction, and human relationships make sense?
  • What is the value of such a simulation?
  • How should such a simulation be evaluated?