Ethical and normative issues of humanization of healthcare conversational agents
Researchers Involved
Prof. Dr. med. Dr. phil. Nikola Biller-Andorno
MD, PhD Manuel Trachsel
Prof. Viktor von Wyl
Research areas
Timeframe
2020 – 2024
Contact
jana.sedlakova@ibme.uzh.ch

Conversational agents such as chatbots are increasingly used in health care. They simulate human interaction and are developed with strong human-like features, among other reasons to form (therapeutic) relationships with users. However, little is known about how these chatbots should interact with users, and what norms and standards should guide such interactions and relationships.
Background
The potential of conversational agents lies in increasing patients’ engagement and access to health care, particularly for underserved groups (e.g., patients who fear stigmatization or who live in rural areas). The quality of health care can be improved through patient empowerment, personalization, and the identification of health problems at an early stage through digital phenotyping. Furthermore, the adoption of conversational agents can automate healthcare processes, creating efficiencies and lessening physicians’ burden. The development of conversational agents aims at simulating humans and their interaction. Yet even though human interaction is simulated and tested in conversational agents, such simulations may not have the same effects in terms of users’ experience and the quality of the data gathered in the context of patient-reported outcomes. Furthermore, the norms and standards guiding such interactions remain unclear and undefined.
Ethical & Normative Questions
- What standards and norms should guide interactions and relationships with conversational agents?
- How should conversational agents interact with users, and what status should they have?
- When does a simulation of human-like features, human interaction, and human relationships make sense?
- What is the value of such a simulation?
- How should such a simulation be evaluated?
1. Subproject: Normative and theoretical analysis
Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?
The project started with an initial analysis of conversational agents on the spectrum between a tool and an agent. The analysis focused on the concepts of agency, self-knowledge and normative requirements.
For more information:
(2022) Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?, The American Journal of Bioethics, DOI: 10.1080/15265161.2022.2048739
2. Subproject: Empirical Research
In this project, we aim to develop six different chatbot personas: the chatbots will take on three different social roles, each implemented at two different levels of humanization.
Main research question:
How do the different levels of human-like features and the social roles of chatbots affect users’ experience, information disclosure, and data quality?
Timeline:
February – October 2023
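The 3 × 2 study design described above (three social roles, each at two levels of humanization) can be sketched as a minimal persona configuration. The role and humanization labels below are illustrative placeholders, not the project’s actual experimental conditions:

```python
from itertools import product

# Hypothetical labels -- the actual social roles and humanization
# levels used in the study are not specified here.
SOCIAL_ROLES = ["coach", "assistant", "peer"]
HUMANIZATION_LEVELS = ["low", "high"]

def build_personas():
    """Enumerate one chatbot persona per (role, humanization) combination."""
    return [
        {"role": role, "humanization": level}
        for role, level in product(SOCIAL_ROLES, HUMANIZATION_LEVELS)
    ]

personas = build_personas()
print(len(personas))  # 3 roles x 2 levels = 6 personas
```

Crossing the two factors in this way yields one persona per cell of the design, so each social role can be compared across both humanization levels.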
3. Subproject: Development of a Normative Framework
The project will conclude with the development of a normative framework to evaluate and guide the simulation of human-like features, human interaction, and human relationships. In this final part, we will synthesize the results of subprojects 1 and 2.