
Philosophy of AI Presentation Series

research areas

AI
AI ethics
Artificial intelligence
Digital Ethics
Digitalization
Ethics

timeframe

2024

Hybrid presentation series with the aim of popularizing the most recent topics of discussion in the philosophy of AI, for the use and interest of experts across all disciplines that have something to say about digital innovation.

This series of presentations showcases current debates in the philosophy of AI. The talks are offered by experts from several disciplines around AI and digital innovation, and the aim of the series is to foster cross-pollination and interaction between different fields, including (not exhaustively) philosophy, engineering, information systems, informatics, law, political science, and sociology.

The series is funded by the DSI Ethics Community and hosted in the Digital Library Space.

Upcoming Sessions

4. Session

May 28, 5:00 – 7:30 pm, incl. Apéro

Digital Library Space, Rämistr. 69, Zürich

Speaker: Alberto Termine (IDSIA USI-SUPSI, Lugano)

Register here


Addressing Social Misattributions of Large Language Models

The talk is based on a work co-authored with Alessandro Facchini (IDSIA USI-SUPSI) and Andrea Ferrario (UZH). Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. LLMs, which are remarkably capable of simulating roles and personas, may lead to mismatches between designers’ intentions and users’ perceptions of social attributes, risking the promotion of emotional manipulation and dangerous behaviors, cases of epistemic injustice, and unwarranted trust. To address these issues, we propose enhancing the ST framework with a fifth ‘W-question’ to clarify the specific social attributions assigned to LLMs by their designers and users. This addition aims to bridge the gap between LLM capabilities and user perceptions, promoting the ethically responsible development and use of LLM-based technology.

Past Sessions

3. Session

April 9, 3 – 4:30 pm, incl. Apéro

Digital Library Space, Rämistr. 69, Zürich

Speakers: Mateusz Dolata & Dzmitry Katsiuba (IFI)

Registration is closed. If you are interested in participating, please contact Emanuele: emanuele.martinelli@uzh.ch

Called for Higher Standards: Drones for Public Safety

Drones are being increasingly utilized by public authorities, including the police force. Their use offers potential improvements in efficiency and safety. Although current regulations address citizens’ concerns about airspace safety and privacy, there is a lack of guidance for authorities on how to design and utilize drones in a manner that the public views as legitimate, particularly in the context of law enforcement. Specifically, there has been minimal attention paid to the unique concerns of witnesses and bystanders who are particularly vulnerable during crime and emergency situations. Without this understanding, the police may use technology that could intimidate the very people they are sworn to protect.

In this talk, I will delve into the criteria citizens employ when assessing the use of drones by public institutions such as the police or fire departments. Initially, I will examine drone usage from the viewpoints of legal and regulatory aspects, criminology and security, as well as technology acceptance. Subsequently, I will concentrate on the specific instance of police drone usage during robbery incidents. I will investigate the considerations and apprehensions citizens have regarding drone usage by public authorities, and suggest a theoretical framework to ensure legitimate use of drones. I will wrap up by demonstrating how some of the safeguards proposed in this framework can be effectively implemented through appropriate technology design.

1. Session

March 19, 3 – 4:30 pm, incl. Apéro

Digital Library Space, Rämistr. 69, Zürich

Speaker: Jana Sedlakova

Registration is closed. If you would like to attend, please contact Emanuele: emanuele.martinelli@uzh.ch

Ethics of Conversational Agents in Healthcare: The dilemma of humanization

Conversational agents are increasingly used in healthcare. Common purposes and applications include health education, intervention support, disease management, and patient data collection. In my talk, I will focus on the common trend of developing conversational agents with human-like features to simulate human conversation and characteristics. I call this process humanization, and I will address its ethical and epistemological challenges with a particular focus on mental healthcare. In common applications of conversational agents, it is important to ensure that patients feel comfortable and safe interacting with a conversational agent, benefit from such interaction, and feel comfortable disclosing truthful information about themselves. However, the humanization of conversational agents might lead to feelings of deception and wrong expectations, and a bad interaction can cause negative feelings.

There are many open questions related to humanization. I will address some of them:

To what extent should conversational agents be understood in human-like terms?

What are the associated risks with such humanization?

What are the common conceptual tools that might help to navigate understanding of and interaction with conversational agents?

I will also present preliminary results from an empirical study in which I designed six different chatbot personas with different humanization levels. The aim was to explore whether different design choices impact users’ experience, their willingness to disclose, and the quality of disclosed data.

2. Session

March 25, 6 – 7:00 pm, incl. Apéro

Place: K02-F-152 (Karl Schmid-Strasse 4)

Panel discussion with Roman Lipsky and Liat Grayver, moderated by Atay Kozlovski


The Future of Art in a World with AI

The event is a panel discussion about the impact that generative AI will have on the fields of creativity and artistic production, with a particular focus on positive ways for artists to adopt AI as a tool to assist the creative process. The speakers will be Roman Lipsky (a painter who creates art in collaboration with AI systems and IBM’s quantum computer) and Liat Grayver (an artist who creates art using robotics). The moderator will be Atay Kozlovski.