Laura Crompton, BA MA

Prae Doc


Laura Caroline Crompton
Department of Philosophy
University of Vienna
Universitätsstraße 7 (NIG)
1010 Vienna

Room: C 0211 (NIG)
Phone: +43-1-4277-46073
Mail: laura.crompton@univie.ac.at
Web: https://univie.academia.edu/LauraCrompton


 

Areas of Specialization

Ethics of AI, Robot Ethics, Philosophy of Technology

 

FoNTI-Project

My research centres on the ethical concerns and social implications that arise through human-AI interaction.
In my PhD project I analyse and evaluate the unintended influence AI can have on its human users. More specifically, I concentrate on AI that is implemented to support human decisions. Such AI ranges from online shopping and booking, through health applications, to risk assessment in jurisprudence and child care in social work. By introducing what I call the objectivity fallacy, I argue that we need to differentiate between two kinds of AI as decision support, both of which can be found to influence their human users. What this influence looks like largely depends on the kind of AI as decision support one is confronted with: while one kind is designed to actively influence its human users, the influence of the other is an unintended by-product of the respective human-AI interaction.

This thesis mainly concentrates on unintended AI influence and the ethical implications that come with it.

The main claim I aim to make is that unintended AI influence shows the way we usually characterise human-AI interaction to be fundamentally flawed, and this has important implications for how we usually ascribe responsibility in human-AI interaction. To give these claims the necessary theoretical grounding, I introduce the notion of decision points. While human-AI interaction is supposed to be characterised by a human decision point, unintended AI influence results in a human-AI decision point. If we are unable to determine a human decision point, we cannot hold the acting human agent responsible for the respective action. The introduction of an Extended Agency Theory is intended to help address this challenge.

Supervisors: Hans Bernhard Schmid, Mark Coeckelbergh

 


Upcoming Talks

TBA

 

Past Talks

15.05.2019 (15:00-16:30) // Guest Lecture - Media, Technology, and Romanticism: “Romanticism with the machine: Mechanical Romanticism”. Vienna University // HS 31 (Hauptgebäude).

25.05.2019 // Talk - Swiss Embassy - Towards an Inclusive Future in AI: "Ethics of AI" // Swiss Embassy Vienna.

11.07.2019 // Talk - European Data Protection Board Conference - Can we trust our artificial eyes?: "Human Enhancement, Privacy and Ethics" // EDPS Brussels.

07.01.2020 // Guest Lecture - Introduction to Philosophy of Technology & Media: “Analytic Philosophy of Technology”. Vienna University // HS III (NIG)

20.08.2020 // Talk - Trust in Robots and AI Workshop: "A Critical Analysis of the Trust Human Agents have in Computational and Embodied AI" // Robophilosophy Conference 2020 (online)

01.10.2020 // Talk - Workshop on Legal and Ethical Questions in Autonomous Driving: "Ethics of Autonomous Driving" // TechMeetsLegal

17.12.2020 // Talk - Ethics and Technology Seminar Series: "An Analysis and Evaluation of the Influence AI has on Human Agents" // MCTS (TU Munich) (online)

27.09.2021 // Talk - "The problem of AI influence" // PT-AI Conference Gothenburg

02.12.2021 // Panelist - "The artificial intelligence act - is Big Brother still watching you?" // Department of European, International and Comparative Law, University of Vienna

 

Teaching

TBA

 

Publications

Laura Crompton. "A Critical Analysis of the Trust Human Agents have in Computational and Embodied AI". In: Nørskov, M., Seibt, J., Quick, O. (eds.). 2020. Culturally Sustainable Social Robotics—Proceedings of Robophilosophy 2020. Series Frontiers in Artificial Intelligence and Applications, IOS Press, Amsterdam.

Laura Crompton. "The decision-point-dilemma: yet another problem of responsibility in human-AI interaction". In: Journal of Responsible Technology. Sep 2021. doi:10.1016/j.jrt.2021.100013.

Robert Woitsch, Wilfrid Utz, Anna Sumereder, Bernhard Dieber, Benjamin Breiling, Laura Crompton, Michael Funk, Karin Bruckmüller, Stefan Schumann. "Collaborative Model-Based Process Assessment for Trustworthy AI in Robotic Platforms". In: Communications in Computer and Information Science. Springer International Publishing, 2021. doi: 10.1007/978-3-030-86761-4_14.

 

 

Other

Guest comment in the Austrian newspaper Der Standard on the problematic dynamics of Big Tech and tech regulation: https://www.derstandard.at/story/2000122287859/der-gesetzgeber-als-marionette-der-tech-konzerne

Interview on the Austrian national radio station Ö1 - Tonspuren. Title: "Solaris" by Stanislaw Lem. Online: oe1.orf.at/programm/20210907/649877/Solaris-von-Stanislaw-Lem