Date of Award

5-14-2023

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Philosophy

Advisor(s)

Ben Bradley

Keywords

artificial intelligence, blameworthiness, degrees of causation, praiseworthiness, responsibility, self-driving cars

Abstract

I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but that this isn't morally problematic in a way that counts against developing or using AI.

Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility (or consequences) condition. I argue that causal responsibility is irrelevant to moral responsibility, and that the control condition and the epistemic condition depend only on factors internal to agents. Moreover, since what AI does is at best a consequence of our actions, and the consequences of our actions are irrelevant to our responsibility, no one is responsible for what AI does. That is, the so-called responsibility gap exists. However, this isn't morally worrisome in a way that counts against developing or using AI. First, I argue that current AI doesn't generate any new kind of concern about responsibility that older technologies don't already raise. Then, I argue that the responsibility gap is not worrisome because neither the gap itself nor my argument for its existence entails that no one can be justly punished, held accountable, or made to incur duties of reparation when AI causes harm.

Access

Open Access
