Michigan, Psychology & Linguistics

The responsibility of bounded agents

We consider the ethical responsibility of computationally bounded agents. The immediate goal is to identify specific problems that arise in creating normative theories of ethical behavior for bounded agents characterized using theoretical constructs from two computational frameworks in artificial intelligence: reinforcement learning and bounded optimality. The long-term goal is a normative framework for ethical decision making among interacting agents that is parameterized by agents' computational architecture types---that is, a moral code for computationally rational agents. Such a general framework could be useful both for grappling with intuitions concerning the responsibility of humans with impaired cognitive control and, more generally, for understanding a moral landscape populated by biological and artificial agents of widely varying computational capacities.