How we know what not to think
A basic problem in the cognitive sciences is that there is just too much to think about. When we choose how to act, some prior process must be responsible for constructing the “choice set”—determining which options will come to mind in the first place. I will present evidence that people use cached value representations to construct choice sets, and describe a formal model that illustrates why this approach is adaptive. I will then present several studies indicating that moral norms exert an especially strong effect at the point of choice set construction. In other words, we often avoid doing the wrong thing because we are designed never to think of it. Finally, I will describe a rational model of the punishment of negligent acts, according to which punishment changes the value assigned to occurrent thoughts, thus preventing future acts of negligent harm.
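The choice-set construction idea above can be made concrete with a minimal sketch. All action names and cached values here are hypothetical illustrations, not data from the talk: the point is only that options with low cached value never enter deliberation at all.

```python
# Hypothetical cached (habitual) value estimates for candidate actions.
cached_value = {
    "ask politely": 0.9,
    "wait in line": 0.7,
    "offer to trade": 0.5,
    "shove past": -0.8,  # strongly negative cached value
}

def construct_choice_set(cached, k=2):
    """Only the k highest cached-value options are considered at all;
    deliberation then operates on this reduced set."""
    return sorted(cached, key=cached.get, reverse=True)[:k]

choice_set = construct_choice_set(cached_value)
print(choice_set)  # low-value options never come to mind
```

This is adaptive because expensive deliberation is spent only on options that were worthwhile in the past, and it yields the moral prediction in the abstract: norm-violating options with strongly negative cached value are filtered out before choice ever begins.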
The value of moral action
Classical models of antisocial behavior propose that violence arises from a failure of lateral prefrontal cortex (LPFC) to “put the brakes on” aggressive impulses originating in subcortical regions such as the amygdala and striatum. A newer, alternative model proposes that LPFC does not directly inhibit aggressive impulses but instead flexibly modulates the value of aggressive acts via corticostriatal circuits. I will present empirical evidence from a series of behavioral, pharmacological, and neuroimaging experiments that directly supports this alternative model. This mechanism implies that the moral value of actions is flexibly guided by neural representations of social norms: if norms change, so too do the values that guide actions. Supporting this view, re-framing decisions to harm others as serving a noble cause eliminated moral preferences. Implications for models of moral responsibility will be discussed.
Control, reliability, and morality
Moral responsibility presumably depends on a certain type of (causal) self-control. In this talk, I will provide experimental evidence that judgments of causal control depend on reliability, so that moral responsibility (or at least our judgments of it) depends on the extent to which people can reliably control themselves. I will then examine how this reframing in terms of reliability, rather than notions such as "could have done otherwise," leads to a different picture of moral responsibility: one that explains how we can exercise the kind of reliable self-control that responsibility requires.
Counterfactual thinking and perceived control
Our capacity to imagine alternative ways in which actual events could have occurred—i.e., episodic counterfactual thinking—seems to be influenced by a number of factors. One such factor is perceived control. It is said that people tend to mentally mutate events they perceive to be under their control, relative to events that seem uncontrollable. In this talk I want to re-evaluate this claim and raise some issues that may have consequences for research in philosophy and clinical psychology.
Weighing moral actions: how learning and responsibility shape moral trade-offs
How do humans make choices when there are competing pressures of fairness, harm, self-interest, and concern for others? While most of us are motivated to uphold social norms such as fairness and equity, we also seek to maximize our own self-benefit. I will describe a set of studies that explore these moral trade-offs, focusing on 1) characterizing how individuals learn to value certain moral behaviors over others, and 2) identifying how different contexts can shift perceptions of moral responsibility.
Addiction and the role of the self in self-control
Self-control is often conceived of in terms of the capacity to delay gratification: to prefer larger, later rewards over smaller, sooner rewards. For agents to be motivated by future rewards, they must have more than a third-personal appreciation of the value and consequences of the various options. Self-control requires the capacity for mental time travel and identification with the person at the other end of those travels. I suggest that the second of these is often lacking in people with addiction.
An opportunity cost model of self-control and effort
Exerting ‘effort’ or ‘self-control’ is experienced as aversive. From an evolutionary point of view, this is something of a mystery: aversive phenomenology is usually associated with fitness costs or threats, whereas exerting self-control seems to be associated with positive outcomes. A leading explanation among psychologists is that there is a resource – the fuel for willpower – that is depleted over time. As this resource is consumed, it becomes phenomenologically harder and harder to resist temptation. However, compelling as this explanation might be, a growing body of evidence gives very good reasons to doubt it. Instead, the phenomenology of effort and the associated sensation of fatigue might more productively be thought of in the context of motivation. The difficulty of persisting at effortful tasks may stem from computations surrounding the possibility of switching to another task that is likely to be psychologically rewarding. That is, engaging in effortful tasks carries the opportunity cost of not engaging in a more pleasurable task, and these costs lead to the aversive experiences of effort and fatigue.
The responsibility of bounded agents
We consider the ethical responsibility of computationally bounded agents. The immediate goal is to identify specific problems that arise in creating normative theories of ethical behavior for bounded agents characterized using theoretical constructs from two computational frameworks in artificial intelligence: reinforcement learning and bounded optimality. The long-term goal is a normative framework for ethical decision making among interacting agents that is parameterized by agents' computational architecture types: that is, a moral code for computationally rational agents. Such a general framework could be useful both for grappling with intuitions concerning the responsibility of humans with impaired cognitive control and, more generally, for understanding a moral landscape populated by biological and artificial agents of widely varying computational capacities.
Responsibility, diachronicity and the structure of the will
Frankfurt's highly influential view of free will proposed a hierarchically structured will and offered a compatibilist theory of freedom that depends on structural features of the will. His picture is intuitively attractive, but it fails to account for the dynamics of the will and for the importance of temporality in theories of responsibility. I propose that a dynamical theory of the will can provide insight into elements crucial for responsibility, such as self-control and self-formation.
What are you doing when you are controlling yourself?
What is self-control? To a scientist, this question amounts to: what is the mechanism that explains this phenomenon? This way of thinking about self-control goes along with thinking about human agents as objects of study. What makes it the case that some people exhibit self-control whereas others do not? Can we identify a breakdown in the mechanism that explains this difference?
I believe the deepest philosophical question about self-control is fundamentally different from these questions. It arises from the perspective of one who is undertaking to control herself. From this standpoint, the question is not, what is happening when I (attempt to) control myself? Rather it is, what am I taking responsibility for, insofar as I am taking responsibility for controlling myself? This question is not about the nature of self-control, regarded as a scientific phenomenon; it is a question about the nature of the task we all face, the job we find we must do. In my talk, I will try to show you why you probably don’t have a clear answer to the philosophical question, and why answering it is genuinely difficult.
Weighing the costs and benefits of control
Cognitive control is known to be effortful, yet little is known about how we allocate that effort. I will describe recent theoretical and empirical work aimed at understanding this process through the lens of value-based decision-making. On this view, individuals choose how much and what kind of control to allocate according to the predicted costs and benefits associated with increased effort; these combine to determine the Expected Value of Control (EVC). The EVC framework accounts for interactions between incentives, cognitive performance, and task choice observed in behavior. This work provides a path toward understanding why we may not always choose to make the effort demanded by our academic, work, or social environments, and how variability in the underlying circuitry may lead to maladaptive allocation of cognitive control in particular clinical populations.
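The cost-benefit logic above can be sketched in a few lines. This is a toy illustration under assumed functional forms (a saturating success probability and a linear effort cost), not the EVC model's actual equations; all parameter values are hypothetical.

```python
def expected_value_of_control(intensity, reward, cost_per_unit):
    """Toy EVC: expected payoff of allocating a given control intensity.
    Assumed: success probability saturates with intensity (intensity / (intensity + 1)),
    while effort cost grows linearly."""
    p_success = intensity / (intensity + 1)
    return p_success * reward - cost_per_unit * intensity

def best_intensity(reward, cost_per_unit, candidates):
    """Choose the candidate intensity maximizing the toy EVC."""
    return max(candidates, key=lambda i: expected_value_of_control(i, reward, cost_per_unit))

candidates = [0.0, 0.5, 1.0, 2.0, 4.0]
low = best_intensity(reward=2.0, cost_per_unit=1.0, candidates=candidates)
high = best_intensity(reward=10.0, cost_per_unit=1.0, candidates=candidates)
print(low, high)  # larger incentives justify allocating more control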
Error and limits of control
Humans have remarkably developed capacities for cognitive control, but this raises a puzzle: why do we so often fail to exert control over self-destructive desires? One widely accepted kind of explanation is that the person chooses to abandon control because of the aversiveness of their desires and the relief obtained by giving in (trichotillomania provides a good example). It is more controversial, however, whether there are genuine limits on control such that a person is, in some interesting sense, genuinely powerless to prevent giving in. In this talk, I propose one model that can make sense of such limits. Exerting cognitive control is, I argue, susceptible to rare errors. Even if the point probability of such errors is low, so long as the person faces sufficiently many recurrent desires, the cumulative probability of a self-control lapse rises inexorably towards certainty. I apply this “fallibility” model to the case of addiction.
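The arithmetic behind the "rises inexorably towards certainty" claim is worth making explicit. Assuming, for illustration, that each encounter with a desire fails independently with a small probability p (an idealization the talk's model may refine), the chance of at least one lapse over n encounters is 1 − (1 − p)^n:

```python
def cumulative_lapse_probability(p, n):
    """Probability of at least one lapse across n independent trials,
    each with per-trial lapse probability p."""
    return 1 - (1 - p) ** n

# Even a 1% per-trial error rate compounds rapidly over repeated encounters.
for n in [10, 100, 1000]:
    print(n, round(cumulative_lapse_probability(0.01, n), 3))
```

With p = 0.01, the lapse probability is already well over half by 100 encounters and effectively certain by 1000, which is why rare control errors plus recurrent desires can amount to practical powerlessness.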
Cognitive effort and moral action
I discuss recent models of cognitive control resource allocation that emphasize a role for the experience of cognitive effort. I suggest that the phenomenology of cognitive control is more complex than these models thus far allow, and that this fact should influence the experiments we perform and the models we construct. I then discuss how these issues are relevant to moral reflection on responsibility for action.