
VALENCE-PARTITIONED CONTROL OF NEURAL SYSTEMS IN HUMAN CHOICE AND AFFECT DYNAMICS



Item Details

title
VALENCE-PARTITIONED CONTROL OF NEURAL SYSTEMS IN HUMAN CHOICE AND AFFECT DYNAMICS
author
Sands, Lester Paul
abstract
How the human brain learns to predict the rewards and punishments that follow from our actions is central to how we make decisions, how we respond emotionally to our experiences, and how mental disorders such as depression and substance use disorders manifest in everyday life. The experiments in this dissertation address open questions regarding the representation of valence information in the human brain, the computational roles of dopamine neurons in communicating errors in predicted rewards and punishments, and the influence of adaptive learning processes on dynamic affective experiences in humans. Specifically, we directly compare two computational theories of human learning: the traditional temporal difference (TD) learning framework of reinforcement learning (RL) theory (TDRL) and an alternative framework called valence-partitioned reinforcement learning (VPRL). The comparisons between TDRL and VPRL range from the computational to the affective: in Chapters 2 and 3, we fit both TDRL and VPRL models to human choice behavior during a probabilistic reward and punishment learning task and assess the influence of model-derived prediction error signals on self-reported subjective feelings. We demonstrate that VPRL consistently provides a better account of human choice behavior and produces latent learning signals that predict human affective responses. We further relate these computational, behavioral, and affective insights to underlying neural activity using fMRI (Chapter 2) and human voltammetry recordings of sub-second dopamine fluctuations (Chapter 3). We demonstrate that phasic dopamine transients in the human caudate encode VPRL-based reward and punishment prediction error signals, and that a distributed network of cortical, striatal, and limbic brain regions processes these VPRL learning signals and integrates them to track subjective feelings.
In all, we provide a new theoretical account of human valence-processing during reinforcement learning and validate this account empirically, laying the foundation for future experiments in computational psychiatry.
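The core contrast the abstract describes can be reduced to a minimal sketch: TDRL maintains a single value estimate updated by one signed prediction error, whereas VPRL maintains independent appetitive (P) and aversive (N) systems, each updated only by outcomes of its own valence. The variable names, single-state (bandit-style) simplification, and exact update equations below are illustrative assumptions, not the dissertation's fitted models.

```python
def tdrl_update(q, outcome, alpha=0.1):
    """TD-style update: one value estimate, one signed prediction error."""
    delta = outcome - q              # signed reward prediction error
    return q + alpha * delta, delta

def vprl_update(q_p, q_n, outcome, alpha_p=0.1, alpha_n=0.1):
    """Valence-partitioned update: separate appetitive (P) and aversive (N)
    systems, each receiving only the non-negative part of its own valence."""
    delta_p = max(outcome, 0.0) - q_p    # positive-valence prediction error
    delta_n = max(-outcome, 0.0) - q_n   # negative-valence prediction error
    q_p += alpha_p * delta_p
    q_n += alpha_n * delta_n
    return q_p, q_n, delta_p, delta_n    # net value is q_p - q_n

# A punishing outcome (-1) drives learning only in the N system under VPRL,
# whereas TDRL folds it into the same signed error as rewards.
q, delta = tdrl_update(0.0, -1.0)
q_p, q_n, dp, dn = vprl_update(0.0, 0.0, -1.0)
```

Under this sketch, the punishment leaves the appetitive system untouched (`q_p` stays 0) while the aversive system's value grows, which is the sense in which the two frameworks make separable predictions about reward and punishment learning signals.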
subject
Affective decision-making
Computational modeling
Consciousness
Neuroimaging
Reinforcement learning
Voltammetry
contributor
Kishida, Kenneth T (committee chair)
Waugh, Christian (committee member)
Godwin, Dwayne (committee member)
Laurienti, Paul (committee member)
Montague, Read (committee member)
date
2022-07-11T19:17:43Z (accessioned)
2023-05-23T08:30:12Z (available)
2022 (issued)
degree
Physiology and Pharmacology (discipline)
embargo
2023-05-23 (terms)
identifier
http://hdl.handle.net/10339/101026 (uri)
language
en (iso)
publisher
Wake Forest University
type
Dissertation
