Luke Coddington
Recent success in training artificial agents and robots derives from a combination of direct learning of behavioral policies and indirect learning via value functions. Policy learning and value learning employ distinct algorithms that optimize behavioral performance and reward prediction, respectively. In animals, behavioral learning and the role of mesolimbic dopamine signaling have been extensively evaluated with respect to reward prediction; however, to date there has been little consideration of how direct policy learning might inform our understanding. Here we used a comprehensive dataset of orofacial and body movements to understand how behavioral policies evolve as naive, head-restrained mice learned a trace conditioning paradigm. Individual differences in initial dopaminergic reward responses correlated with the emergence of a learned behavioral policy, but not with the emergence of putative value encoding for a predictive cue. Likewise, physiologically calibrated manipulations of mesolimbic dopamine produced multiple effects inconsistent with value learning but predicted by a neural-network-based model that used dopamine signals to set an adaptive rate, not an error signal, for behavioral policy learning. This work provides strong evidence that phasic dopamine activity can regulate direct learning of behavioral policies, expanding the explanatory power of reinforcement learning models for animal learning.
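The abstract contrasts two computational roles for phasic dopamine: an error signal that trains a value function (the classic reward-prediction-error view) versus a gain that sets the step size for direct policy learning. The toy sketch below illustrates that distinction on a two-armed bandit. It is not the authors' model (their code is in the RNN_learnDA repository linked below); the update rules, the reward probabilities, and the dopamine-like gain dynamics here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-action bandit (illustrative): action 1 pays off with p=0.8, action 0 with p=0.2.
p_reward = np.array([0.2, 0.8])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothesis A: dopamine as reward-prediction error (value learning).
# The error delta both updates a value estimate V and scales the policy update.
def run_error_signal(n_trials=2000, alpha=0.1):
    theta = np.zeros(2)   # policy logits
    V = 0.0               # value estimate
    for _ in range(n_trials):
        p = softmax(theta)
        a = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[a])
        delta = r - V                      # dopamine-like prediction error
        V += alpha * delta
        grad = -p
        grad[a] += 1.0                     # d log pi(a) / d theta
        theta += alpha * delta * grad      # the error term drives learning
    return theta

# Hypothesis B: dopamine as adaptive learning rate for policy learning.
# The update direction comes from reward alone; dopamine only gates step size.
def run_adaptive_rate(n_trials=2000, base_lr=0.02):
    theta = np.zeros(2)
    gain = 1.0                             # dopamine-like gain (hypothetical)
    for _ in range(n_trials):
        p = softmax(theta)
        a = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[a])
        grad = -p
        grad[a] += 1.0
        theta += base_lr * gain * r * grad # dopamine sets the rate, not the error
        gain = 0.99 * gain + 0.01 * 2.0 * r  # gain tracks recent reward (assumption)
    return theta

theta_A = run_error_signal()
theta_B = run_adaptive_rate()
print("error-signal logits:", theta_A)
print("adaptive-rate logits:", theta_B)
```

Both variants come to prefer the richer action, which is the point the paper exploits: behavioral acquisition alone cannot distinguish the hypotheses, so calibrated dopamine manipulations are needed to separate them.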
Mesolimbic dopamine adapts the rate of learning from action - Nature
https://twitter.com/jtdudman/status/1618637528072073221?s=20&t=zS4_JB9YDds9MtbgAPkuHw
Neuromodulation in the brain | WWNeuRise
https://github.com/dudmanj/RNN_learnDA
In Vivo Optogenetics with Stimulus Calibration