Research

Cognitive processes in instrumental learning

[w/ Anne Collins] My graduate work led me to consider how more complex skills involve an interaction between executive function and low-level processes; my ongoing postdoc research is focused on further mapping and modeling this interaction, exploring its effects on decision dynamics, and leveraging the multiple-system framework to address fundamental open questions about goal-driven learning. Everyday tasks -- like making coffee -- often involve keeping a set of goals in mind (e.g., grinding the coffee, measuring it into the filter), performing highly stereotyped movements (e.g., pouring water, stirring), and monitoring changes in the environment (e.g., Is that my phone ringing? Is this milk still fresh?). All of these components fall under the umbrella of the "skill" of making your morning cup. How are complex skills like this learned? I apply cutting-edge computational and neurophysiological techniques to investigate how high-level cognitive representations interact with low-level motor and reward systems during learning.

Cognitive processes in sensorimotor learning

[w/ Jordan Taylor] One of my primary research interests involves decomposing learning curves into distinct, dissociable learning systems. Traditionally, motor learning has been studied mainly as an implicit process, in which motor errors iteratively calibrate performance in a slow, gradual manner. Recent work has highlighted other mechanisms that support sensorimotor learning, including reinforcement learning and the use of high-level cognitive strategies and planning. With respect to the latter, converging lines of evidence from behavioral, computational, and neuropsychological studies suggest that rapidly formed cognitive action selection policies play a major role in motor learning (especially in its earliest phases) and affect how learning generalizes to novel situations.
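
To make the decomposition concrete, here is a minimal simulation sketch (in Python) of one common dual-process account of visuomotor adaptation, in which overall compensation is the sum of a slow, error-driven implicit state and a rapidly deployed explicit aiming strategy. The parameter values (retention, learning rate, strategy onset) are hypothetical and chosen only for illustration, not taken from any particular study.

    import numpy as np

    # Illustrative dual-process model of adaptation to a visuomotor rotation:
    # an implicit state updated gradually by residual error, plus an explicit
    # re-aiming strategy that appears abruptly. All parameters are hypothetical.
    def simulate_dual_process(rotation=45.0, n_trials=200, retention=0.98,
                              learning_rate=0.05, strategy_onset=10):
        implicit = 0.0                      # slowly adapting internal state (deg)
        hand = np.zeros(n_trials)           # total compensation on each trial
        for t in range(n_trials):
            # Explicit re-aiming engages once the perturbation has been noticed.
            explicit = 0.8 * rotation if t >= strategy_onset else 0.0
            hand[t] = implicit + explicit
            error = rotation - hand[t]      # residual error drives implicit learning
            implicit = retention * implicit + learning_rate * error
        return hand

    curve = simulate_dual_process()
    print(curve[:3], curve[-3:])  # gradual drift early, abrupt strategic jump, slow refinement
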

Action execution vs. action selection in reinforcement learning computations

[w/ Rich Ivry, Jordan Taylor, & Yael Niv] The characterization of reward prediction errors in the mesostriatal dopamine system, and the modeling of these neural signals with simple machine learning algorithms, have been notable successes in modern systems neuroscience. Research in this field has shown how decision making (action selection, or what to choose) is guided by value representations that are dynamically updated by reward prediction errors. However, it is not clear how action execution (how to implement a choice) influences decision making. Experiments that disentangle action execution from action selection in decision making tasks suggest that reward prediction errors are indeed sensitive to this distinction, and appear to track the underlying cause (selection vs. execution) of unanticipated choice outcomes.
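
As a toy illustration of this idea, the Python sketch below shows a standard delta-rule value update in which the prediction error is down-weighted when an unrewarded outcome is attributed to an execution failure (e.g., a motor slip) rather than to the choice itself. The function, variable names, and gating parameter are hypothetical, meant only to convey the credit-assignment logic rather than any specific published model.

    # Toy delta-rule update with execution-aware credit assignment.
    # All names and parameter values are hypothetical.
    def update_value(value, reward, alpha=0.1, execution_error=False, gate=0.2):
        prediction_error = reward - value
        # A miss caused by one's own execution says less about the option's worth,
        # so its prediction error is down-weighted before updating the value.
        weight = gate if (execution_error and reward == 0) else 1.0
        return value + alpha * weight * prediction_error

    v = 0.5
    print(update_value(v, reward=0, execution_error=True))   # 0.49: small devaluation
    print(update_value(v, reward=0, execution_error=False))  # 0.45: larger devaluation
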

Working memory, and "learning what to learn" in the motor domain

[w/ Jordan Taylor] Cognitive strategies in motor learning can take a variety of forms: In some cases, it is optimal to learn the structure of the environment, allowing one to infer how to behave in novel situations. In other cases, one may want to learn a simple set of stimulus-response contingencies, making learning faster and more precise and laying the foundation of a habit. Simple sensorimotor learning tasks can dissociate these learning styles and show that they reflect different instantiations of working memory.
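
The contrast can be sketched with two toy learners in Python: one that infers a single task parameter and therefore generalizes to untrained targets, and one that memorizes target-specific stimulus-response mappings and does not. The classes and numbers below are hypothetical illustrations, not implementations of any particular published model.

    # Two toy solutions to the same visuomotor task. The rule learner infers the
    # structure (a single rotation) and transfers it to novel targets; the
    # stimulus-response learner stores item-specific mappings, like a small
    # working-memory buffer, and shows no transfer. Values are hypothetical.
    class RuleLearner:
        def __init__(self):
            self.rotation = 0.0                    # single inferred parameter (deg)
        def train(self, target, correct_reach):
            self.rotation = correct_reach - target # infer the underlying rule
        def respond(self, target):
            return target + self.rotation          # applies to any target

    class SRLearner:
        def __init__(self):
            self.table = {}                        # item-specific associations
        def train(self, target, correct_reach):
            self.table[target] = correct_reach     # memorize this mapping only
        def respond(self, target):
            return self.table.get(target, target)  # no transfer to novel targets

    rule, sr = RuleLearner(), SRLearner()
    for t in (0, 45, 90):                          # train on three targets
        rule.train(t, t + 30)
        sr.train(t, t + 30)
    print(rule.respond(135), sr.respond(135))      # 165.0 vs 135: only the rule generalizes
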

