I am interested in helping people improve their decisions using techniques from reinforcement learning, such as inverse reinforcement learning and reward shaping.
Prior to joining the lab, I worked as a research specialist in Yael Niv's lab at Princeton University. Before that, I worked as a research intern in Yossi Yovel's lab at Tel Aviv University and as an undergraduate research assistant in the Gabrieli Lab at MIT.
CogSci 2020 Poster: Measuring the costs of planning
In this video, I provide a brief overview of my thesis work for the project "Measuring the Cost of Planning with Bayesian Inverse Reinforcement Learning".
Goal-setting can be a powerful strategy for inciting action and increasing productivity (Locke & Latham, 1990), and meaningful goals can be a sustainable source of happiness (Niemiec, Ryan, & Deci, 2009). But this o...
In this project, we investigate to what extent seemingly irrational planning decisions are a consequence of how people individually experience the costs and benefits of deliberate decision-making. We start from the empirically-grounded assumptio...
What should I work on first? What can wait until later? Which projects should I prioritize and which tasks are not worth my time? These are challenging questions that many people face every day. People’s intuitive strategy is to prioritize their immediate experience over the long-term consequences. This leads to procrastination and the neglect of important long-term projects in favor of seemingly urgent tasks that are less important. Optimal gamification strives to help people overcome these problems by incentivizing each task by a number of points that communicates how valuable it is in the long-run. Unfortunately, computing the optimal number of points with standard dynamic programming methods quickly becomes intractable as the number of a person’s projects and the number of tasks required by each project increase. Here, we introduce and evaluate a scalable method for identifying which tasks are most important in the long run and incentivizing each task according to its long-term value. Our method makes it possible to create to-do list gamification apps that can handle the size and complexity of people’s to-do lists in the real world.
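As a toy illustration of the idea, the long-term value of each task can be computed by dynamic programming on a miniature to-do list MDP and converted into points via potential-based shaping. Everything below (the tasks, effort costs, project rewards, and discount factor) is invented for illustration and is not the model or code from the paper:

```python
from functools import lru_cache

# Hypothetical toy to-do list (assumed, for illustration): each task has an
# immediate effort cost and belongs to a project; a project pays off only
# once all of its tasks are completed.
TASKS = {  # task -> (immediate effort cost, project it belongs to)
    "draft_report": (-1.0, "report"),
    "edit_report":  (-1.0, "report"),
    "reply_email":  (-0.1, "email"),
}
PROJECT_REWARD = {"report": 10.0, "email": 0.3}
GAMMA = 0.9  # discount factor (assumed)

def reward(done, task):
    """Immediate reward for doing `task` given the set of finished tasks."""
    cost, proj = TASKS[task]
    proj_tasks = {t for t, (_, p) in TASKS.items() if p == proj}
    r = cost
    # The delayed project reward arrives with the task that completes it.
    if proj_tasks <= done | {task} and not proj_tasks <= done:
        r += PROJECT_REWARD[proj]
    return r

@lru_cache(maxsize=None)
def V(done):
    """Optimal value of a to-do state `done` (a frozenset of finished tasks)."""
    remaining = set(TASKS) - done
    if not remaining:
        return 0.0
    return max(reward(done, t) + GAMMA * V(done | frozenset([t]))
               for t in remaining)

def points(done, task):
    """Shaped point value of a task: immediate reward plus the
    potential-based shaping term gamma * V(s') - V(s)."""
    return reward(done, task) + GAMMA * V(done | frozenset([task])) - V(done)

start = frozenset()
for t in TASKS:
    print(t, round(points(start, t), 3))
```

Under this shaping, the best next task scores highest (zero shaped advantage) while tempting but myopic tasks like `reply_email` score negative; an app could rescale these points for display. Note that the memoized recursion enumerates up to 2^n subsets of n tasks, which is exactly the exponential blow-up of standard dynamic programming that a scalable method must avoid.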
Which information is worth considering depends on how much effort it would take to acquire and process it. From this perspective, people's tendency to neglect the long-term consequences of their actions (present bias) might reflect that looking further into the future becomes increasingly effortful. In this work, we introduce and validate the use of Bayesian Inverse Reinforcement Learning (BIRL) for measuring individual differences in the subjective costs of planning. We extend the resource-rational model of human planning introduced by Callaway, Lieder, et al. (2018) by parameterizing the cost of planning. Using BIRL, we show that an increased subjective cost of considering future outcomes may be associated with both present bias and acting without planning. Our results highlight testing the causal effects of the cost of planning on both present bias and mental effort avoidance as a promising direction for future work.
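The core BIRL move can be sketched in a drastically simplified stand-in model (not the metalevel MDP from the paper): assume a person chooses a planning depth, gains a known benefit from looking further ahead, pays a subjective cost per step of lookahead, and chooses softmax-optimally; we then invert this choice model to get a posterior over the cost parameter. The benefit schedule, inverse temperature, and observations below are all invented for illustration:

```python
import numpy as np

# Candidate planning depths and an assumed benefit of planning that has
# diminishing returns (hypothetical numbers, not from the paper).
depths = np.arange(5)
benefit = np.array([0.0, 2.0, 3.2, 3.9, 4.2])
beta = 2.0  # softmax inverse temperature (assumed known)

def depth_likelihood(c):
    """P(choosing each depth | subjective cost-per-step c), softmax choice."""
    u = benefit - c * depths
    p = np.exp(beta * (u - u.max()))  # subtract max for numerical stability
    return p / p.sum()

# Uniform grid prior over the subjective planning cost.
grid = np.linspace(0.0, 3.0, 301)
log_post = np.zeros_like(grid)

# Illustrative observations: mostly shallow planning (simulated, not real data).
observed = [1, 1, 0, 1, 0, 0, 1]
for d in observed:
    log_post += np.log([depth_likelihood(c)[d] for c in grid])

post = np.exp(log_post - log_post.max())
post /= post.sum()  # normalized posterior over the cost parameter

c_map = grid[np.argmax(post)]
print(f"MAP subjective planning cost: {c_map:.2f}")
```

Shallow observed depths pull the posterior toward high planning costs, which is the sense in which inferred cost parameters can capture individual differences such as present bias; the paper's actual inference runs over a resource-rational planning model rather than this toy depth-choice model.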
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.