2018


Discovering and Teaching Optimal Planning Strategies

Lieder, F., Callaway, F., Krueger, P. M., Das, P., Griffiths, T. L., Gul, S.

In The 14th biannual conference of the German Society for Cognitive Science, GK, September 2018, Falk Lieder and Frederick Callaway contributed equally to this publication. (inproceedings)

Abstract
How should we think and decide, and how can we learn to make better decisions? To address these questions we formalize the discovery of cognitive strategies as a metacognitive reinforcement learning problem. This formulation leads to a computational method for deriving optimal cognitive strategies and a feedback mechanism for accelerating the process by which people learn how to make better decisions. As a proof of concept, we apply our approach to develop an intelligent system that teaches people optimal planning strategies. Our training program combines a novel process-tracing paradigm that makes people’s latent planning strategies observable with an intelligent system that gives people feedback on how their planning strategy could be improved. The pedagogy of our intelligent tutor is based on the theory that people discover their cognitive strategies through metacognitive reinforcement learning. Concretely, the tutor’s feedback is designed to maximally accelerate people’s metacognitive reinforcement learning towards the optimal cognitive strategy. A series of four experiments confirmed that training with the cognitive tutor significantly improved people’s decision-making competency: Experiment 1 demonstrated that the cognitive tutor’s feedback accelerates participants’ metacognitive learning. Experiment 2 found that this training effect transfers to more difficult planning problems in more complex environments. Experiment 3 found that these transfer effects are retained for at least 24 hours after the training. Finally, Experiment 4 found that practicing with the cognitive tutor conveys additional benefits above and beyond verbal description of the optimal planning strategy. The results suggest that promoting metacognitive reinforcement learning with optimal feedback is a promising approach to improving the human mind.

link (url) Project Page [BibTex]



Discovering Rational Heuristics for Risky Choice

Gul, S., Krueger, P. M., Callaway, F., Griffiths, T. L., Lieder, F.

The 14th biannual conference of the German Society for Cognitive Science, GK, September 2018 (conference)

Abstract
How should we think and decide to make the best possible use of our precious time and limited cognitive resources? And how do people’s cognitive strategies compare to this ideal? We study these questions in the domain of multi-alternative risky choice using the methodology of resource-rational analysis. To answer the first question, we leverage a new meta-level reinforcement learning algorithm to derive optimal heuristics for four different risky choice environments. We find that our method rediscovers two fast-and-frugal heuristics that people are known to use, namely Take-The-Best and choosing randomly, as resource-rational strategies for specific environments. Our method also discovers a novel heuristic that combines elements of Take-The-Best and Satisficing. To answer the second question, we use the Mouselab paradigm to measure how people’s decision strategies compare to the predictions of our resource-rational analysis. We found that our resource-rational analysis correctly predicted which strategies people use and under which conditions they use them. While people generally make rational use of their limited resources, their strategy choices do not always fully exploit the structure of each decision problem. Overall, people’s decision operations were about 88% as resource-rational as they could possibly be. A formal model comparison confirmed that our resource-rational model explained people’s decision strategies significantly better than the Directed Cognition model of Gabaix et al. (2006). Our study is a proof-of-concept that optimal cognitive strategies can be automatically derived from the principle of resource-rationality. Our results suggest that resource-rational analysis is a promising approach for uncovering people’s cognitive strategies and revisiting the debate about human rationality with a more realistic normative standard.
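
For readers less familiar with the fast-and-frugal heuristics named above, the sketch below spells out Take-The-Best in a few lines of Python. The dictionary representation of the gambles and the hand-chosen cue order are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the Take-The-Best heuristic (not the paper's code):
# inspect cues in order of validity and decide as soon as one cue discriminates.
import random

def take_the_best(options, cue_order):
    """Return the index of the option favored by the first discriminating cue."""
    for cue in cue_order:                         # cues ordered from most to least valid
        values = [opt[cue] for opt in options]
        winners = [i for i, v in enumerate(values) if v == max(values)]
        if len(winners) == 1:                     # this cue discriminates: stop searching
            return winners[0]
    return random.randrange(len(options))         # no cue discriminates: choose randomly

# Example: two gambles described by two hypothetical cues
gambles = [{"high_payoff": 10, "low_payoff": 0},
           {"high_payoff": 10, "low_payoff": 2}]
print(take_the_best(gambles, ["high_payoff", "low_payoff"]))  # -> 1
```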

link (url) Project Page [BibTex]



Learning to Select Computations

Callaway, F., Gul, S., Krueger, P. M., Griffiths, T. L., Lieder, F.

In Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference, August 2018, Frederick Callaway and Sayan Gul and Falk Lieder contributed equally to this publication. (inproceedings)

Abstract
The efficient use of limited computational resources is an essential ingredient of intelligence. Selecting computations optimally according to rational metareasoning would achieve this, but this is computationally intractable. Inspired by psychology and neuroscience, we propose the first concrete and domain-general learning algorithm for approximating the optimal selection of computations: Bayesian metalevel policy search (BMPS). We derive this general, sample-efficient search algorithm for a computation-selecting metalevel policy based on the insight that the value of information lies between the myopic value of information and the value of perfect information. We evaluate BMPS on three increasingly difficult metareasoning problems: when to terminate computation, how to allocate computation between competing options, and planning. Across all three domains, BMPS achieved near-optimal performance and compared favorably to previously proposed metareasoning heuristics. Finally, we demonstrate the practical utility of BMPS in an emergency management scenario, even accounting for the overhead of metareasoning.
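
To make the key idea more concrete, the sketch below approximates the value of a computation as a weighted mixture of two value-of-information features, in the spirit of the insight described above. The Gaussian belief model, the particular features, and the hand-set weights are simplifying assumptions; BMPS learns the feature weights rather than fixing them.

```python
# Minimal sketch of a VOI-feature approximation to the value of computation
# (assumptions: Gaussian beliefs over option values, hand-set mixture weights).
import numpy as np

def voi_option(mu, sigma, i, n=10_000):
    """Value of resolving the uncertainty about option i alone."""
    best_other = np.max(np.delete(mu, i))
    draws = np.random.normal(mu[i], sigma[i], n)           # possible true values of option i
    return np.maximum(draws, best_other).mean() - mu.max()

def vpi(mu, sigma, n=10_000):
    """Value of perfect information about all options."""
    draws = np.random.normal(mu, sigma, size=(n, len(mu)))
    return draws.max(axis=1).mean() - mu.max()

def value_of_computation(mu, sigma, i, w=(0.5, 0.5), cost=0.01):
    """Weighted mixture of the two VOI features minus the cost of computing."""
    return w[0] * voi_option(mu, sigma, i) + w[1] * vpi(mu, sigma) - cost

# Example: is thinking more about option 0 worth its cost?
mu, sigma = np.array([1.0, 1.2]), np.array([0.8, 0.1])
print(value_of_computation(mu, sigma, i=0))
```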

link (url) Project Page [BibTex]



A resource-rational analysis of human planning

Callaway, F., Lieder, F., Das, P., Gul, S., Krueger, P. M., Griffiths, T. L.

In Proceedings of the 40th Annual Conference of the Cognitive Science Society, May 2018, Frederick Callaway and Falk Lieder contributed equally to this publication. (inproceedings)

Abstract
People's cognitive strategies are jointly shaped by function and computational constraints. Resource-rational analysis leverages these constraints to derive rational models of people's cognitive strategies from the assumption that people make rational use of limited cognitive resources. We present a resource-rational analysis of planning and evaluate its predictions in a newly developed process tracing paradigm. In Experiment 1, we find that a resource-rational planning strategy predicts the process by which people plan more accurately than previous models of planning. Furthermore, in Experiment 2, we find that it also captures how people's planning strategies adapt to the structure of the environment. In addition, our approach allows us to quantify for the first time how close people's planning strategies are to being resource-rational and to characterize in which ways they conform to and deviate from optimal planning.

DOI [BibTex]



Rational metareasoning and the plasticity of cognitive control

Lieder, F., Shenhav, A., Musslick, S., Griffiths, T. L.

PLOS Computational Biology, 14(4):e1006043, Public Library of Science, April 2018 (article)

Abstract
The human brain has the impressive capacity to adapt how it processes information to high-level goals. While it is known that these cognitive control skills are malleable and can be improved through training, the underlying plasticity mechanisms are not well understood. Here, we develop and evaluate a model of how people learn when to exert cognitive control, which controlled process to use, and how much effort to exert. We derive this model from a general theory according to which the function of cognitive control is to select and configure neural pathways so as to make optimal use of finite time and limited computational resources. The central idea of our Learned Value of Control model is that people use reinforcement learning to predict the value of candidate control signals of different types and intensities based on stimulus features. This model correctly predicts the learning and transfer effects underlying the adaptive control-demanding behavior observed in an experiment on visual attention and four experiments on interference control in Stroop and Flanker paradigms. Moreover, our model explained these findings significantly better than an associative learning model and a Win-Stay Lose-Shift model. Our findings elucidate how learning and experience might shape people’s ability and propensity to adaptively control their minds and behavior. We conclude by predicting under which circumstances these learning mechanisms might lead to self-control failure.
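
The sketch below illustrates the general flavor of such a model: predicting the payoff of candidate control signals from stimulus features with a simple reinforcement learning rule, and choosing the signal whose predicted benefit justifies its effort cost. The linear predictor, delta-rule update, and linear effort cost are illustrative assumptions, not the published specification of the Learned Value of Control model.

```python
# Illustrative sketch (assumptions: linear value prediction, linear effort cost).
import numpy as np

class ControlValueLearner:
    def __init__(self, n_features, intensities=(0.0, 0.5, 1.0), effort_cost=0.2, lr=0.1):
        self.intensities = intensities
        self.effort_cost = effort_cost
        self.lr = lr
        self.w = {c: np.zeros(n_features) for c in intensities}  # one predictor per signal

    def choose(self, features):
        # expected value of control = predicted payoff minus the cost of exerting the signal
        evc = {c: self.w[c] @ features - self.effort_cost * c for c in self.intensities}
        return max(evc, key=evc.get)

    def update(self, chosen, features, payoff):
        # delta-rule update of the chosen signal's predicted payoff
        error = payoff - self.w[chosen] @ features
        self.w[chosen] += self.lr * error * features

# Example: one trial, encoded by a single hypothetical stimulus feature
model = ControlValueLearner(n_features=1)
x = np.array([1.0])
c = model.choose(x)
model.update(c, x, payoff=1.0)
```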

Rational metareasoning and the plasticity of cognitive control DOI Project Page [BibTex]



Over-Representation of Extreme Events in Decision Making Reflects Rational Use of Cognitive Resources

Lieder, F., Griffiths, T. L., Hsu, M.

Psychological Review, 125(1):1-32, January 2018 (article)

Abstract
People’s decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision-makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time decision-makers should over-represent the most important potential consequences relative to less important but potentially more probable outcomes. To evaluate this account we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people’s availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall.
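
The estimator at the heart of the model can be written in a few lines, as sketched below. The choice of reference point and the self-normalized importance weighting are simplified here for illustration; the article gives the exact formulation.

```python
# Illustrative sketch of utility-weighted sampling: simulate extreme outcomes more often,
# then correct the resulting bias with importance weights (simplified from the article).
import numpy as np

def utility_weighted_estimate(p, u, n_sim=5, seed=0):
    """Estimate the expected utility of an action from a few biased simulations."""
    rng = np.random.default_rng(seed)
    p, u = np.asarray(p, float), np.asarray(u, float)
    u_ref = u.mean()                            # reference point (a simplifying assumption)
    q = p * np.abs(u - u_ref) + 1e-12           # over-represent extreme outcomes
    q /= q.sum()
    idx = rng.choice(len(u), size=n_sim, p=q)   # a handful of simulated outcomes
    w = p[idx] / q[idx]                         # importance weights correct the bias
    return (w * u[idx]).sum() / w.sum()         # self-normalized estimate

# Example: a gamble with a rare but extreme loss
print(utility_weighted_estimate(p=[0.95, 0.05], u=[1.0, -50.0]))  # noisy estimate of -1.55
```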

DOI [BibTex]



Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence

(Glushko Prize 2020)

Lieder, F.

University of California, Berkeley, 2018 (phdthesis)

Abstract
Bad decisions can have devastating consequences: There is a vast body of literature claiming that human judgment and decision-making are riddled with numerous systematic violations of the rules of logic, probability theory, and expected utility theory. The discovery of these cognitive biases in the 1970s (Tversky & Kahneman, 1974) made people question the concept of Homo sapiens as the rational animal, profoundly shaking the foundations of economics and rational models in the cognitive, neural, and social sciences. Four decades later, these disciplines still lack a rigorous theoretical foundation for explaining and remedying people’s cognitive biases. To solve this problem, my dissertation offers a mathematically precise theory of bounded rationality and demonstrates how it can be leveraged to elucidate the cognitive mechanisms of judgment and decision-making (Part 1) and to help people make better decisions (Part 2).

Précis of Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence DOI [BibTex]


The Computational Challenges of Pursuing Multiple Goals: Network Structure of Goal Systems Predicts Human Performance

Reichman, D., Lieder, F., Bourgin, D. D., Talmon, N., Griffiths, T. L.

PsyArXiv, 2018 (article)

Abstract
Extant psychological theories attribute people’s failure to achieve their goals primarily to failures of self-control, insufficient motivation, or lacking skills. We develop a complementary theory specifying conditions under which the computational complexity of making the right decisions becomes prohibitive of goal achievement regardless of skill or motivation. We support our theory by predicting human performance from factors determining the computational complexity of selecting the optimal set of means for goal achievement. Following previous theories of goal pursuit, we express the relationship between goals and means as a bipartite graph where edges between means and goals indicate which means can be used to achieve which goals. This allows us to map two computational challenges that arise in goal achievement onto two classic combinatorial optimization problems: Set Cover and Maximum Coverage. While these problems are believed to be computationally intractable on general networks, their solutions can nevertheless be efficiently approximated when the structure of the network resembles a tree. Thus, our initial prediction was that people should perform better with goal systems that are more tree-like. In addition, our theory predicted that people’s performance at selecting means should be a U-shaped function of the average number of goals each means is relevant to and the average number of means through which each goal could be accomplished. Here we report on six behavioral experiments which confirmed these predictions. Our results suggest that combinatorial parameters that are instrumental to algorithm design can also be useful for understanding when and why people struggle to pursue their goals effectively.
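
As an illustration of the Set Cover formulation described above, the sketch below selects means for a small goal system with the standard greedy approximation. The dictionary encoding of the bipartite goal system and the example goals are assumptions made for the illustration.

```python
# Greedy approximation for Set Cover on a bipartite goal system:
# repeatedly pick the means that achieves the most still-unachieved goals.
def greedy_means_selection(means_to_goals, goals):
    uncovered = set(goals)
    chosen = []
    while uncovered:
        best = max(means_to_goals, key=lambda m: len(means_to_goals[m] & uncovered))
        gained = means_to_goals[best] & uncovered
        if not gained:                          # remaining goals cannot be achieved
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Example: three means, four goals
means = {"m1": {"g1", "g2"}, "m2": {"g2", "g3"}, "m3": {"g3", "g4"}}
print(greedy_means_selection(means, {"g1", "g2", "g3", "g4"}))  # (['m1', 'm3'], set())
```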

DOI [BibTex]

2016


Helping people make better decisions using optimal gamification

Lieder, F., Griffiths, T. L.

In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016 (inproceedings)

Abstract
Game elements like points and levels are a popular tool to nudge and engage students and customers. Yet, no theory can tell us which incentive structures work and how to design them. Here we connect the practice of gamification to the theory of reward shaping in reinforcement learning. We leverage this connection to develop a method for designing effective incentive structures and delineating when gamification will succeed from when it will fail. We evaluate our method in two behavioral experiments. The results of the first experiment demonstrate that incentive structures designed by our method help people make better, less short-sighted decisions and avoid the pitfalls of less principled approaches. The results of the second experiment illustrate that such incentive structures can be effectively implemented using game elements like points and badges. These results suggest that our method provides a principled way to leverage gamification to help people make better decisions.
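
The link to reward shaping can be made concrete with potential-based shaping (Ng, Harada, & Russell, 1999), sketched below: awarding "points" equal to the change in a potential function leaves the optimal policy unchanged. Using a value function as the potential is the standard construction; the toy state values in the example are hypothetical.

```python
# Potential-based reward shaping: the original reward plus the change in potential.
def shaped_reward(reward, V, s, s_next, gamma=1.0):
    """Reward plus the incentive 'points' for moving from state s to s_next."""
    return reward + gamma * V[s_next] - V[s]

# Example with a hypothetical three-state task whose (assumed) optimal values are known
V = {"start": 2.0, "middle": 3.0, "goal": 0.0}
print(shaped_reward(-1.0, V, "start", "middle"))  # -1 + 3 - 2 = 0.0
```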

link (url) Project Page [BibTex]
