

2020


A Gamified App that Helps People Overcome Self-Limiting Beliefs by Promoting Metacognition

Amo, V., Lieder, F.

SIG 8 Meets SIG 16, September 2020 (conference) Accepted

Abstract
Previous research has shown that approaching learning with a growth mindset is key for maintaining motivation and overcoming setbacks. Mindsets are systems of beliefs that people hold to be true. They influence a person's attitudes, thoughts, and emotions when they learn something new or encounter challenges. In clinical psychology, metareasoning (reflecting on one's mental processes) and meta-awareness (recognizing thoughts as mental events instead of equating them to reality) have proven effective for overcoming maladaptive thinking styles. Hence, they are potentially an effective method for overcoming self-limiting beliefs in other domains as well. However, the potential of integrating assisted metacognition into mindset interventions has not been explored yet. Here, we propose that guiding and training people on how to leverage metareasoning and meta-awareness for overcoming self-limiting beliefs can significantly enhance the effectiveness of mindset interventions. To test this hypothesis, we develop a gamified mobile application that guides and trains people to use metacognitive strategies based on Cognitive Restructuring (CR) and Acceptance and Commitment Therapy (ACT) techniques. The application helps users to identify and overcome self-limiting beliefs by working with aversive emotions when they are triggered by fixed mindsets in real-life situations. Our app aims to help people sustain their motivation to learn when they face inner obstacles (e.g., anxiety, frustration, and demotivation). We expect the application to be an effective tool for helping people better understand and develop the metacognitive skills of emotion regulation and self-regulation that are needed to overcome self-limiting beliefs and develop growth mindsets.

A gamified app that helps people overcome self-limiting beliefs by promoting metacognition [BibTex]


Optimal To-Do List Gamification

Stojcheski, J., Felso, V., Lieder, F.

arXiv, August 2020 (techreport)

Abstract
What should I work on first? What can wait until later? Which projects should I prioritize and which tasks are not worth my time? These are challenging questions that many people face every day. People’s intuitive strategy is to prioritize their immediate experience over the long-term consequences. This leads to procrastination and the neglect of important long-term projects in favor of seemingly urgent tasks that are less important. Optimal gamification strives to help people overcome these problems by incentivizing each task by a number of points that communicates how valuable it is in the long-run. Unfortunately, computing the optimal number of points with standard dynamic programming methods quickly becomes intractable as the number of a person’s projects and the number of tasks required by each project increase. Here, we introduce and evaluate a scalable method for identifying which tasks are most important in the long run and incentivizing each task according to its long-term value. Our method makes it possible to create to-do list gamification apps that can handle the size and complexity of people’s to-do lists in the real world.
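
The core idea can be illustrated with a small sketch: award each candidate task its immediate reward plus the change it causes in the optimal value of the remaining to-do list (potential-based reward shaping with the optimal value function as the potential). The example below is a minimal illustration under that assumption; the task names and rewards are hypothetical, and it uses brute-force recursion over subsets of tasks rather than the scalable method introduced here.

```python
# Minimal sketch of value-based task incentives (illustration of the general idea,
# not the scalable algorithm introduced in the paper). Task names and rewards are
# hypothetical. States are sets of remaining tasks; points for doing a task now follow
# the shaping form reward + gamma * V(next state) - V(current state), so the best
# long-run task receives the highest (here: zero) score and worse tasks score negative.
from functools import lru_cache

GAMMA = 0.95
TASKS = {
    "write_report": 10.0,
    "answer_email": 1.0,
    "plan_project": 6.0,
}

@lru_cache(maxsize=None)
def value(remaining: frozenset) -> float:
    """Optimal value of a to-do state: best achievable discounted sum of task rewards."""
    if not remaining:
        return 0.0
    return max(TASKS[t] + GAMMA * value(remaining - {t}) for t in remaining)

def points(task: str, remaining: frozenset) -> float:
    """Incentive for completing `task` next, given the set of remaining tasks."""
    return TASKS[task] + GAMMA * value(remaining - {task}) - value(remaining)

state = frozenset(TASKS)
for task in sorted(TASKS, key=lambda t: -points(t, state)):
    print(f"{task}: {points(task, state):+.2f} points")
```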

link (url) Project Page [BibTex]


How to navigate everyday distractions: Leveraging optimal feedback to train attention control

Wirzberger, M., Lado, A., Eckerstorfer, L., Oreshnikov, I., Passy, J., Stock, A., Shenhav, A., Lieder, F.

Annual Meeting of the Cognitive Science Society, July 2020 (conference)

Abstract
To stay focused on their chosen tasks, people have to inhibit distractions. The underlying attention control skills can improve through reinforcement learning, which can be accelerated by giving feedback. We applied the theory of metacognitive reinforcement learning to develop a training app that gives people optimal feedback on their attention control while they are working or studying. In an eight-day field experiment with 99 participants, we investigated the effect of this training on people's productivity, sustained attention, and self-control. Compared to a control condition without feedback, we found that participants receiving optimal feedback learned to focus increasingly better (f = .08, p < .01) and achieved higher productivity scores (f = .19, p < .01) during the training. In addition, they evaluated their productivity more accurately (r = .12, p < .01). However, due to asymmetric attrition problems, these findings need to be taken with a grain of salt.

How to navigate everyday distractions: Leveraging optimal feedback to train attention control DOI Project Page [BibTex]


Measuring the Costs of Planning

Felso, V., Jain, Y. R., Lieder, F.

CogSci 2020, July 2020 (poster) Accepted

Abstract
Which information is worth considering depends on how much effort it would take to acquire and process it. From this perspective people’s tendency to neglect considering the long-term consequences of their actions (present bias) might reflect that looking further into the future becomes increasingly more effortful. In this work, we introduce and validate the use of Bayesian Inverse Reinforcement Learning (BIRL) for measuring individual differences in the subjective costs of planning. We extend the resource-rational model of human planning introduced by Callaway, Lieder, et al. (2018) by parameterizing the cost of planning. Using BIRL, we show that increased subjective cost for considering future outcomes may be associated with both the present bias and acting without planning. Our results highlight testing the causal effects of the cost of planning on both present bias and mental effort avoidance as a promising direction for future work.
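
As a rough sketch of the estimation idea (generic notation, not necessarily the exact likelihood used in this work), Bayesian Inverse Reinforcement Learning places a prior on the planning-cost parameter lambda and scores it by how well a lambda-dependent softmax policy over planning operations explains a participant's observed operations c_1, ..., c_T:

```latex
P(\lambda \mid c_{1:T}) \;\propto\; P(\lambda)\,\prod_{t=1}^{T}
\frac{\exp\!\big(Q_{\lambda}(b_t, c_t)/\tau\big)}{\sum_{c'} \exp\!\big(Q_{\lambda}(b_t, c')/\tau\big)}
```

Here b_t is the belief state of the resource-rational planning model before the t-th operation, Q_lambda is the value of a planning operation when each operation incurs cost lambda, and tau is a decision-noise temperature. Individual differences in the subjective cost of planning can then be read off the posterior over lambda.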

[BibTex]



Leveraging Machine Learning to Automatically Derive Robust Planning Strategies from Biased Models of the Environment

Kemtur, A., Jain, Y. R., Mehta, A., Callaway, F., Consul, S., Stojcheski, J., Lieder, F.

CogSci 2020, July 2020, Anirudha Kemtur and Yash Raj Jain contributed equally to this publication. (conference)

Abstract
Teaching clever heuristics is a promising approach to improve decision-making. We can leverage machine learning to discover clever strategies automatically. Current methods require an accurate model of the decision problems people face in real life. But most models are misspecified because of limited information and cognitive biases. To address this problem, we develop strategy discovery methods that are robust to model misspecification. Robustness is achieved by modeling model misspecification and handling uncertainty about the real world according to Bayesian inference. We translate our methods into an intelligent tutor that automatically discovers and teaches robust planning strategies. Our robust cognitive tutor significantly improved human decision-making when the model was so biased that conventional cognitive tutors were no longer effective. These findings highlight that our robust strategy discovery methods are a significant step towards leveraging artificial intelligence to improve human decision-making in the real world.

Project Page [BibTex]



Automatic Discovery of Interpretable Planning Strategies

Skirzyński, J., Becker, F., Lieder, F.

Machine Learning Journal, May 2020 (article) Submitted

Abstract
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that the provision of decision rules as flowcharts significantly improved people's planning strategies and decisions across three different classes of sequential decision problems. Furthermore, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making.

Automatic Discovery of Interpretable Planning Strategies The code for our algorithm and the experiments is available Project Page [BibTex]


Advancing Rational Analysis to the Algorithmic Level

Lieder, F., Griffiths, T. L.

Behavioral and Brain Sciences, 43, E27, March 2020 (article)

Abstract
The commentaries raised questions about normativity, human rationality, cognitive architectures, cognitive constraints, and the scope of resource-rational analysis (RRA). We respond to these questions and clarify that RRA is a methodological advance that extends the scope of rational modeling to understanding cognitive processes, why they differ between people, why they change over time, and how they could be improved.

Advancing rational analysis to the algorithmic level DOI [BibTex]



Learning to Overexert Cognitive Control in a Stroop Task

Bustamante, L., Lieder, F., Musslick, S., Shenhav, A., Cohen, J.

February 2020, Laura Bustamante and Falk Lieder contributed equally to this publication. (article) In revision

Abstract
How do people learn when to allocate how much cognitive control to which task? According to the Learned Value of Control (LVOC) model, people learn to predict the value of alternative control allocations from features of a given situation. This suggests that people may generalize the value of control learned in one situation to other situations with shared features, even when the demands for cognitive control are different. This makes the intriguing prediction that what a person learned in one setting could, under some circumstances, cause them to misestimate the need for, and potentially over-exert, control in another setting, even if this harms their performance. To test this prediction, we had participants perform a novel variant of the Stroop task in which, on each trial, they could choose to either name the color (more control-demanding) or read the word (more automatic). However, only one of these tasks was rewarded on each trial; which task was rewarded changed from trial to trial and could be predicted by one or more of the stimulus features (the color and/or the word). Participants first learned colors that predicted the rewarded task. Then they learned words that predicted the rewarded task. In the third part of the experiment, we tested how these learned feature associations transferred to novel stimuli with some overlapping features. The stimulus-task-reward associations were designed so that for certain combinations of stimuli the transfer of learned feature associations would incorrectly predict that the more highly rewarded task would be color naming, which would require the exertion of control, even though the actually rewarded task was word reading and therefore did not require the engagement of control. Our results demonstrated that participants over-exerted control for these stimuli, providing support for the feature-based learning mechanism described by the LVOC model.

Learning to Overexert Cognitive Control in a Stroop Task DOI [BibTex]



Toward a Formal Theory of Proactivity

Lieder, F., Iwama, G.

January 2020 (article) Submitted

Abstract
Beyond merely reacting to their environment and impulses, people have the remarkable capacity to proactively set and pursue their own goals. But the extent to which they leverage this capacity varies widely across people and situations. The goal of this article is to make the mechanisms and variability of proactivity more amenable to rigorous experiments and computational modeling. We proceed in three steps. First, we develop and validate a mathematically precise behavioral measure of proactivity and reactivity that can be applied across a wide range of experimental paradigms. Second, we propose a formal definition of proactivity and reactivity, and develop a computational model of proactivity in the AX Continuous Performance Task (AX-CPT). Third, we develop and test a computational-level theory of meta-control over proactivity in the AX-CPT that identifies three distinct meta-decision-making problems: intention setting, resolving response conflict between intentions and automaticity, and deciding whether to recall context and intentions into working memory. People's response frequencies in the AX-CPT were remarkably well captured by a mixture between the predictions of our models of proactive and reactive control. Empirical data from an experiment varying the incentives and contextual load of an AX-CPT confirmed the predictions of our meta-control model of individual differences in proactivity. Our results suggest that proactivity can be understood in terms of computational models of meta-control. Our model makes additional empirically testable predictions. Future work will extend our models from proactive control in the AX-CPT to proactive goal creation and goal pursuit in the real world.
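
The mixture result can be written compactly (notation assumed here for illustration): with a weight pi in [0, 1] expressing an individual's degree of proactivity, the predicted distribution over responses r on a given AX-CPT trial type is

```latex
P(r \mid \text{trial}) \;=\; \pi\, P_{\text{proactive}}(r \mid \text{trial}) \;+\; (1-\pi)\, P_{\text{reactive}}(r \mid \text{trial})
```

Fitting pi per participant gives one way to quantify where a person falls on the continuum between proactive and reactive control.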

Toward a formal theory of proactivity DOI Project Page [BibTex]


Resource-Rational Models of Human Goal Pursuit

Prystawski, B., Mohnert, F., Tošić, M., Lieder, F.

2020 (article)

Abstract
Goal-directed behaviour is a deeply important part of human psychology. People constantly set goals for themselves and pursue them in many domains of life. In this paper, we develop computational models that characterize how humans pursue goals in a complex dynamic environment and test how well they describe human behaviour in an experiment. Our models are motivated by the principle of resource rationality and draw upon psychological insights about people's limited attention and planning capacities. We found that human goal pursuit is qualitatively different and substantially less efficient than optimal goal pursuit. Models of goal pursuit based on the principle of resource rationality captured human behavior better than both a model of optimal goal pursuit and heuristics that are not resource-rational. We conclude that human goal pursuit is jointly shaped by its function, the structure of the environment, and cognitive costs and constraints on human planning and attention. Our findings are an important step toward understanding human goal pursuit, as cognitive limitations play a crucial role in shaping people's goal-directed behaviour.

Resource-rational models of human goal pursuit DOI [BibTex]


ACTrain: Ein KI-basiertes Aufmerksamkeitstraining für die Wissensarbeit [ACTrain: An AI-based attention training for knowledge work]

Wirzberger, M., Oreshnikov, I., Passy, J., Lado, A., Shenhav, A., Lieder, F.

66th Spring Conference of the German Ergonomics Society, 2020 (conference)

Abstract
Our digital age thrives on information and thereby puts our limited processing capacity to the test every day. In knowledge work in particular, constant distractions lead to substantial losses in performance. Our intelligent application ACTrain addresses exactly this problem and turns computer-based work into a training ground for the mind. Feedback based on machine learning methods vividly conveys the value of not letting oneself be distracted from a self-chosen task. This metacognitive insight is intended to motivate perseverance and to strengthen the underlying skill of attention control. In ongoing field experiments, we investigate whether training with this optimal feedback can improve attention-control and self-control skills compared to a control group without feedback.

link (url) Project Page [BibTex]

2019


Life Improvement Science: A Manifesto

Lieder, F.

December 2019 (article) In revision

Abstract
Rapid technological advances present unprecedented opportunities for helping people thrive. This manifesto presents a road map for establishing a solid scientific foundation upon which those opportunities can be realized. It highlights fundamental open questions about the cognitive underpinnings of effective living and how they can be improved, supported, and augmented. These questions are at the core of my proposal for a new transdisciplinary research area called life improvement science. Recent advances have made these questions amenable to scientific rigor, and emerging approaches are paving the way towards practical strategies, clever interventions, and (intelligent) apps for empowering people to reach unprecedented levels of personal effectiveness and wellbeing.

Life improvement science: a manifesto DOI [BibTex]


Doing More with Less: Meta-Reasoning and Meta-Learning in Humans and Machines

Griffiths, T. L., Callaway, F., Chang, M. B., Grant, E., Krueger, P. M., Lieder, F.

Current Opinion in Behavioral Sciences, 29, pages: 24-30, October 2019 (article)

Abstract
Artificial intelligence systems use an increasing amount of computation and data to solve very specific problems. By contrast, human minds solve a wide range of problems using a fixed amount of computation and limited experience. We identify two abilities that we see as crucial to this kind of general intelligence: meta-reasoning (deciding how to allocate computational resources) and meta-learning (modeling the learning environment to make better use of limited data). We summarize the relevant AI literature and relate the resulting ideas to recent work in psychology.

DOI [BibTex]



How do people learn how to plan?

Jain, Y. R., Gupta, S., Rakesh, V., Dayan, P., Callaway, F., Lieder, F.

Conference on Cognitive Computational Neuroscience, September 2019 (conference)

Abstract
How does the brain learn how to plan? We reverse-engineer people's underlying learning mechanisms by combining rational process models of cognitive plasticity with recently developed empirical methods that allow us to trace the temporal evolution of people's planning strategies. We find that our Learned Value of Computation model (LVOC) accurately captures people's average learning curve. However, there were also substantial individual differences in metacognitive learning that are best understood in terms of multiple different learning mechanisms, including strategy selection learning. Furthermore, we observed that LVOC could not fully capture people's ability to adaptively decide when to stop planning. We successfully extended the LVOC model to address these discrepancies. Our models broadly capture people's ability to improve their decision mechanisms and represent a significant step towards reverse-engineering how the brain learns increasingly effective cognitive strategies through its interaction with the environment.

How do people learn to plan? [BibTex]



Testing Computational Models of Goal Pursuit

Mohnert, F., Tosic, M., Lieder, F.

CCN2019, September 2019 (conference)

Abstract
Goals are essential to human cognition and behavior. But how do we pursue them? To address this question, we model how capacity limits on planning and attention shape the computational mechanisms of human goal pursuit. We test the predictions of a simple model based on previous theories in a behavioral experiment. The results show that to fully capture how people pursue their goals it is critical to account for people’s limited attention in addition to their limited planning. Our findings elucidate the cognitive constraints that shape human goal pursuit and point to an improved model of human goal pursuit that can reliably predict which goals a person will achieve and which goals they will struggle to pursue effectively.

link (url) DOI Project Page [BibTex]


Cognitive Prostheses for Goal Achievement

Lieder, F., Chen, O. X., Krueger, P. M., Griffiths, T. L.

Nature Human Behavior, 3, August 2019 (article)

Abstract
Procrastination and impulsivity take a significant toll on people's lives and the economy at large. Both can result from the misalignment of an action's proximal rewards with its long-term value. Therefore, aligning immediate reward with long-term value could be a way to help people overcome motivational barriers and make better decisions. Previous research has shown that game elements, such as points, levels, and badges, can be used to motivate people and nudge their decisions on serious matters. Here, we develop a new approach to decision support that leverages artificial intelligence and game elements to restructure challenging sequential decision problems in such a way that it becomes easier for people to take the right course of action. A series of four increasingly more realistic experiments suggests that this approach can enable people to make better decisions faster, procrastinate less, complete their work on time, and waste less time on unimportant tasks. These findings suggest that our method is a promising step towards developing cognitive prostheses that help people achieve their goals by enhancing their motivation and decision-making in everyday life.

DOI [BibTex]



Measuring How People Learn How to Plan

Jain, Y. R., Callaway, F., Lieder, F.

Proceedings 41st Annual Meeting of the Cognitive Science Society, pages: 1956-1962, CogSci2019, 41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

Abstract
The human mind has an unparalleled ability to acquire complex cognitive skills, discover new strategies, and refine its ways of thinking and decision-making; these phenomena are collectively known as cognitive plasticity. One important manifestation of cognitive plasticity is learning to make better–more far-sighted–decisions via planning. A serious obstacle to studying how people learn how to plan is that cognitive plasticity is even more difficult to observe than cognitive strategies are. To address this problem, we develop a computational microscope for measuring cognitive plasticity and validate it on simulated and empirical data. Our approach employs a process tracing paradigm recording signatures of human planning and how they change over time. We then invert a generative model of the recorded changes to infer the underlying cognitive plasticity. Our computational microscope measures cognitive plasticity significantly more accurately than simpler approaches, and it correctly detected the effect of an external manipulation known to promote cognitive plasticity. We illustrate how computational microscopes can be used to gain new insights into the time course of metacognitive learning and to test theories of cognitive development and hypotheses about the nature of cognitive plasticity. Future work will leverage our computational microscope to reverse-engineer the learning mechanisms enabling people to acquire complex cognitive skills such as planning and problem solving.
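
One way to picture the inversion step (a simplified, hidden-Markov-style rendering with assumed notation, not the exact model specification) is that the latent planning strategy s_t may change between trials while each trial's recorded clicks are emitted by the current strategy, so the inferred strategy trajectory is

```latex
\hat{s}_{1:T} \;=\; \arg\max_{s_{1:T}} \; P(s_1)\,\prod_{t=2}^{T} P(s_t \mid s_{t-1})\,\prod_{t=1}^{T} P(\text{clicks}_t \mid s_t)
```

Changes in the inferred sequence are then read as the time course of metacognitive learning.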

link (url) Project Page [BibTex]



Extending Rationality

Pothos, E. M., Busemeyer, J. R., Pleskac, T., Yearsley, J. M., Tenenbaum, J. B., Goodman, N. D., Tessler, M. H., Griffiths, T. L., Lieder, F., Hertwig, R., Pachur, T., Leuker, C., Shiffrin, R. M.

Proceedings of the 41st Annual Conference of the Cognitive Science Society, pages: 39-40, CogSci 2019, July 2019 (conference)

Proceedings of the 41st Annual Conference of the Cognitive Science Society [BibTex]



How should we incentivize learning? An optimal feedback mechanism for educational games and online courses

Xu, L., Wirzberger, M., Lieder, F.

41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

Abstract
Online courses offer much-needed opportunities for lifelong self-directed learning, but people rarely follow through on their noble intentions to complete them. To increase student retention, educational software often uses game elements to motivate students to engage in and persist in learning activities. However, gamification only works when it is done properly, and there is currently no principled method that educational software could use to achieve this. We develop a principled feedback mechanism for encouraging good study choices and persistence in self-directed learning environments. Rather than giving performance feedback, our method rewards the learner's efforts with optimal brain points that convey the value of practice. To derive these optimal brain points, we applied the theory of optimal gamification to a mathematical model of skill acquisition. In contrast to hand-designed incentive structures, optimal brain points are constructed in such a way that the incentive system cannot be gamed. Evaluating our method in a behavioral experiment, we find that optimal brain points significantly increased the proportion of participants who, instead of exploiting an inefficient skill they already knew, attempted to learn a difficult but more efficient skill, persisted through failure, and succeeded in mastering the new skill. Our method provides a principled approach to designing incentive structures and feedback mechanisms for educational games and online courses. We are optimistic that optimal brain points will prove useful for increasing student retention and helping people overcome the motivational obstacles that stand in the way of self-directed lifelong learning.

link (url) Project Page [BibTex]


What’s in the Adaptive Toolbox and How Do People Choose From It? Rational Models of Strategy Selection in Risky Choice

Mohnert, F., Pachur, T., Lieder, F.

41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

Abstract
Although process data indicates that people often rely on various (often heuristic) strategies to choose between risky options, our models of heuristics cannot predict people's choices very accurately. To address this challenge, it has been proposed that people adaptively choose from a toolbox of simple strategies. But which strategies are contained in this toolbox? And how do people decide when to use which decision strategy? Here, we develop a model according to which each person selects decision strategies rationally from their personal toolbox; our model allows one to infer which strategies are contained in the cognitive toolbox of an individual decision-maker and specifies when she will use which strategy. Using cross-validation on an empirical data set, we find that this rational model of strategy selection from a personal adaptive toolbox predicts people's choices better than any single strategy (even when it is allowed to vary across participants) and better than previously proposed toolbox models. Our model comparisons show that both inferring the toolbox and rational strategy selection are critical for accurately predicting people's risky choices. Furthermore, our model-based data analysis reveals considerable individual differences in the set of strategies people are equipped with and how they choose among them; these individual differences could partly explain why some people make better choices than others. These findings represent an important step towards a complete formalization of the notion that people select their cognitive strategies from a personal adaptive toolbox.

link (url) [BibTex]


Measuring How People Learn How to Plan

Jain, Y. R., Callaway, F., Lieder, F.

pages: 357-361, RLDM 2019, July 2019 (conference)

Abstract
The human mind has an unparalleled ability to acquire complex cognitive skills, discover new strategies, and refine its ways of thinking and decision-making; these phenomena are collectively known as cognitive plasticity. One important manifestation of cognitive plasticity is learning to make better – more far-sighted – decisions via planning. A serious obstacle to studying how people learn how to plan is that cognitive plasticity is even more difficult to observe than cognitive strategies are. To address this problem, we develop a computational microscope for measuring cognitive plasticity and validate it on simulated and empirical data. Our approach employs a process tracing paradigm recording signatures of human planning and how they change over time. We then invert a generative model of the recorded changes to infer the underlying cognitive plasticity. Our computational microscope measures cognitive plasticity significantly more accurately than simpler approaches, and it correctly detected the effect of an external manipulation known to promote cognitive plasticity. We illustrate how computational microscopes can be used to gain new insights into the time course of metacognitive learning and to test theories of cognitive development and hypotheses about the nature of cognitive plasticity. Future work will leverage our computational microscope to reverse-engineer the learning mechanisms enabling people to acquire complex cognitive skills such as planning and problem solving.

link (url) [BibTex]



A Cognitive Tutor for Helping People Overcome Present Bias

Lieder, F., Callaway, F., Jain, Y. R., Krueger, P. M., Das, P., Gul, S., Griffiths, T. L.

RLDM 2019, July 2019, Falk Lieder and Frederick Callaway contributed equally to this publication. (conference)

Abstract
People's reliance on suboptimal heuristics gives rise to a plethora of cognitive biases in decision-making including the present bias, which denotes people's tendency to be overly swayed by an action's immediate costs/benefits rather than its more important long-term consequences. One approach to helping people overcome such biases is to teach them better decision strategies. But which strategies should we teach them? And how can we teach them effectively? Here, we leverage an automatic method for discovering rational heuristics and insights into how people acquire cognitive skills to develop an intelligent tutor that teaches people how to make better decisions. As a proof of concept, we derive the optimal planning strategy for a simple model of situations where people fall prey to the present bias. Our cognitive tutor teaches people this optimal planning strategy by giving them metacognitive feedback on how they plan in a 3-step sequential decision-making task. Our tutor's feedback is designed to maximally accelerate people's metacognitive reinforcement learning towards the optimal planning strategy. A series of four experiments confirmed that training with the cognitive tutor significantly reduced present bias and improved people's decision-making competency: Experiment 1 demonstrated that the cognitive tutor's feedback can help participants discover far-sighted planning strategies. Experiment 2 found that this training effect transfers to more complex environments. Experiment 3 found that these transfer effects are retained for at least 24 hours after the training. Finally, Experiment 4 found that practicing with the cognitive tutor can have additional benefits over being told the strategy in words. The results suggest that promoting metacognitive reinforcement learning with optimal feedback is a promising approach to improving the human mind.

DOI [BibTex]



Introducing the Decision Advisor: A simple online tool that helps people overcome cognitive biases and experience less regret in real-life decisions

Iwama, G., Greenberg, S., Moore, D., Lieder, F.

40th Annual Meeting of the Society for Judgement and Decision Making, June 2019 (conference)

Abstract
Cognitive biases shape many decisions people come to regret. To help people overcome these biases, ClearerThinking.org developed a free online tool, called the Decision Advisor (https://programs.clearerthinking.org/decisionmaker.html). The Decision Advisor assists people in big real-life decisions by prompting them to generate more alternatives, guiding them to evaluate their alternatives according to principles of decision analysis, and educating them about pertinent biases while they are making their decision. In a within-subjects experiment, 99 participants reported significantly fewer biases and less regret for a decision supported by the Decision Advisor than for a previous unassisted decision.

DOI [BibTex]



The Goal Characteristics (GC) questionnaire: A comprehensive measure for goals’ content, attainability, interestingness, and usefulness

Iwama, G., Wirzberger, M., Lieder, F.

40th Annual Meeting of the Society for Judgement and Decision Making, June 2019 (conference)

Abstract
Many studies have investigated how goal characteristics affect goal achievement. However, most of them considered only a small number of characteristics, and the psychometric properties of their measures remain unclear. To overcome these limitations, we developed and validated a comprehensive questionnaire of goal characteristics with four subscales, measuring the goal’s content, attainability, interestingness, and usefulness, respectively. In total, 590 participants completed the questionnaire online. A confirmatory factor analysis supported the four subscales and their structure. The GC questionnaire (https://osf.io/qfhup) can be easily applied to investigate goal setting, pursuit and adjustment in a wide range of contexts.

DOI [BibTex]


Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources

Lieder, F., Griffiths, T. L.

Behavioral and Brain Sciences, 43, E1, February 2019 (article)

Abstract
Modeling human cognition is challenging because there are infinitely many mechanisms that can generate any given observation. Some researchers address this by constraining the hypothesis space through assumptions about what the human mind can and cannot do, while others constrain it through principles of rationality and adaptation. Recent work in economics, psychology, neuroscience, and linguistics has begun to integrate both approaches by augmenting rational models with cognitive constraints, incorporating rational principles into cognitive architectures, and applying optimality principles to understanding neural representations. We identify the rational use of limited resources as a unifying principle underlying these diverse approaches, expressing it in a new cognitive modeling paradigm called resource-rational analysis. The integration of rational principles with realistic cognitive constraints makes resource-rational analysis a promising framework for reverse-engineering cognitive mechanisms and representations. It has already shed new light on the debate about human rationality and can be leveraged to revisit classic questions of cognitive psychology within a principled computational framework. We demonstrate that resource-rational models can reconcile the mind's most impressive cognitive skills with people's ostensive irrationality. Resource-rational analysis also provides a new way to connect psychological theory more deeply with artificial intelligence, economics, neuroscience, and linguistics.
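
The unifying principle can be stated schematically: a resource-rational mind uses the heuristic h, from the set H of heuristics its cognitive architecture can implement, that maximizes the expected utility of the resulting decisions net of the expected cost of the computation it requires,

```latex
h^{\star} \;=\; \arg\max_{h \in \mathcal{H}} \;\; \mathbb{E}\big[U(\text{outcome of } h)\big] \;-\; \mathbb{E}\big[\mathrm{cost}(h)\big]
```

Resource-rational analysis then derives h* for a given environment and set of resource limits and compares it against human behavior; this is a schematic rendering, and the target article gives the precise formulation.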

DOI [BibTex]


Remediating Cognitive Decline with Cognitive Tutors

Das, P., Callaway, F., Griffiths, T. L., Lieder, F.

RLDM 2019, 2019 (conference)

Abstract
As people age, their cognitive abilities tend to deteriorate, including their ability to make complex plans. To remediate this cognitive decline, many commercial brain training programs target basic cognitive capacities, such as working memory. We have recently developed an alternative approach: intelligent tutors that teach people cognitive strategies for making the best possible use of their limited cognitive resources. Here, we apply this approach to improve older adults' planning skills. In a process-tracing experiment we found that the decline in planning performance may be partly because older adults use less effective planning strategies. We also found that, with practice, both older and younger adults learned more effective planning strategies from experience. But despite these gains there was still room for improvement, especially for older people. In a second experiment, we let older and younger adults train their planning skills with an intelligent cognitive tutor that teaches optimal planning strategies via metacognitive feedback. We found that practicing planning with this intelligent tutor allowed older adults to catch up to their younger counterparts. These findings suggest that intelligent tutors that teach clever cognitive strategies can help aging decision-makers stay sharp.

DOI [BibTex]



A Rational Reinterpretation of Dual Process Theories

Milli, S., Lieder, F., Griffiths, T. L.

2019 (article)

Abstract
Highly influential "dual-process" accounts of human cognition postulate the coexistence of a slow accurate system with a fast error-prone system. But why would there be just two systems rather than, say, one or 93? Here, we argue that a dual-process architecture might be neither arbitrary nor irrational, but might instead reflect a rational tradeoff between the cognitive flexibility afforded by multiple systems and the time and effort required to choose between them. We investigate what the optimal set and number of cognitive systems would be depending on the structure of the environment. We find that the optimal number of systems depends on the variability of the environment and the difficulty of deciding when which system should be used. Furthermore, when having two systems is optimal, then the first system is fast but error-prone and the second system is slow but accurate. Our findings thereby provide a rational reinterpretation of dual-process theories.

DOI [BibTex]


2018


Discovering and Teaching Optimal Planning Strategies

Lieder, F., Callaway, F., Krueger, P. M., Das, P., Griffiths, T. L., Gul, S.

In The 14th biannual conference of the German Society for Cognitive Science, GK, September 2018, Falk Lieder and Frederick Callaway contributed equally to this publication. (inproceedings)

Abstract
How should we think and decide, and how can we learn to make better decisions? To address these questions we formalize the discovery of cognitive strategies as a metacognitive reinforcement learning problem. This formulation leads to a computational method for deriving optimal cognitive strategies and a feedback mechanism for accelerating the process by which people learn how to make better decisions. As a proof of concept, we apply our approach to develop an intelligent system that teaches people optimal planning strategies. Our training program combines a novel process-tracing paradigm that makes people's latent planning strategies observable with an intelligent system that gives people feedback on how their planning strategy could be improved. The pedagogy of our intelligent tutor is based on the theory that people discover their cognitive strategies through metacognitive reinforcement learning. Concretely, the tutor's feedback is designed to maximally accelerate people's metacognitive reinforcement learning towards the optimal cognitive strategy. A series of four experiments confirmed that training with the cognitive tutor significantly improved people's decision-making competency: Experiment 1 demonstrated that the cognitive tutor's feedback accelerates participants' metacognitive learning. Experiment 2 found that this training effect transfers to more difficult planning problems in more complex environments. Experiment 3 found that these transfer effects are retained for at least 24 hours after the training. Finally, Experiment 4 found that practicing with the cognitive tutor conveys additional benefits above and beyond verbal description of the optimal planning strategy. The results suggest that promoting metacognitive reinforcement learning with optimal feedback is a promising approach to improving the human mind.

link (url) Project Page [BibTex]



Discovering Rational Heuristics for Risky Choice

Gul, S., Krueger, P. M., Callaway, F., Griffiths, T. L., Lieder, F.

The 14th biannual conference of the German Society for Cognitive Science, GK, September 2018 (conference)

Abstract
How should we think and decide to make the best possible use of our precious time and limited cognitive resources? And how do people’s cognitive strategies compare to this ideal? We study these questions in the domain of multi-alternative risky choice using the methodology of resource-rational analysis. To answer the first question, we leverage a new meta-level reinforcement learning algorithm to derive optimal heuristics for four different risky choice environments. We find that our method rediscovers two fast-and-frugal heuristics that people are known to use, namely Take-The-Best and choosing randomly, as resource-rational strategies for specific environments. Our method also discovered a novel heuristic that combines elements of Take-The-Best and Satisficing. To answer the second question, we use the Mouselab paradigm to measure how people’s decision strategies compare to the predictions of our resource-rational analysis. We found that our resource-rational analysis correctly predicted which strategies people use and under which conditions they use them. While people generally tend to make rational use of their limited resources overall, their strategy choices do not always fully exploit the structure of each decision problem. Overall, people’s decision operations were about 88% as resource-rational as they could possibly be. A formal model comparison confirmed that our resource-rational model explained people’s decision strategies significantly better than the Directed Cognition model of Gabaix et al. (2006). Our study is a proof-of-concept that optimal cognitive strategies can be automatically derived from the principle of resource-rationality. Our results suggest that resource-rational analysis is a promising approach for uncovering people’s cognitive strategies and revisiting the debate about human rationality with a more realistic normative standard.

link (url) Project Page [BibTex]



Learning to Select Computations

Callaway, F., Gul, S., Krueger, P. M., Griffiths, T. L., Lieder, F.

In Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference, August 2018, Frederick Callaway and Sayan Gul and Falk Lieder contributed equally to this publication. (inproceedings)

Abstract
The efficient use of limited computational resources is an essential ingredient of intelligence. Selecting computations optimally according to rational metareasoning would achieve this, but this is computationally intractable. Inspired by psychology and neuroscience, we propose the first concrete and domain-general learning algorithm for approximating the optimal selection of computations: Bayesian metalevel policy search (BMPS). We derive this general, sample-efficient search algorithm for a computation-selecting metalevel policy based on the insight that the value of information lies between the myopic value of information and the value of perfect information. We evaluate BMPS on three increasingly difficult metareasoning problems: when to terminate computation, how to allocate computation between competing options, and planning. Across all three domains, BMPS achieved near-optimal performance and compared favorably to previously proposed metareasoning heuristics. Finally, we demonstrate the practical utility of BMPS in an emergency management scenario, even accounting for the overhead of metareasoning.
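
The central insight can be sketched as follows (a simplified rendering; the full feature set and optimization procedure are given in the paper). For a candidate computation c in belief state b, the value of the information it can yield is bracketed by the myopic value of information and the value of perfect information, which motivates approximating the value of computation as a learned combination of these bounds minus the computation's cost:

```latex
\mathrm{VOI}_{\text{myopic}}(b, c) \;\le\; \mathrm{VOI}(b, c) \;\le\; \mathrm{VPI}(b),
\qquad
\widehat{\mathrm{VOC}}(b, c) \;\approx\; w_{1}\,\mathrm{VOI}_{\text{myopic}}(b, c) \;+\; w_{2}\,\mathrm{VPI}(b) \;-\; \mathrm{cost}(c)
```

The metalevel policy then selects the computation with the highest estimated value, or terminates deliberation when no computation appears worthwhile, and the weights w are tuned so that this policy performs well on the metareasoning problem at hand.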

link (url) Project Page [BibTex]



A resource-rational analysis of human planning

Callaway, F., Lieder, F., Das, P., Gul, S., Krueger, P. M., Griffiths, T. L.

In Proceedings of the 40th Annual Conference of the Cognitive Science Society, May 2018, Frederick Callaway and Falk Lieder contributed equally to this publication. (inproceedings)

Abstract
People's cognitive strategies are jointly shaped by function and computational constraints. Resource-rational analysis leverages these constraints to derive rational models of people's cognitive strategies from the assumption that people make rational use of limited cognitive resources. We present a resource-rational analysis of planning and evaluate its predictions in a newly developed process tracing paradigm. In Experiment 1, we find that a resource-rational planning strategy predicts the process by which people plan more accurately than previous models of planning. Furthermore, in Experiment 2, we find that it also captures how people's planning strategies adapt to the structure of the environment. In addition, our approach allows us to quantify for the first time how close people's planning strategies are to being resource-rational and to characterize in which ways they conform to and deviate from optimal planning.

DOI [BibTex]



Rational metareasoning and the plasticity of cognitive control

Lieder, F., Shenhav, A., Musslick, S., Griffiths, T. L.

PLOS Computational Biology, 14(4):e1006043, Public Library of Science, April 2018 (article)

Abstract
The human brain has the impressive capacity to adapt how it processes information to high-level goals. While it is known that these cognitive control skills are malleable and can be improved through training, the underlying plasticity mechanisms are not well understood. Here, we develop and evaluate a model of how people learn when to exert cognitive control, which controlled process to use, and how much effort to exert. We derive this model from a general theory according to which the function of cognitive control is to select and configure neural pathways so as to make optimal use of finite time and limited computational resources. The central idea of our Learned Value of Control model is that people use reinforcement learning to predict the value of candidate control signals of different types and intensities based on stimulus features. This model correctly predicts the learning and transfer effects underlying the adaptive control-demanding behavior observed in an experiment on visual attention and four experiments on interference control in Stroop and Flanker paradigms. Moreover, our model explained these findings significantly better than an associative learning model and a Win-Stay Lose-Shift model. Our findings elucidate how learning and experience might shape people’s ability and propensity to adaptively control their minds and behavior. We conclude by predicting under which circumstances these learning mechanisms might lead to self-control failure.
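
In schematic form (notation assumed here for illustration), the Learned Value of Control model predicts the value of exerting a control signal c of a given type and intensity in a situation with stimulus features f as a learned function of those features, and selects the control signal that maximizes the predicted value net of effort cost:

```latex
\widehat{\mathrm{VOC}}(c, f) \;\approx\; \mathbf{w}^{\top}\,\phi(c, f),
\qquad
c^{\star} \;=\; \arg\max_{c}\; \widehat{\mathrm{VOC}}(c, f) \;-\; \mathrm{cost}(c)
```

The weights w are updated by reinforcement learning from the outcomes experienced after exerting control. Because the prediction is tied to shared features, value learned in one situation generalizes to other situations with overlapping features, which is what produces the learning and transfer effects described above.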

Rational metareasoning and the plasticity of cognitive control DOI Project Page [BibTex]



Over-Representation of Extreme Events in Decision Making Reflects Rational Use of Cognitive Resources

Lieder, F., Griffiths, T. L., Hsu, M.

Psychological Review, 125(1):1-32, January 2018 (article)

Abstract
People’s decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision-makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time decision-makers should over-represent the most important potential consequences relative to less important, but potentially more probable, outcomes. To evaluate this account we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people’s availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall.
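
In simplified form (the article gives the full derivation), utility-weighted sampling estimates an action's expected utility by self-normalized importance sampling, drawing simulated outcomes from a distribution that over-weights extreme outcomes:

```latex
q(o) \;\propto\; p(o)\,\big|u(o) - \bar{u}\big|,
\qquad
\widehat{\mathbb{E}}[u] \;=\; \frac{\sum_{i=1}^{n} \frac{p(o_i)}{q(o_i)}\, u(o_i)}{\sum_{i=1}^{n} \frac{p(o_i)}{q(o_i)}},
\qquad o_{1}, \dots, o_{n} \sim q
```

Here p is the outcome distribution, u the utility function, and \bar{u} a reference point. With few samples, this estimator systematically over-represents improbable but extreme events, which is the mechanism behind the availability biases described above.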

DOI [BibTex]



Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence

(Glushko Prize 2020)

Lieder, F.

University of California, Berkeley, 2018 (phdthesis)

Abstract
Bad decisions can have devastating consequences: There is a vast body of literature claiming that human judgment and decision-making are riddled with numerous systematic violations of the rules of logic, probability theory, and expected utility theory. The discovery of these cognitive biases in the 1970s (Tversky & Kahneman, 1974) made people question the concept of Homo sapiens as the rational animal, profoundly shaking the foundations of economics and rational models in the cognitive, neural, and social sciences. Four decades later, these disciplines still lack a rigorous theoretical foundation for explaining and remedying people’s cognitive biases. To solve this problem, my dissertation offers a mathematically precise theory of bounded rationality and demonstrates how it can be leveraged to elucidate the cognitive mechanisms of judgment and decision-making (Part 1) and to help people make better decisions (Part 2).

Précis of Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence DOI [BibTex]


The Computational Challenges of Pursuing Multiple Goals: Network Structure of Goal Systems Predicts Human Performance

Reichman, D., Lieder, F., Bourgin, D. D., Talmon, N., Griffiths, T. L.

PsyArXiv, 2018 (article)

Abstract
Extant psychological theories attribute people’s failure to achieve their goals primarily to failures of self-control, insufficient motivation, or lacking skills. We develop a complementary theory specifying conditions under which the computational complexity of making the right decisions becomes prohibitive of goal achievement regardless of skill or motivation. We support our theory by predicting human performance from factors determining the computational complexity of selecting the optimal set of means for goal achievement. Following previous theories of goal pursuit, we express the relationship between goals and means as a bipartite graph where edges between means and goals indicate which means can be used to achieve which goals. This allows us to map two computational challenges that arise in goal achievement onto two classic combinatorial optimization problems: Set Cover and Maximum Coverage. While these problems are believed to be computationally intractable on general networks, their solution can be nevertheless efficiently approximated when the structure of the network resembles a tree. Thus, our initial prediction was that people should perform better with goal systems that are more tree-like. In addition, our theory predicted that people’s performance at selecting means should be a U-shaped function of the average number of goals each means is relevant to and the average number of means through which each goal could be accomplished. Here we report on six behavioral experiments which confirmed these predictions. Our results suggest that combinatorial parameters that are instrumental to algorithm design can also be useful for understanding when and why people struggle to pursue their goals effectively.
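
To make the mapping to combinatorial optimization concrete, the sketch below applies the classic greedy approximation for Maximum Coverage to a hypothetical bipartite goal system (means on one side, goals on the other). The greedy rule achieves the standard (1 - 1/e) approximation guarantee; the example names and the problem instance are made up for illustration and are not the experimental materials of the paper.

```python
# Greedy approximation to Maximum Coverage on a bipartite goal system.
# Each means is mapped to the set of goals it can help achieve (the edges of the graph).
def greedy_max_coverage(means_to_goals: dict[str, set[str]], k: int) -> set[str]:
    """Pick up to k means, each time adding the one covering the most uncovered goals."""
    chosen: set[str] = set()
    covered: set[str] = set()
    for _ in range(k):
        best = max(means_to_goals, key=lambda m: len(means_to_goals[m] - covered))
        if not means_to_goals[best] - covered:
            break  # no remaining means covers any new goal
        chosen.add(best)
        covered |= means_to_goals[best]
    return chosen

# Hypothetical goal system: which two means jointly serve the most goals?
goal_system = {
    "gym_membership": {"fitness", "stress_relief"},
    "meal_prepping":  {"fitness", "save_money"},
    "evening_course": {"career", "new_skills"},
    "side_project":   {"career", "new_skills", "save_money"},
}
print(greedy_max_coverage(goal_system, k=2))  # e.g., {'side_project', 'gym_membership'}
```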

DOI [BibTex]

2017


Optimal gamification can help people procrastinate less

Lieder, F., Griffiths, T. L.

Annual Meeting of the Society for Judgment and Decision Making, November 2017 (conference)

Project Page [BibTex]



Strategy selection as rational metareasoning

Lieder, F., Griffiths, T. L.

Psychological Review, 124, pages: 762-794, American Psychological Association, November 2017 (article)

Abstract
Many contemporary accounts of human reasoning assume that the mind is equipped with multiple heuristics that could be deployed to perform a given task. This raises the question of how the mind determines when to use which heuristic. To answer this question, we developed a rational model of strategy selection, based on the theory of rational metareasoning developed in the artificial intelligence literature. According to our model people learn to efficiently choose the strategy with the best cost–benefit tradeoff by learning a predictive model of each strategy’s performance. We found that our model can provide a unifying explanation for classic findings from domains ranging from decision-making to arithmetic by capturing the variability of people’s strategy choices, their dependence on task and context, and their development over time. Systematic model comparisons supported our theory, and 4 new experiments confirmed its distinctive predictions. Our findings suggest that people gradually learn to make increasingly more rational use of fallible heuristics. This perspective reconciles the 2 poles of the debate about human rationality by integrating heuristics and biases with learning and rationality. (APA PsycInfo Database Record (c) 2017 APA, all rights reserved)

DOI Project Page [BibTex]



Empirical Evidence for Resource-Rational Anchoring and Adjustment

Lieder, F., Griffiths, T. L., Huys, Q. J. M., Goodman, N. D.

Psychonomic Bulletin & Review, 25, pages: 775-784, Springer, May 2017 (article)

Abstract
People’s estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people’s rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people’s knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.
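
The tested prediction follows from a simple cost-benefit analysis, written here schematically with assumed notation: a resource-rational anchoring-and-adjustment process chooses the number of adjustment steps n that minimizes the combined cost of time and error,

```latex
n^{\star} \;=\; \arg\min_{n}\;\; c_{\text{time}} \cdot n \;+\; c_{\text{error}} \cdot \mathbb{E}\big[\,\lvert \hat{x}_{n} - x \rvert\,\big]
```

where \hat{x}_n is the estimate after n adjustments away from the anchor. Because the expected error decreases with n, raising the time cost lowers n* (less adjustment) while raising the error cost raises it (more adjustment), which is the pattern both experiments confirmed.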

link (url) DOI [BibTex]



An automatic method for discovering rational heuristics for risky choice

Lieder, F., Krueger, P. M., Griffiths, T. L.

In Proceedings of the 39th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society, 2017, Falk Lieder and Paul M. Krueger contributed equally to this publication. (inproceedings)

Abstract
What is the optimal way to make a decision given that your time is limited and your cognitive resources are bounded? To answer this question, we formalized the bounded optimal decision process as the solution to a meta-level Markov decision process whose actions are costly computations. We approximated the optimal solution and evaluated its predictions against human choice behavior in the Mouselab paradigm, which is widely used to study decision strategies. Our computational method rediscovered well-known heuristic strategies and the conditions under which they are used, as well as novel heuristics. A Mouselab experiment confirmed our model’s main predictions. These findings are a proof-of-concept that optimal cognitive strategies can be automatically derived as the rational use of finite time and bounded cognitive resources.
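
The formalization can be sketched in the standard notation of the metareasoning literature (simplified here): the metalevel MDP has belief states about the gambles' payoffs, costly information-gathering computations, and a termination action,

```latex
M_{\text{meta}} \;=\; \big(\mathcal{B},\; \mathcal{C} \cup \{\perp\},\; T,\; r_{\text{meta}}\big),
\qquad
r_{\text{meta}}(b, c) \;=\;
\begin{cases}
-\mathrm{cost}(c), & c \in \mathcal{C},\\[2pt]
\mathbb{E}_{b}\big[U(a^{\star}_{b})\big], & c = \perp,
\end{cases}
```

where B is the set of belief states, C the set of computations (e.g., inspecting one cell of the Mouselab display), ⊥ the decision to stop deliberating and act, and a*_b the option that looks best under belief b. The bounded-optimal decision process is the optimal policy of this metalevel MDP, and the heuristics reported above emerge from an approximation to that policy.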

Project Page [BibTex]



A reward shaping method for promoting metacognitive learning

Lieder, F., Krueger, P. M., Callaway, F., Griffiths, T. L.

In Proceedings of the Third Multidisciplinary Conference on Reinforcement Learning and Decision-Making, 2017 (inproceedings)

Project Page [BibTex]



A computerized training program for teaching people how to plan better

Lieder, F., Krueger, P. M., Callaway, F., Griffiths, T. L.

PsyArXiv, 2017 (article)

Project Page [BibTex]



When does bounded-optimal metareasoning favor few cognitive systems?

Milli, S., Lieder, F., Griffiths, T. L.

In AAAI Conference on Artificial Intelligence, 31, 2017 (inproceedings)

[BibTex]



The Structure of Goal Systems Predicts Human Performance

Bourgin, D., Lieder, F., Reichman, D., Talmon, N., Griffiths, T.

In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, 2017 (inproceedings)

[BibTex]



Learning to (mis)allocate control: maltransfer can lead to self-control failure

Bustamante, L., Lieder, F., Musslick, S., Shenhav, A., Cohen, J.

In The 3rd Multidisciplinary Conference on Reinforcement Learning and Decision Making. Ann Arbor, Michigan, 2017 (inproceedings)

[BibTex]



Toward a rational and mechanistic account of mental effort

Shenhav, A., Musslick, S., Lieder, F., Kool, W., Griffiths, T., Cohen, J., Botvinick, M.

Annual Review of Neuroscience, 40, pages: 99-124, Annual Reviews, 2017 (article)

Project Page [BibTex]



Mouselab-MDP: A new paradigm for tracing how people plan

Callaway, F., Lieder, F., Krueger, P. M., Griffiths, T. L.

In The 3rd multidisciplinary conference on reinforcement learning and decision making, 2017 (inproceedings)

[BibTex]



Enhancing metacognitive reinforcement learning using reward structures and feedback

Krueger, P. M., Lieder, F., Griffiths, T. L.

In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, 2017 (inproceedings)

Project Page [BibTex]



The anchoring bias reflects rational use of cognitive resources

Lieder, F., Griffiths, T. L., Huys, Q. J. M., Goodman, N. D.

Psychonomic Bulletin & Review, 25, pages: 762-794, Springer, 2017 (article)

[BibTex]



Helping people choose subgoals with sparse pseudo rewards

Callaway, F., Lieder, F., Griffiths, T. L.

In Proceedings of the Third Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2017 (inproceedings)

[BibTex]


2016


Helping people make better decisions using optimal gamification

Lieder, F., Griffiths, T. L.

In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016 (inproceedings)

Abstract
Game elements like points and levels are a popular tool to nudge and engage students and customers. Yet, no theory can tell us which incentive structures work and how to design them. Here we connect the practice of gamification to the theory of reward shaping in reinforcement learning. We leverage this connection to develop a method for designing effective incentive structures and delineating when gamification will succeed from when it will fail. We evaluate our method in two behavioral experiments. The results of the first experiment demonstrate that incentive structures designed by our method help people make better, less short-sighted decisions and avoid the pitfalls of less principled approaches. The results of the second experiment illustrate that such incentive structures can be effectively implemented using game elements like points and badges. These results suggest that our method provides a principled way to leverage gamification to help people make better decisions.
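
The connection to reward shaping can be made explicit. Adding a potential-based pseudo-reward leaves the optimal policy unchanged (Ng, Harada, & Russell, 1999), and choosing the potential to be the optimal value function makes the far-sighted action also the immediately most rewarding one; schematically,

```latex
f(s, a, s') \;=\; \gamma\,\Phi(s') \;-\; \Phi(s),
\qquad
\Phi = V^{\star}
\;\;\Rightarrow\;\;
\tilde{r}(s, a, s') \;=\; r(s, a, s') + \gamma\,V^{\star}(s') - V^{\star}(s)
```

Under this choice the shaped reward \tilde{r} is maximized in expectation by the optimal action in every state, so even a myopic decision-maker who simply follows the points behaves far-sightedly.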

link (url) Project Page [BibTex]
