
Burn-in, bias, and the rationality of anchoring




Bayesian inference provides a unifying framework for addressing problems in machine learning, artificial intelligence, and robotics, as well as the problems facing the human mind. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore, minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind's inference algorithm(s). We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e., during the period known as burn-in. Therefore, the strategy that is optimal subject to the mind's bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model's quantitative predictions are tested against published data on anchoring in numerical estimation tasks. Our theoretical and empirical results suggest that the anchoring bias is consistent with approximate Bayesian inference.
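The burn-in effect the abstract describes can be illustrated with a minimal sketch (not code from the paper): run Metropolis-Hastings from an initial "anchor" far from the posterior mode and compare a short run against a long one. The toy standard-normal posterior, the anchor value of 10, the proposal width, and the iteration counts below are all illustrative assumptions.

```python
import math
import random

def log_target(x):
    # Unnormalized log-density of a toy standard-normal "posterior"
    # (assumed here for illustration; the paper's tasks differ).
    return -0.5 * x * x

def mh_estimate(anchor, n_iter, step=1.0, seed=0):
    """Run n_iter Metropolis-Hastings steps starting from `anchor`
    and return the mean of the visited states as the estimate."""
    rng = random.Random(seed)
    x = anchor
    total = 0.0
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)),
        # evaluated in log space for numerical stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        total += x
    return total / n_iter

# A short run stops during burn-in, so its estimate stays biased
# toward the anchor; a long run approaches the true posterior mean of 0.
few = mh_estimate(anchor=10.0, n_iter=20)
many = mh_estimate(anchor=10.0, n_iter=20000)
```

With so few iterations, `few` remains well above zero (anchored), while `many` is close to the posterior mean, mirroring the adjustment-from-an-anchor pattern the model formalizes.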

Author(s): Falk Lieder and Thomas L. Griffiths and Noah D. Goodman
Journal: Advances in Neural Information Processing Systems 25
Pages: 2699--2707
Year: 2012

Department(s): Rationality Enhancement
Bibtex Type: Article (article)

State: Published
URL: https://papers.nips.cc/paper/4719-burn-in-bias-and-the-rationality-of-anchoring


@article{lieder2012burnin,
  title = {Burn-in, bias, and the rationality of anchoring},
  author = {Lieder, Falk and Griffiths, Thomas L. and Goodman, Noah D.},
  journal = {Advances in Neural Information Processing Systems 25},
  pages = {2699--2707},
  year = {2012},
  url = {https://papers.nips.cc/paper/4719-burn-in-bias-and-the-rationality-of-anchoring}
}