Informational Herding with Model Misspecification

This paper demonstrates that a misspecified model of information processing interferes with long-run learning and offers an explanation for why individuals may continue to choose an inefficient action despite sufficient public information to learn the true state. I consider a social learning environment in which agents draw inference from private signals, public signals, and the actions of their predecessors, and in which enough public information exists to achieve asymptotically efficient learning. Because prior actions aggregate multiple sources of information, agents face the inferential challenge of distinguishing new information from redundant information. I show that when individuals significantly overestimate the amount of new information contained in prior actions, beliefs about the unknown state become entrenched and incorrect learning may occur. When individuals sufficiently overestimate the amount of redundant information, beliefs are fragile and learning is incomplete. When agents have an approximately correct model of inference, learning is complete: the model with no information-processing bias is robust to perturbation.
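The sketch below is a minimal, illustrative Python simulation of the kind of sequential social-learning environment the abstract describes, not the paper's actual model. The signal accuracy q, the misspecification weight kappa (how much new information agents perceive in each predecessor's action), and the simulate function are assumptions introduced here only to illustrate how over- and under-weighting the informativeness of prior actions changes how quickly beliefs lock in.

```python
import numpy as np

def simulate(kappa, q=0.7, n_agents=500, theta=1, seed=0):
    """Illustrative sequential social-learning simulation (hypothetical, not the paper's model).

    Each agent observes a private binary signal of accuracy q and the actions of all
    predecessors. Every agent updates a shared log-likelihood ratio by treating each
    observed action as an independent signal whose strength is scaled by kappa:
      kappa > 1 -> actions are perceived to carry more new information than they do,
      kappa < 1 -> actions are perceived to be mostly redundant.
    """
    rng = np.random.default_rng(seed)
    llr_signal = np.log(q / (1 - q))   # log-likelihood ratio contributed by one private signal
    public_llr = 0.0                   # perceived public belief (log-odds that theta = 1)
    actions = []
    for _ in range(n_agents):
        # draw a private signal that matches the true state with probability q
        signal = theta if rng.random() < q else 1 - theta
        private_llr = llr_signal if signal == 1 else -llr_signal
        # the agent chooses the action matching the state it believes more likely
        action = 1 if public_llr + private_llr > 0 else 0
        actions.append(action)
        # successors (mis)read the action as a signal of strength kappa * llr_signal
        public_llr += kappa * llr_signal if action == 1 else -kappa * llr_signal
    return np.array(actions), public_llr

if __name__ == "__main__":
    for kappa in (3.0, 1.0, 0.2):
        acts, llr = simulate(kappa)
        late_efficiency = acts[-100:].mean()   # share of efficient actions among the last 100 agents
        print(f"kappa={kappa:>4}: final public log-odds={llr:8.1f}, "
              f"late-agent efficiency={late_efficiency:.2f}")
```

With a large kappa, the perceived public log-odds jump past the strength of any private signal after only a few actions, so beliefs entrench on whichever action happened to come first; with a small kappa, observed actions carry little perceived information and agents lean mostly on their own signals, so learning from the action history is slow. This is only a stylized analogue of the entrenchment and incompleteness results stated in the abstract.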


Paper Number: 14-007
Year: 2014
Authored by