Social Learning with Model Misspecification: A Framework and a Robustness Result

We explore how model misspecification affects long-run learning in a sequential social learning setting. Individuals learn from diverse sources, including private signals, public signals, and the actions and outcomes of others. An agent's type specifies her model of the world. Misspecified types have incorrect beliefs about the signal distribution, about how other agents draw inferences, and/or about others' preferences. Our main result is a simple criterion, straightforward to derive from the primitives of the misspecification, that characterizes long-run learning outcomes. Depending on the nature of the misspecification, we show that learning may be correct or incorrect, or beliefs may fail to converge. Multiple degenerate limit beliefs may arise, and agents may asymptotically disagree despite observing the same sequence of information. We also establish that the correctly specified model is robust: agents with approximately correct models almost surely learn the true state. We close with a demonstration of how our framework can capture three broad categories of model misspecification: strategic misspecification, such as level-k and cognitive hierarchy reasoning; signal misspecification, such as partisan bias; and preference misspecification arising from social perception biases, such as the false consensus effect and pluralistic ignorance. For each case, we illustrate how to calculate the set of asymptotic learning outcomes and derive comparative statics for how this set changes with the parameters of the misspecification.
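To give a concrete flavor of the kind of setting the abstract describes, the sketch below simulates sequential Bayesian updating on a binary state when agents apply a misspecified signal likelihood, loosely in the spirit of the partisan-bias example. This is a minimal illustration under assumed parameters, not the paper's model or its learning criterion; the binary-state setup, the particular likelihood values, and the function `public_belief` are all hypothetical.

```python
# A minimal illustrative sketch (not the paper's model): sequential Bayesian
# updating on a binary state when agents use a misspecified signal likelihood,
# loosely in the spirit of the partisan-bias example. All parameter values and
# function names are assumptions chosen for illustration.

import math
import random


def stable_sigmoid(x):
    """Numerically stable logistic function, 1 / (1 + exp(-x))."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)


def public_belief(true_p1, believed_p1_given1, believed_p1_given0,
                  n_periods=20_000, seed=0):
    """Public belief that the state is 1 after n_periods of sequential updates.

    The true state is 1, and each signal equals 1 with probability true_p1.
    Agents update with the (possibly incorrect) likelihoods believed_p1_given1
    and believed_p1_given0 instead of the true signal distribution.
    """
    rng = random.Random(seed)
    log_odds = 0.0  # log P(state = 1) / P(state = 0); flat prior
    for _ in range(n_periods):
        signal = 1 if rng.random() < true_p1 else 0
        if signal == 1:
            log_odds += math.log(believed_p1_given1 / believed_p1_given0)
        else:
            log_odds += math.log((1 - believed_p1_given1) /
                                 (1 - believed_p1_given0))
    return stable_sigmoid(log_odds)


if __name__ == "__main__":
    # Correctly specified likelihoods: beliefs concentrate on the true state (1).
    print(public_belief(true_p1=0.6, believed_p1_given1=0.6, believed_p1_given0=0.4))
    # Misspecified ("partisan") likelihoods that over-attribute favorable signals
    # to state 1: the expected per-period drift of the log-odds turns negative,
    # so beliefs concentrate on the wrong state despite informative signals.
    print(public_belief(true_p1=0.6, believed_p1_given1=0.9, believed_p1_given0=0.5))
```

In this stylized example, the direction of learning is governed by the sign of the expected per-period increment of the public log-odds under the misspecified likelihoods, which is the spirit of deriving asymptotic outcomes from the primitives of the misspecification.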


Paper Number: 18-017
Year: 2018