Learning with Heterogeneous Misspecified Models: Characterization and Robustness
This paper develops a general framework to study how misinterpreting information impacts learning. Our main result is a simple criterion that characterizes long-run beliefs based on the underlying form of misspecification. We present this characterization in the context of social learning, then highlight how it applies to other learning environments, including individual learning. A key contribution is that our characterization applies to settings with model heterogeneity and provides conditions for entrenched disagreement. Our characterization can be used to determine whether a representative-agent approach is valid in the face of heterogeneity, study how differing levels of bias or unawareness of others' biases impact learning, and explore whether the impact of a bias is sensitive to parametric specification or the source of information. This unified framework synthesizes insights gleaned from previously studied forms of misspecification and provides novel insights in specific applications, as we demonstrate in settings with partisan bias, overreaction, naive learning, and level-k reasoning.