Behavioral Foundations of Model Misspecification
We link two approaches to biased belief formation: non-Bayesian updating rules and model misspecification. Each approach has advantages: updating rules transparently capture the underlying bias and are identifiable from belief data; misspecified models are 'complete' and amenable to general analysis. We show that misspecified models can be decomposed into an updating rule and a forecast of anticipated future beliefs. We derive necessary and sufficient conditions for an updating rule and forecast to have a misspecified model representation, show that the representation is unique, and construct it. This highlights the belief restrictions implicit in the misspecified model approach. Finally, we explore two ways to select forecasts, introspection-proof and naively consistent, and derive when a representation of each exists.
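To fix ideas, here is a minimal sketch of the two objects in the decomposition, in notation of our own choosing (none of it is fixed by the abstract): a misspecified model, i.e. a subjective likelihood $\hat{f}(s \mid \omega)$ over signals $s$ given states $\omega$, induces an updating rule by applying Bayes' rule to $\hat{f}$,
\[
  U(\mu, s)(\omega) \;=\; \frac{\hat{f}(s \mid \omega)\,\mu(\omega)}{\int_{\Omega} \hat{f}(s \mid \omega')\,\mu(d\omega')},
\]
while the induced forecast $F(\mu)$ is the distribution over next-period beliefs the agent anticipates: the law of $U(\mu, s)$ when $s$ is drawn from the subjective marginal $\int_{\Omega} \hat{f}(\cdot \mid \omega)\,\mu(d\omega)$. The representation question described in the abstract runs in the opposite direction: given a pair $(U, F)$, when does some $\hat{f}$ generate both?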