Common Learning with Intertemporal Dependence
Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. Will the agents commonly learn the value of the parameter, i.e., will the true value of the parameter become approximate common knowledge? If the signals are independent and identically distributed across time (but not necessarily across agents), the answer is yes (Cripps, Ely, Mailath, and Samuelson, 2008). This paper explores the implications of allowing the signals to be dependent over time. We present a counterexample showing that even extremely simple time dependence can preclude common learning, and we provide sufficient conditions under which common learning obtains.
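The individual-learning half of this setting can be illustrated with a minimal sketch. The modeling details below (binary parameter, conditionally iid binary signals with a fixed accuracy, uniform prior) are illustrative assumptions, not taken from the paper; the sketch shows only that each agent's private posterior converges to the truth, which is the individual learning that common learning strengthens.

```python
import random

def simulate_agent(theta, accuracy=0.7, periods=200, seed=0):
    """One agent's Bayesian learning of a binary parameter theta.

    Each period the agent sees a private binary signal equal to theta
    with probability `accuracy`, and updates a posterior on theta = 1
    starting from a uniform prior. Signals are iid across time.
    """
    rng = random.Random(seed)
    posterior = 0.5  # prior probability that theta = 1
    for _ in range(periods):
        signal = theta if rng.random() < accuracy else 1 - theta
        # Likelihood of the observed signal under each parameter value
        p_if_1 = accuracy if signal == 1 else 1 - accuracy
        p_if_0 = accuracy if signal == 0 else 1 - accuracy
        # Bayes' rule
        posterior = (p_if_1 * posterior
                     / (p_if_1 * posterior + p_if_0 * (1 - posterior)))
    return posterior

# Two agents with independent private signal streams each learn theta
agent1 = simulate_agent(theta=1, seed=1)
agent2 = simulate_agent(theta=1, seed=2)
print(agent1, agent2)  # both posteriors near 1
```

With iid signals, each posterior converges almost surely to the truth; the paper's question is whether each agent also becomes confident that the other is confident, and so on, once the iid assumption across time is dropped.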