Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. Will the agents commonly learn the value of the parameter, i.e., will the true value of the parameter become approximate common knowledge? If the signals are independent and identically distributed across time (but not necessarily across agents), the answer is yes (Cripps, Ely, Mailath, and Samuelson, 2008). This paper explores the implications of allowing the signals to be dependent over time. We present a counterexample showing that even extremely simple time dependence can preclude common learning, and we provide sufficient conditions for common learning.
Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. The signals are independent and identically distributed across time but not necessarily across agents. We show that when each agent's signal space is finite, the agents will commonly learn its value, i.e., the true value of the parameter will become approximate common knowledge. The argument rests on a contraction mapping property of the agents' conditional expectations of each other's signals. In contrast, if the agents' observations come from a countably infinite signal space, this contraction mapping property fails, and we show by example that common learning can then fail.