Non-Bayesian Social Learning
We develop a dynamic model of opinion formation in social networks. Relevant information is dispersed throughout the network in such a way that no agent has enough data to learn a payoff-relevant parameter on her own. Individuals communicate with their neighbors in order to learn from their experiences. However, instead of incorporating their neighbors' views in a fully Bayesian manner, agents use a simple updating rule that linearly combines their personal experience with the views of their neighbors (even though those views may be quite inaccurate). This non-Bayesian learning rule is motivated by the formidable complexity required to fully implement Bayesian updating in networks. We show that, under mild assumptions, repeated interactions lead agents to successfully aggregate information and learn the true underlying state of the world. This result holds in spite of the apparent naïveté of the agents' updating rule, the agents' need for information from sources (i.e., other agents) of whose existence they may not even be aware, the possibility that the most persuasive agents in the network are precisely those who are least informed and hold the least accurate prior views, and the assumption that no agent can tell whether her own views or her neighbors' views are more accurate.
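To make the updating rule concrete, the sketch below simulates a small network in which each agent's new belief is a convex combination of the Bayesian update on her own private signal and her neighbors' current beliefs. The influence matrix A, the signal likelihoods, the two-state setup, and all parameter values are illustrative assumptions rather than the paper's specification; only agent 0 receives informative signals, so the remaining agents can learn the true state only through the network.

```python
import numpy as np

# Minimal sketch of the linear (non-Bayesian) updating rule described above:
# new belief = self-weight * Bayesian update on own signal
#            + neighbor weights * neighbors' current beliefs.
# All parameters below are illustrative assumptions, not the paper's model.

rng = np.random.default_rng(0)

n_agents, n_states = 4, 2
true_state = 0

# Row-stochastic influence matrix: A[i, j] is the weight agent i places on agent j.
A = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
])

# likelihood[i, theta, s] = P(agent i observes signal s | state theta).
# Only agent 0's signals distinguish the two states; the others are uninformative.
likelihood = np.full((n_agents, n_states, 2), 0.5)
likelihood[0, 0] = [0.7, 0.3]
likelihood[0, 1] = [0.3, 0.7]

beliefs = np.full((n_agents, n_states), 1.0 / n_states)  # uniform priors

for t in range(500):
    # Each agent draws a private signal from its likelihood under the true state.
    signals = np.array([
        rng.choice(2, p=likelihood[i, true_state]) for i in range(n_agents)
    ])
    # Bayesian update of each agent's own belief given its private signal.
    bayes = beliefs * likelihood[np.arange(n_agents), :, signals]
    bayes /= bayes.sum(axis=1, keepdims=True)
    # Linear combination: own Bayesian update plus neighbors' current beliefs.
    self_w = np.diag(np.diag(A))
    beliefs = self_w @ bayes + (A - self_w) @ beliefs

print(beliefs.round(3))  # each row should concentrate mass on the true state
```

Under this (assumed) strongly connected network, the simulation illustrates the aggregation result: agents whose own signals carry no information about the state still end up placing nearly all belief on the true state, purely through repeated linear averaging with their neighbors.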