A Foundation for Markov Equilibria in Infinite Horizon Perfect Information Games
We study perfect information games with an infinite horizon played by an arbitrary number of players. This class of games includes infinitely repeated perfect information games, repeated games with asynchronous moves, games with long- and short-run players, games with overlapping generations of players, and canonical non-cooperative models of bargaining. We consider two restrictions on equilibria. An equilibrium is purifiable if close-by behavior is consistent with equilibrium when agents' payoffs at each node are perturbed additively and independently. An equilibrium has bounded recall if there exists K such that at most one player's strategy depends on what happened more than K periods earlier. We show that only Markov equilibria have bounded recall and are purifiable. Thus, if a game has at most one long-run player, all purifiable equilibria are Markov.
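As an informal sketch of the bounded recall condition, under assumed notation not taken from the paper itself: let $h^t$ denote the history of play at date $t$, let $h^t|_K$ denote its truncation to the last $K$ periods, and let $\sigma_i$ denote player $i$'s strategy. A strategy profile has bounded recall if
\[
\exists K \;\exists j \;\forall i \neq j \;\forall t \;\forall h^t, \hat h^t:\qquad
h^t|_K = \hat h^t|_K \;\Longrightarrow\; \sigma_i(h^t) = \sigma_i(\hat h^t),
\]
that is, at most one player ($j$) may condition behavior on events more than $K$ periods in the past.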