We study wealth disparities in the formation of anthropometrics, cognitive skills, and socio-emotional skills, using a sample of preschool and early school children in Chile. We extend the previous literature by using longitudinal data, which allow us to study the dynamics of child growth and skills formation, and by including information on mother's and father's schooling attainment and mother's cognitive ability. We find no significant anthropometric differences favoring the better-off at birth (indeed, length differences at birth favor the less well-off), but during the first 30 months of life wealth disparities in height-for-age z scores (HAZ) favoring the better-off emerge. Moreover, wealth disparities in cognitive skills favoring the better-off emerge early in life and persist after children turn 6 years of age. We find no concurrent wealth disparities for socio-emotional skills. Thus, even though the wealth disparities in birth outcomes, if anything, favor the poor, significant disparities favoring the rich emerge in the early post-natal period. Mother's education and cognitive ability are also significantly associated with disparities in skill formation. Download Paper
Are nominal prices sticky because menu costs prevent sellers from continuously adjusting their prices to keep up with inflation, or because search frictions make sellers indifferent to any real price over some non-degenerate interval? The paper answers the question by developing and calibrating a model in which both search frictions and menu costs may generate price stickiness and sellers are subject to idiosyncratic shocks. The equilibrium of the calibrated model is such that sellers follow a (Q,S,s) pricing rule: each seller lets inflation erode the effective real value of its nominal price until it reaches some point s, and then pays the menu cost and sets a new nominal price with an effective real value drawn from a distribution with support [S,Q], with s < S < Q. Idiosyncratic shocks short-circuit the repricing cycle and may lead to negative price changes. The calibrated model closely reproduces the properties of the empirical price and price-change distributions, and implies that search frictions are the main source of nominal price stickiness. Download Paper
Despite the clear success of forecast combination in many economic environments, several important issues remain incompletely resolved. The issues relate to selection of the set of forecasts to combine, and whether some form of additional regularization (e.g., shrinkage) is desirable. Against this background, and also considering the frequently-found superiority of simple-average combinations, we propose LASSO-based procedures that select and shrink toward equal combining weights. We then provide an empirical assessment of the performance of our "egalitarian LASSO" procedures. The results indicate that simple averages are highly competitive, and that although out-of-sample RMSE improvements on simple averages are possible in principle using our methods, they are hard to achieve in real time, due to the intrinsic difficulty of small-sample real-time cross validation of the LASSO tuning parameter. We therefore propose alternative direct combination procedures, most notably "best average" combination, motivated by the structure of egalitarian LASSO and the lessons learned, which do not require choice of a tuning parameter yet outperform simple averages. Download Paper
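As a rough illustration of the "shrink toward equal weights" idea, the sketch below reparameterizes the combining weights as deviations from 1/N and applies a standard L1 penalty to those deviations. This is only a minimal stand-in for the egalitarian LASSO procedures described above; the simulated data, the penalty value, and the use of scikit-learn's Lasso are illustrative assumptions.

```python
# Minimal sketch: shrink forecast-combination weights toward 1/N with an L1 penalty.
# Writing w = 1/N + delta and penalizing |delta| shrinks weights toward the
# equal-weight benchmark rather than toward zero.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, N = 200, 10                                       # sample size, number of forecasters (illustrative)
y = rng.normal(size=T)                               # target series (placeholder data)
F = y[:, None] + rng.normal(scale=1.0, size=(T, N))  # N noisy forecasts of y

equal_w = np.full(N, 1.0 / N)
r = y - F @ equal_w                                  # residual of the simple-average combination

lasso = Lasso(alpha=0.05, fit_intercept=False)       # alpha is an illustrative tuning value
lasso.fit(F, r)
w = equal_w + lasso.coef_                            # combining weights shrunk toward 1/N

print("weights:", np.round(w, 3), " sum:", round(float(w.sum()), 3))
```

Choosing alpha by real-time cross validation is exactly the difficulty the abstract emphasizes; the proposed "best average" combinations avoid that tuning step.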
We argue that political distribution risk is an important driver of aggregate fluctuations. To that end, we document significant changes in the capital share after large political events, such as political realignments, modifications in collective bargaining rules, or the end of dictatorships, in a sample of developed and emerging economies. These policy changes are associated with significant fluctuations in output and asset prices. Using a Bayesian proxy-VAR estimated with U.S. data, we show how distribution shocks cause movements in output, unemployment, and sectoral asset prices. To quantify the importance of these political shocks for the U.S. as a whole, we extend an otherwise standard neoclassical growth model. We model political shocks as exogenous changes in the bargaining power of workers in a labor market with search and matching. We calibrate the model to the U.S. corporate non-financial business sector and back out the evolution of the bargaining power of workers over time using a new methodological approach, the partial filter. We show how the estimated shocks agree with the historical narrative evidence. We document that bargaining shocks account for 34% of aggregate fluctuations. Download Paper
Sovereign bonds are highly divisible, usually of uncertain quality, and auctioned in large lots to a large number of investors. This leads us to assume that no individual bidder can affect the bond price, and to develop a tractable Walrasian theory of Treasury auctions in which investors are asymmetrically informed about the quality of the bond. We characterize the price of the bond for different degrees of asymmetric information, both under discriminatory-price (DP) and uniform-price (UP) protocols. We endogenize information acquisition and show that DP protocols are likely to induce multiple equilibria, one of which features asymmetric information, while UP protocols are unlikely to sustain equilibria with asymmetric information. This result has welfare implications: asymmetric information negatively affects the level, dispersion and volatility of sovereign bond prices, particularly in DP protocols. Download Paper
This paper argues that institutions and political party systems are simultaneously determined. A large change to the institutional framework, such as the creation of the euro by a group of European countries, will, after a transition period, realign the party system as well. The new political landscape may not be compatible with the institutions that triggered it. To illustrate this point, we study the case of the euro and how the party system has evolved in Southern and Northern European countries in response to it. Download Paper
We study stochastic choice as the outcome of deliberate randomization. After first deriving a general representation of a stochastic choice function with this property, we proceed to characterize a model in which the agent has preferences over lotteries that belong to the Cautious Expected Utility class (Cerreia Vioglio et al., 2015), and the stochastic choice is the optimal mix among available options. This model links stochasticity of choice and the phenomenon of Certainty Bias, with both behaviors stemming from the same source: multiple utilities and caution. We show that this model is behaviorally distinct from models of Random Utility, as it typically violates the property of Regularity, shared by all of them. Download Paper
We study theoretically and empirically how consumers in an individual private long-term health insurance market with front-loaded contracts respond to newly mandated portability requirements of their old-age provisions. To foster competition, effective 2009, the German legislature made the portability of standardized old-age provisions mandatory. Our theoretical model predicts that the portability reform will increase internal plan switching. However, under plausible assumptions, it will not increase external insurer switching. Moreover, the portability reform will enable unhealthier enrollees to reoptimize their plans. We find confirmatory evidence for the theoretical predictions using claims panel data from a large private insurer. Download Paper
This paper considers infinite-horizon stochastic games with hidden states and hidden actions. The state changes over time, players observe only a noisy public signal about the state each period, and actions are private information. In this model, uncertainty about the monitoring structure does not disappear. We show how to construct an approximately efficient equilibrium in a repeated Cournot game. Then we extend it to a general case and obtain the folk theorem using ex-post equilibria under a mild condition. Download Paper
Since the chance of swaying the outcome of an election by voting is usually very small, it cannot be that voters vote solely for that purpose. So why do we vote? One explanation is that smarter or more educated voters have access to better information about the candidates, and are concerned with appearing to have better information about the candidates through their choice of whether to vote or not. If voting behavior is publicly observed, then more educated voters may vote to signal their education, even if the election itself is inconsequential and the cost of voting is the same across voters. I explore this explanation with a model of voting where players are unsure about the importance of swaying the election and high-type players receive more precise signals. I introduce a new information ordering, a weakening of Blackwell's order, to formalize the notion of information precision. Once voting has occurred, players visit a labor market and are paid the expected value of their type, conditioning only on their voting behavior. I find that in very large games, voter turnout and the signaling return to voting remain high even though the chance of swaying the election disappears and the cost of voting is the same for all types. I explore generalizations of this model, and close by comparing the stylized features of voter turnout to the features of the model. Download Paper
A fad is something that is popular for a time, then unpopular. For example, in the 1960s tailfins on cars were popular; in the 1970s they were not. I study a model in which fads are driven through the channel of imperfect information. Some players have better information about past actions of other players, and all players have preferences for choosing the same actions as well-informed players. In equilibrium, better informed (high-type) players initially pool on a single action choice. Over time, the low-type players learn which action the high-type players are pooling on, and start to mimic them. Once a tipping point is reached, the high-type players switch to a different action, and the process repeats. I explicitly compute equilibria for a specific parameterization of the model. Low-type players display instrumental preferences for conformity, choosing actions which appear more popular, while high-type players sometimes coordinate on actions which appear unpopular. Improving the quality of information available to low-type players does not improve their payoffs, but increases the rate at which high-type players switch between actions. Download Paper
A safe asset’s real value is insulated from shocks, including declines in GDP from rare macroeconomic disasters. However, in a Lucas-tree world, the aggregate risk is given by the process for GDP and cannot be altered by the creation of safe assets. Therefore, in the equilibrium of a representative-agent version of this economy, the quantity of safe assets will be nil. With heterogeneity in coefficients of relative risk aversion, safe assets can take the form of private bond issues from low-risk-aversion to high-risk-aversion agents. The model assumes Epstein-Zin/Weil preferences with common values of the intertemporal elasticity of substitution and the rate of time preference. The model achieves stationarity by allowing for random shifts in coefficients of relative risk aversion. We derive the equilibrium values of the ratio of safe to total assets, the shares of each agent in equity ownership and wealth, and the rates of return on safe and risky assets. In a baseline case, the steady-state risk-free rate is 1.0% per year, the unlevered equity premium is 4.2%, and the quantity of safe assets ranges up to 15% of economy-wide assets (comprising the capitalized value of GDP). A disaster shock leads to an extended period in which the share of wealth held by the low-risk-averse agent and the risk-free rate are low but rising, and the ratio of safe to total assets is high but falling. In the baseline model, Ricardian Equivalence holds in that added government bonds have no effect on rates of return and the net quantity of safe assets. Surprisingly, the crowding-out coefficient for private bonds with respect to public bonds is not 0 or -1 but around -0.5, a value found in some existing empirical studies. Download Paper
We explore model misspecification in an observational learning framework. Individuals learn from private and public signals and the actions of others. An agent's type specifies her model of the world. Misspecified types have incorrect beliefs about the signal distribution, how other agents draw inference and/or others' payoffs. We establish that the correctly specified model is robust in that agents with approximately correct models almost surely learn the true state asymptotically. We develop a simple criterion to identify the asymptotic learning outcomes that arise when misspecification is more severe. Depending on the nature of the misspecification, learning may be correct, incorrect or beliefs may not converge. Different types may asymptotically disagree, despite observing the same sequence of information. This framework captures behavioral biases such as confirmation bias, false consensus effect, partisan bias and correlation neglect, as well as models of inference such as level-k and cognitive hierarchy. Download Paper
This paper constructs individual-specific density forecasts for a panel of firms or households using a dynamic linear model with common and heterogeneous coefficients and cross-sectional heteroskedasticity. The panel considered in this paper features a large cross-sectional dimension (N) but short time series (T). Due to short T, traditional methods have difficulty disentangling the heterogeneous parameters from the shocks, which contaminates the estimates of the heterogeneous parameters. To tackle this problem, I assume that there is an underlying distribution of heterogeneous parameters, model this distribution nonparametrically allowing for correlation between heterogeneous parameters and initial conditions as well as individual-specific regressors, and then estimate this distribution by pooling the information from the whole cross-section together. I develop a simulation-based posterior sampling algorithm specifically addressing the nonparametric density estimation of unobserved heterogeneous parameters. I prove that both the estimated common parameters and the estimated distribution of the heterogeneous parameters achieve posterior consistency, and that the density forecasts asymptotically converge to the oracle forecast, an (infeasible) benchmark that is defined as the individual-specific posterior predictive distribution under the assumption that the common parameters and the distribution of the heterogeneous parameters are known. Monte Carlo simulations demonstrate improvements in density forecasts relative to alternative approaches. An application to young firm dynamics also shows that the proposed predictor provides more accurate density predictions. Download Paper
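One stylized way to write down the kind of model described above is sketched below; the AR(1)-style timing, the notation, and the within-unit homoskedasticity are illustrative simplifications rather than the paper's exact specification.

```latex
% Stylized dynamic linear panel with common and heterogeneous coefficients (illustrative)
y_{it} = \beta' x_{i,t-1} + \lambda_i' w_{i,t-1} + u_{it},
\qquad u_{it} \sim N(0, \sigma_i^2),
\qquad (\lambda_i, \sigma_i^2) \mid y_{i0}, c_i \;\sim\; \Pi(\,\cdot \mid y_{i0}, c_i\,).
```

Here \beta collects the common coefficients, (\lambda_i, \sigma_i^2) are unit-specific, and the distribution \Pi, which may depend on the initial condition y_{i0} and unit-specific regressors c_i, is modeled nonparametrically and estimated by pooling the whole cross section.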
We analyze how the life settlement market - the secondary market for life insurance - may affect consumer welfare in a dynamic equilibrium model of life insurance with one-sided commitment and overconfident policyholders. As in Daily et al. (2008) and Fang and Kung (2010), policyholders may lapse their life insurance policies when they lose their bequest motives; but in our model the policyholders may underestimate their probability of losing their bequest motive, or be overconfident about their future mortality risks. For the case of overconfidence with respect to bequest motives, we show that in the absence of life settlement overconfident consumers may buy "too much" reclassification risk insurance for later periods in the competitive equilibrium. In contrast, when consumers are overconfident about their future mortality rates in the sense that they put too high a subjective probability on the low-mortality state, the competitive equilibrium contract in the absence of life settlement exploits the consumer bias by offering them very high face amounts only in the low-mortality state. In both cases, the life settlement market can impose a discipline on the extent to which overconfident consumers can be exploited by the primary insurers. We show that life settlement may increase the equilibrium consumer welfare of overconfident consumers when they are sufficiently vulnerable in the sense that they have a sufficiently large intertemporal elasticity of substitution of consumption. Download Paper
Received auction theory prescribes that a reserve price which maximizes expected profit should be no less than the seller's own value for the auctioned object. In contrast, a common empirical observation is that many auctions have reserve prices set below seller's values, even at zero. This paper revisits the theory to find a potential resolution of the puzzle for second-price auctions. The main result is that an optimal reserve price may be less than the seller's value if bidders are risk averse and have interdependent values. Moreover, the resulting outcome may be arbitrarily close to that of an auction that has no reserve price, an absolute auction. Download Paper
We use variance decompositions from high-dimensional vector autoregressions to characterize connectedness in 19 key commodity return volatilities, 2011-2016. We study both static (full-sample) and dynamic (rolling-sample) connectedness. We summarize and visualize the results using tools from network analysis. The results reveal clear clustering of commodities into groups that match traditional industry groupings, but with some notable differences. The energy sector is most important in terms of sending shocks to others, and energy, industrial metals, and precious metals are themselves tightly connected. Download Paper
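To fix ideas, the sketch below computes standard "to others", "from others", and total connectedness measures from a generic forecast-error variance decomposition matrix, in the spirit of the variance-decomposition approach described above. The matrix here is random placeholder data, not the paper's estimates, and the construction omits the VAR estimation step.

```python
# Minimal sketch: connectedness measures from a forecast-error variance decomposition.
# D[i, j] is taken to be the share of variable i's H-step forecast-error variance
# attributable to shocks to variable j (each row sums to one). Placeholder data only.
import numpy as np

rng = np.random.default_rng(1)
n = 5                                     # number of series (19 commodities in the paper)
D = rng.dirichlet(np.ones(n), size=n)     # random row-stochastic matrix as a stand-in

off_diag = D - np.diag(np.diag(D))
from_others = off_diag.sum(axis=1)        # variance share received from other variables
to_others = off_diag.sum(axis=0)          # variance share transmitted to other variables
net = to_others - from_others
total_connectedness = off_diag.sum() / n  # system-wide connectedness index

print("from others:", np.round(from_others, 2))
print("to others:  ", np.round(to_others, 2))
print("net:        ", np.round(net, 2))
print("total connectedness:", round(float(total_connectedness), 2))
```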
Quantitative analysis of a New Keynesian model with the Bernanke-Gertler accelerator and risk shocks shows that violations of Tinbergen’s Rule and strategic interaction between policy-making authorities undermine significantly the effectiveness of monetary and financial policies. Separate monetary and financial policy rules, with the latter subsidizing lenders to encourage lending when credit spreads rise, produce higher welfare and smoother business cycles than a monetary rule augmented with credit spreads. The latter yields a tight money-tight credit regime in which the interest rate responds too much to inflation and not enough to adverse credit conditions. Reaction curves for the choice of policy-rule elasticity that minimizes each authority’s loss function given the other authority’s elasticity are nonlinear, reflecting shifts from strategic substitutes to complements in setting policy-rule parameters. The Nash equilibrium is significantly inferior to the Cooperative equilibrium, both are inferior to a first-best outcome that maximizes welfare, and both produce tight money-tight credit regimes. Download Paper
A principal wishes to distribute an indivisible good to a population of budget-constrained agents. Both valuation and budget are an agent's private information. The principal can inspect an agent's budget through a costly verification process and punish an agent who makes a false statement. I characterize the direct surplus-maximizing mechanism. This direct mechanism can be implemented by a two-stage mechanism in which agents only report their budgets. Specifically, all agents report their budgets in the first stage. The principal then provides budget-dependent cash subsidies to agents and assigns the goods randomly (with uniform probability) at budget-dependent prices. In the second stage, a resale market opens, but is regulated with budget-dependent sales taxes. Agents who report low budgets receive more subsidies in their initial purchases (the first stage), face higher taxes in the resale market (the second stage), and are inspected randomly. This implementation exhibits features of some welfare programs, such as Singapore's Housing and Development Board. Download Paper
What do we know about the economic consequences of labor market regulations? Few economic policy questions are as contentious as labor market regulations. The effects of minimum wages, collective bargaining provisions, and hiring/firing restrictions generate heated debates in the U.S. and other advanced economies. And yet, establishing empirical lessons about the consequences of these regulations is surprisingly difficult. In this paper, I explain some of the reasons why this is the case, and I critically review the recent findings regarding the effects of minimum wages on employment. Contrary to often asserted statements, the preponderance of the evidence still points toward a negative impact of permanently high minimum wages. Download Paper
This paper develops a theory of asset intermediation as a pure rent extraction activity. Agents meet bilaterally in a random fashion. Agents differ with respect to their valuation of the asset's dividends and with respect to their ability to commit to take-it-or-leave-it offers. In equilibrium, agents with commitment behave as intermediaries, while agents without commitment behave as end users. Agents with commitment intermediate the asset market only because they can extract more of the gains from trade when reselling or repurchasing the asset. We study the extent of intermediation as a rent extraction activity by examining the agent's decision to invest in a technology that gives them commitment. We find that multiple equilibria may emerge, with different levels of intermediation and with lower welfare in equilibria with more intermediation. We find that a decline in trading frictions leads to more intermediation and typically lower welfare, and so does a decline in the opportunity cost of acquiring commitment. A transaction tax can restore efficiency. Download Paper
This paper formalizes the optimal design of randomized controlled trials (RCTs) in the presence of interference between units, where an individual's outcome depends on the behavior and outcomes of others in her group. We focus on randomized saturation (RS) designs, which are two-stage RCTs that first randomize the treatment saturation of a group, then randomize individual treatment assignment. Our main contributions are to map the potential outcomes framework with partial interference to a regression model with clustered errors, calculate the statistical power of different RS designs, and derive analytical insights for how to optimally design an RS experiment. We show that the power to detect average treatment effects declines precisely with the ability to identify novel treatment and spillover estimands, such as how effects vary with the intensity of treatment. We provide software that assists researchers in designing RS experiments. Download Paper
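The two-stage structure is easy to see in a small assignment sketch: groups are first assigned a treatment saturation, and individuals within each group are then assigned to treatment with that probability. The group sizes, saturation bins, and seed below are purely illustrative, not the paper's recommended design.

```python
# Minimal sketch of a randomized saturation (RS) design:
#   stage 1: randomize each group's treatment saturation
#   stage 2: randomize individual treatment within the group at that saturation
import numpy as np

rng = np.random.default_rng(2)
n_groups, group_size = 40, 25
saturations = np.array([0.0, 0.25, 0.50, 0.75])     # illustrative saturation bins

group_sat = rng.choice(saturations, size=n_groups)                  # stage 1
treated = rng.random((n_groups, group_size)) < group_sat[:, None]   # stage 2

# Untreated individuals in partially treated groups identify spillover effects;
# pure-control groups (saturation 0) provide the no-interference benchmark.
for s in saturations:
    mask = group_sat == s
    share = treated[mask].mean() if mask.any() else float("nan")
    print(f"saturation {s:.2f}: {mask.sum():2d} groups, treated share {share:.2f}")
```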
This paper studies how persistence can be used to create incentives in a continuous-time stochastic game in which a long-run player interacts with a sequence of short-run players. Observation of the long-run player's actions is distorted by a Brownian motion, and the actions of both players impact future payoffs through a state variable. For example, a firm or worker provides customers with a product, and the quality of this product depends on both current and past investment choices by the firm. I derive general conditions under which a Markov equilibrium emerges as the unique perfect public equilibrium, and characterize the equilibrium payoff and actions in this equilibrium, for any discount rate. I develop an application of persistent product quality to illustrate how persistence creates effective intertemporal incentives in a setting where traditional channels fail, and explore how the structure of persistence impacts equilibrium behavior. This demonstrates the power of the continuous-time setting to deliver sharp insights and a tractable equilibrium characterization for a rich class of dynamic games. Download Paper
A firm employs workers to obtain costly unverifiable information - for example, categorizing the content of images. Workers are monitored by comparing their messages. The optimal contract under limited liability exhibits three key features: (i) the monitoring technology depends crucially on the commitment power of the firm - virtual monitoring, or monitoring with arbitrarily small probability, is optimal when the firm can commit to truthfully reveal messages from other workers, while monitoring with strictly positive probability is optimal when the firm can hide messages (partial commitment); (ii) bundling multiple tasks reduces worker rents and monitoring inefficiencies; and (iii) the optimal contract is approximately efficient under full but not partial commitment. We conclude with an application to crowdsourcing platforms, and characterize the optimal contract for tasks found on these platforms. Download Paper
This paper considers the problem of forecasting a collection of short time series using cross sectional information in panel data. We construct point predictors using Tweedie's formula for the posterior mean of heterogeneous coefficients under a correlated random effects distribution. This formula utilizes cross-sectional information to transform the unit-specific (quasi) maximum likelihood estimator into an approximation of the posterior mean under a prior distribution that equals the population distribution of the random coefficients. We show that the risk of a predictor based on a non-parametric estimate of the Tweedie correction is asymptotically equivalent to the risk of a predictor that treats the correlated-random-effects distribution as known (ratio-optimality). Our empirical Bayes predictor performs well compared to various competitors in a Monte Carlo study. In an empirical application we use the predictor to forecast revenues for a large panel of bank holding companies and compare forecasts that condition on actual and severely adverse macroeconomic conditions. Download Paper
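For reference, the scalar, homoskedastic version of Tweedie's formula that underlies the correction described above: if the unit-specific estimator \hat\lambda_i is approximately normal around the true coefficient, \hat\lambda_i | \lambda_i ~ N(\lambda_i, \sigma^2), then the posterior mean depends on the data only through the marginal density p of \hat\lambda_i (this simplified statement abstracts from the paper's heteroskedasticity and correlated random effects):

```latex
% Tweedie's formula for the posterior mean under normal noise (scalar, homoskedastic case)
E\!\left[\lambda_i \mid \hat\lambda_i\right]
  = \hat\lambda_i + \sigma^2 \,\frac{\partial}{\partial \hat\lambda_i}\,\log p\!\left(\hat\lambda_i\right).
```

An estimate of the cross-sectional marginal density p is therefore all that is needed to approximate the posterior mean, which is how the cross section informs each unit-specific forecast.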
Using data from the Employment Opportunity Pilot Project, we examine the relationship between the starting wage paid to the worker filling a vacancy, the number of applications attracted by the vacancy, the number of candidates interviewed for the vacancy, and the duration of the vacancy. We find that the wage is positively related to the duration of a vacancy and negatively related to the number of applications and interviews per week. We show that these surprising findings are consistent with a view of the labor market in which firms post wages and workers direct their search based on these wages if workers and jobs are heterogeneous and the interaction between worker’s type and job’s type in production satisfies some rather natural assumptions. Download Paper
Macroprudential policy holds the promise of becoming a powerful tool for preventing financial crises. Financial amplification in response to domestic shocks or global spillovers and pecuniary externalities caused by Fisherian collateral constraints provide a sound theoretical foundation for this policy. Quantitative studies show that models with these constraints replicate key stylized facts of financial crises, and that the optimal financial policy of an ideal constrained-efficient social planner reduces sharply the magnitude and frequency of crises. Research also shows, however, that implementing effective macroprudential policy still faces serious hurdles. This paper highlights three of them: (i) complexity, because the optimal policy responds widely and non-linearly to movements in both domestic factors and global spillovers due to regime shifts in global liquidity, news about global fundamentals, and recurrent innovation and regulatory changes in world markets, (ii) lack of credibility, because of time-inconsistency of the optimal policy under commitment, and (iii) coordination failure, because a careful balance with monetary policy is needed to avoid quantitatively large inefficiencies resulting from violations of Tinbergen’s rule or strategic interaction between monetary and financial authorities. Download Paper
Infrequent but turbulent episodes of outright sovereign default on domestic creditors are considered a “forgotten history” in Macroeconomics. We propose a heterogeneous-agents model in which optimal debt and default on domestic and foreign creditors are driven by distributional incentives and endogenous default costs due to the value of debt for self-insurance, liquidity, and risk-sharing. The government’s aim to redistribute resources across agents and through time in response to uninsurable shocks produces a rich dynamic feedback mechanism linking debt issuance, the distribution of government bond holdings, the default decision, and risk premia. Calibrated to Spanish data, the model is consistent with key cyclical co-movements and features of debt-crisis dynamics. Debt exhibits protracted fluctuations. Defaults have a low frequency of 0.93 percent, are preceded by surging debt and spreads, and occur with relatively low external debt. Default risk limits the sustainable debt and yet spreads are zero most of the time. Download Paper
A law prohibiting a particular behavior does not directly change the payoff to an individual should he engage in the prohibited behavior. Rather, any change in the individual’s payoff, should he engage in the prohibited behavior, is a consequence of changes in other peoples’ behavior. If laws do not directly change payoffs, they are “cheap talk,” and can only affect behavior because people have coordinated beliefs about the effects of the law. Beginning from this point of view, we provide definitions of authority in a variety of problems, and investigate how and when individuals can have, gain, and lose authority. Download Paper
The accuracy of particle filters for nonlinear state-space models crucially depends on the proposal distribution that mutates time t − 1 particle values into time t values. In the widely-used bootstrap particle filter this distribution is generated by the state-transition equation. While straightforward to implement, the practical performance is often poor. We develop a self-tuning particle filter in which the proposal distribution is constructed adaptively through a sequence of Monte Carlo steps. Intuitively, we start from a measurement error distribution with an inflated variance, and then gradually reduce the variance to its nominal level in a sequence of steps that we call tempering. We show that the filter generates an unbiased and consistent approximation of the likelihood function. Holding the run time fixed, our filter is substantially more accurate in two DSGE model applications than the bootstrap particle filter. Download Paper
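A minimal sketch of the tempering idea for a scalar linear-Gaussian model is below: the measurement-error variance starts inflated and is reduced to its nominal level over a fixed schedule, with resampling between stages. The model, schedule, and particle count are illustrative, and the sketch omits the mutation (Metropolis-Hastings) step and the adaptive choice of the schedule that the actual filter uses.

```python
# Minimal sketch of tempering in a particle filter for a scalar linear-Gaussian model:
#   s_t = rho * s_{t-1} + e_t,  e_t ~ N(0, sig_s^2)
#   y_t = s_t + u_t,            u_t ~ N(0, sig_u^2)
# The tempered measurement density at stage phi has variance sig_u^2 / phi, so small
# phi corresponds to an inflated variance; phi is raised to 1 in a fixed sequence.
import numpy as np

rng = np.random.default_rng(3)
rho, sig_s, sig_u = 0.9, 0.5, 0.3
T, M = 50, 1000
phis = [0.1, 0.3, 0.6, 1.0]                # illustrative tempering schedule, ends at 1

# simulate data
s, y = 0.0, np.zeros(T)
for t in range(T):
    s = rho * s + sig_s * rng.normal()
    y[t] = s + sig_u * rng.normal()

particles = np.zeros(M)
loglik = 0.0
for t in range(T):
    particles = rho * particles + sig_s * rng.normal(size=M)   # bootstrap propagation
    err2 = (y[t] - particles) ** 2 / sig_u**2
    phi_prev = 0.0
    for phi in phis:
        if phi_prev == 0.0:   # first stage: full (tempered) measurement density
            logw = -0.5 * np.log(2 * np.pi * sig_u**2 / phi) - 0.5 * phi * err2
        else:                 # later stages: ratio of successive tempered densities
            logw = 0.5 * np.log(phi / phi_prev) - 0.5 * (phi - phi_prev) * err2
        shifted = np.exp(logw - logw.max())
        loglik += logw.max() + np.log(shifted.mean())           # likelihood increment
        idx = rng.choice(M, size=M, p=shifted / shifted.sum())  # resample
        particles, err2 = particles[idx], err2[idx]
        phi_prev = phi

print("log-likelihood estimate:", round(float(loglik), 2))
```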
A large empirical literature found that the correlation between insurance purchase and ex post realization of risk is often statistically insignificant or negative. This is inconsistent with the predictions from the classic models of insurance a la Akerlof (1970), Pauly (1974) and Rothschild and Stiglitz (1976), where consumers have one-dimensional heterogeneity in their risk types. It has been suggested that selection based on multidimensional private information, e.g., risk and risk preference types, may be able to explain the empirical findings. In this paper, we systematically investigate whether selection based on multidimensional private information in risk and risk preferences can, under different market structures, result in a negative correlation in equilibrium between insurance coverage and ex post realization of risk. We show that if the insurance market is perfectly competitive, selection based on multidimensional private information does not result in a negative correlation property in equilibrium, unless there is a sufficiently high loading factor. If the insurance market is monopolistic or imperfectly competitive, however, we show that it is possible to generate a negative correlation property in equilibrium when risk and risk preference types are sufficiently negatively dependent, a notion we formalize using the concept of copula. We also clarify the connections between some of the important concepts such as adverse/advantageous selection and positive/negative correlation property. Download Paper
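In copula notation (a generic statement, not the paper's specific parameterization), Sklar's theorem lets the joint distribution of the risk type r and the risk-preference type a be written as

```latex
% Joint distribution of risk and risk-preference types via a copula (generic form)
H(r, a) = C\big(F(r),\, G(a)\big),
```

where F and G are the marginal distributions and C is the copula; "sufficiently negatively dependent" can then be stated as a condition on C, for instance a Gaussian copula with a sufficiently negative correlation parameter.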
Individuals often tend to conform to the choices of others in group decisions, compared to choices made in isolation, giving rise to phenomena such as group polarization and the bandwagon effect. We show that this behavior, which we term the consensus effect, is equivalent to a well-known violation of expected utility, namely strict quasi-convexity of preferences. In contrast to the equilibrium outcome when individuals are expected utility maximizers, quasi-convexity of preferences implies that group decisions may fail to properly aggregate preferences and that strictly Pareto-dominated equilibria may arise. Moreover, these problems become more severe as the size of the group grows. Download Paper
I consider a model in which a firm invests both in product quality and in a costly signaling technology, and the firm's reputation is the market's belief that its quality is high. The firm influences the rate at which consumers receive information about quality: the firm can either promote, which increases the arrival rate of signals when quality is high, or censor, which decreases the arrival rate of signals when quality is low. I study how the firm's incentives to build quality and signal depend on its reputation and current quality. The firm's ability to promote or censor plays a key role in the structure of equilibria. Promotion and investment in quality are complements: the firm has stronger incentives to build quality when the promotion level is high. Costly promotion can, however, reduce the firm's incentive to build quality; this effect persists even as the cost of building quality approaches zero. Censorship and investment in quality are substitutes. The ability to censor can destroy a firm's incentives to invest in quality, because it can reduce information about poor quality products. Download Paper
We study dynamic moral hazard with symmetric ex ante uncertainty about the difficulty of the job. The principal and agent update their beliefs about the difficulty as they observe output. Effort is private and the principal can only offer spot contracts. The agent has an additional incentive to shirk beyond the disutility of effort when the principal induces effort: shirking results in the principal having incorrect beliefs. We show that the effort inducing contract must provide increasingly high powered incentives as the length of the relationship increases. Thus it is never optimal to always induce effort in very long relationships. Download Paper
This paper studies infinite-horizon stochastic games in which players observe payoffs and noisy public information about a hidden state each period. We find that, very generally, the feasible and individually rational payoff set is invariant to the initial prior about the state in the limit as the discount factor goes to one. This result ensures that players can punish or reward the opponents via continuation payoffs in a flexible way. Then we prove the folk theorem, assuming that public randomization is available. The proof is constructive, and introduces the idea of random blocks to design an effective punishment mechanism. Download Paper