Working Papers
By Year:
Paper #  Author  Title  

13030 
Francesco Bianchi Leonardo Melosi 
"Modeling the Evolution of Expectations and Uncertainty in General Equilibrium"  
This paper develops methods to study the evolution of agents’ expectations and uncertainty in general equilibrium models. A central insight consists of recognizing that the evolution of agents’ beliefs can be captured by defining a set of regimes that are characterized by the degree of agents’ pessimism, optimism, and uncertainty about future equilibrium outcomes. Once this kind of structure is imposed, it is possible to create a mapping between the evolution of agents’ beliefs and observable outcomes. Agents in the model are fully rational, conduct Bayesian learning, and they know that they do not know. Therefore, agents form expectations taking into account that their beliefs will evolve according to what they observe in the future. The new modeling framework accommodates both gradual and abrupt changes in agents’ beliefs and allows an analytical characterization of uncertainty. Shocks to beliefs are shown to have both first-order and second-order effects. To illustrate how to apply the methods, we use a prototypical Real Business Cycle model in which households form beliefs about the likely duration of high-growth and low-growth regimes. Download Paper


13029 
Leonardo Melosi 
"Signaling Effects of Monetary Policy"  
We develop a DSGE model in which the policy rate signals to price setters the central bank’s view about macroeconomic developments. The model is estimated with likelihood methods on a U.S. data set that includes the Survey of Professional Forecasters as a measure of price setters’ inflation expectations. We find that the model fits the data better than a prototypical New Keynesian DSGE model because the signaling effects of monetary policy help the model account for the run-up in inflation expectations in the 1970s. The estimated model with signaling effects delivers large and persistent real effects of monetary disturbances even though the average duration of price contracts is fairly short. While the signaling effects do not substantially alter the transmission of technology shocks, they bring about deflationary pressures in the aftermath of positive demand shocks. The signaling effects of monetary policy have contributed (i) to heightening inflation expectations in the 1970s, (ii) to raising inflation and to exacerbating the recession during the first years of Volcker’s monetary tightening, and (iii) to subduing inflation and to stimulating economic activity from 1991 through 2007. Download Paper


13028 
Qingmin Liu George J. Mailath Andrew Postlewaite Larry Samuelson 
"Stable Matching with Incomplete Information, Second Version"  
We formulate a notion of stable outcomes in matching problems with one-sided asymmetric information. The key conceptual problem is to formulate a notion of a blocking pair that takes account of the inferences that the uninformed agent might make. We show that the set of stable outcomes is nonempty in incomplete-information environments, and is a superset of the set of complete-information stable outcomes. We then provide sufficient conditions for incomplete-information stable matchings to be efficient. Lastly, we define a notion of price-sustainable allocations and show that the set of incomplete-information stable matchings is a subset of the set of such allocations. Download Paper


13027 
Uriel Spiegel 
"Are All Technological Improvements Beneficial? Absolutely Not"  
This paper shows, using a simple model, that wasteful innovations may result in a lose-lose situation in which no country experiences an increase in welfare. If some countries introduce innovations that have harmful effects on other countries, the adversely affected countries may retaliate by imposing impediments to international trade. In a globalized and integrated world economy, such policies can only harm the countries involved. Thus, it is in both countries' best interest to encourage sustainable coordination of policies in order to improve the welfare of their own citizens, as well as the world's aggregate welfare. Download Paper


13026 
Philippe Aghion Ufuk Akcigit Peter Howitt 
"What Do We Learn From Schumpeterian Growth Theory?"  
Schumpeterian growth theory has “operationalized” Schumpeter’s notion of creative destruction by developing models based on this concept. These models shed light on several aspects of the growth process that could not be properly addressed by alternative theories. In this survey, we focus on four important aspects, namely: (i) the role of competition and market structure; (ii) firm dynamics; (iii) the relationship between growth and development, with the notion of appropriate growth institutions; and (iv) the emergence and impact of long-term technological waves. In each case, Schumpeterian growth theory delivers predictions that distinguish it from other growth models and which can be tested using micro data. Download Paper


13025 
Philippe Aghion Ufuk Akcigit Jesus FernandezVillaverde 
"Optimal Capital Versus Labor Taxation with InnovationLed Growth"  
Chamley (1986) and Judd (1985) showed that, in a standard neoclassical growth model with capital accumulation and infinitely lived agents, either taxing or subsidizing capital cannot be optimal in the steady state. In this paper, we introduce innovation-led growth into the Chamley-Judd framework, using a Schumpeterian growth model where productivity-enhancing innovations result from profit-motivated R&D investment. Our main result is that, for a given required trend of public expenditure, a zero tax/subsidy on capital becomes suboptimal. In particular, the higher the level of public expenditure and the income elasticity of labor supply, the less should capital income be subsidized and the more it should be taxed. Not taxing capital implies that labor must be taxed at a higher rate. This in turn has a detrimental effect on labor supply and therefore on the market size for innovation. At the same time, for a given labor supply, taxing capital also reduces innovation incentives, so that for low levels of public expenditure and/or labor supply elasticity it becomes optimal to subsidize capital income. Download Paper


13024 
Kenneth Burdett Guido Menzio 
"(Q, S, s) Pricing Rules"  
We study the effect of menu costs on the pricing behavior of sellers and on the cross-sectional distribution of prices in the search-theoretic model of imperfect competition of Burdett and Judd (1983). We find that, when menu costs are small, the equilibrium is such that sellers follow a (Q, S, s) pricing rule. According to a (Q, S, s) rule, a seller lets inflation erode the real value of its nominal price until it reaches some point s. Then, the seller pays the menu cost and changes its nominal price so that the real value of the new price is randomly drawn from a distribution with support [S, Q], where Q is the buyer’s reservation price and S is some price between s and Q. Only when the menu cost is relatively large is the equilibrium such that sellers follow a standard (S, s) pricing rule. We argue that whether sellers follow a (Q, S, s) or an (S, s) rule matters for the estimation of menu costs and seller-specific shocks. Download Paper


13023 
Fei Li 
"Efficient Learning and Job Turnover in the Labor Market"  
This paper studies the dynamics of workers’ on-the-job search behavior and its consequences in an equilibrium labor market. In a model with both directed search and learning about the match quality of firm-worker pairs, I highlight the job search target effect of learning: as a worker updates the evaluation of his current job, he adjusts his on-the-job search target, which results in a different job finding rate. This model generates a non-monotonic relation between the employment-to-employment transition rate and tenure, which provides a new explanation of the hump-shaped separation rate-tenure profile. Download Paper


13022 
Olivier Compte Andrew Postlewaite 
"Folk Theorems, Second Version"  
Much of the repeated game literature is concerned with proving Folk Theorems. The logic of the exercise is to specify a particular game, and to explore for that game specification whether any given feasible (and individually rational) value vector can be an equilibrium outcome for some strategies when agents are sufficiently patient. A game specification includes a description of what agents observe at each stage. This is done by defining a monitoring structure, that is, a collection of probability distributions over the signals players receive (one distribution for each action profile players may play). Although this is simply meant to capture the fact that players don’t directly observe the actions chosen by others, constructed equilibria often depend on players precisely knowing these distributions, which is somewhat unrealistic in most problems of interest. We revisit the classic Folk Theorem for games with imperfect public monitoring, asking that incentive conditions hold not only for a precisely defined monitoring structure, but also for a ball of monitoring structures containing it. We show that efficiency and incentives are no longer compatible. Download Paper


13021 
Songnian Chen Shakeeb Khan Xun Tang 
"Informational Content of Special Regressors in Heteroskedastic Binary Response Models"  
We quantify the identifying power of special regressors in heteroskedastic binary regressions with median-independent or conditionally symmetric errors. We measure the identifying power using two criteria: the set of regressor values that help point identify coefficients in latent payoffs, as in Manski (1988); and the Fisher information of coefficients, as in Chamberlain (1986). We find that, for median-independent errors, requiring one of the regressors to be “special” (in a sense similar to Lewbel (2000)) does not add to the identifying power or the information for coefficients. Nonetheless, it does help identify the error distribution and the average structural function. For conditionally symmetric errors, the presence of a special regressor improves the identifying power by the criterion in Manski (1988), and the Fisher information for coefficients is strictly positive under mild conditions. We propose a new estimator for coefficients that converges at the parametric rate under symmetric errors and a special regressor, and report its decent performance in small samples through simulations. Download Paper


13020 
Olivier Compte Andrew Postlewaite 
"Belief free equilibria"  
The repeated game literature studies long-run, repeated interactions, aiming to understand how repetition may foster cooperation. Conditioning future behavior on past play is crucial in this endeavor. For most situations of interest, a given player does not directly observe the actions chosen by other players and must rely on noisy signals he receives about those actions. This is typically incorporated into models by defining a monitoring structure, that is, a collection of probability distributions over the signals each player receives (one distribution for each action profile players may play). Although this is simply meant to capture the fact that players don’t directly observe the actions chosen by others, constructed equilibria often depend on players precisely knowing the distributions, which is somewhat unrealistic in most problems of interest. This paper aims to show the fragility of belief-free equilibrium constructions when one adds shocks to the monitoring structure in repeated games. Download Paper


13019 
Rong Hai 
"The Determinants of Rising Inequality in Health Insurance and Wages: An Equilibrium Model of Workers' Compensation and Health Care Policies"  
I develop and structurally estimate a nonstationary overlapping generations equilibrium model of employment and workers' health insurance and wage compensation, to investigate the determinants of rising inequality in health insurance and wages in the U.S. over the last 30 years. I find that skill-biased technological change and the rising cost of medical care services are the two most important determinants, while the impact of Medicaid eligibility expansion is quantitatively small. I conduct counterfactual policy experiments to analyze key features of the 2010 Patient Protection and Affordable Care Act, including employer mandates and further Medicaid eligibility expansion. I find that (i) an employer mandate reduces both wage and health insurance coverage inequality, but also lowers the employment rate of less educated individuals; and (ii) further Medicaid eligibility expansion increases the employment rate of less educated individuals and reduces health insurance coverage disparity, but also causes larger wage inequality. Download Paper


13018 
Daron Acemoglu Ufuk Akcigit Nicholas Bloom William Kerr 
"Innovation, Reallocation and Growth"  
We build a model of firm-level innovation, productivity growth, and reallocation featuring endogenous entry and exit. A key feature is the selection between high- and low-type firms, which differ in terms of their innovative capacity. We estimate the parameters of the model using detailed US Census micro data on firm-level output, R&D, and patenting. The model provides a good fit to the dynamics of firm entry and exit, output, and R&D, and its implied elasticities are in the ballpark of a range of micro estimates. We find that industrial policy subsidizing either the R&D or the continued operation of incumbents reduces growth and welfare. For example, a subsidy to incumbent R&D equivalent to 5% of GDP reduces welfare by about 1.5% because it deters entry of new high-type firms. On the contrary, substantial improvements (of the order of a 5% improvement in welfare) are possible if the continued operation of incumbents is taxed while at the same time R&D by incumbents and new entrants is subsidized. This is because of a strong selection effect: R&D resources (skilled labor) are inefficiently used by low-type incumbent firms. Subsidies to incumbents encourage the survival and expansion of these firms at the expense of potential high-type entrants. We show that optimal policy encourages the exit of low-type firms and supports R&D by high-type incumbents and entry. Download Paper


13017 
Olivier Compte Andrew Postlewaite 
"Auctions", Second Version  
Standard Bayesian models assume agents know and fully exploit prior distributions over types. We are interested in modeling agents who lack detailed knowledge of prior distributions. In auctions, the assumption that agents know priors has two consequences: (i) signals about one's own valuation come with precise inference about the signals received by others; (ii) noisier estimates translate into more weight put on priors. We revisit classic questions in auction theory, exploring environments in which such complex inferences are precluded. This is done in a parsimonious model of auctions in which agents are restricted to using simple strategies. Download Paper


13016 
S. Boragan Aruoba Francis X. Diebold Jeremy Nalewaik Frank Schorfheide Dongho Song 
"Improving GDP Measurement: A MeasurementError Perspective"  
We provide a new and superior measure of U.S. GDP, obtained by applying optimal signal-extraction techniques to the (noisy) expenditure-side and income-side estimates. Its properties, particularly as regards serial correlation, differ markedly from those of the standard expenditure-side measure and lead to substantially revised views regarding the properties of GDP. Download Paper
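The signal-extraction idea behind combining the two GDP estimates can be illustrated in a stripped-down static form: treat the expenditure-side and income-side figures as two unbiased but noisy measurements of the same latent quantity and blend them with precision weights. This is only a sketch, not the paper's method (the paper's techniques are richer than a static average); the growth rates and noise variances below are made-up assumptions.

```python
# Minimum-variance combination of two unbiased, noisy measurements of the
# same latent quantity: weight each measurement by its precision (1/variance).

def combine(estimates, noise_vars):
    """Precision-weighted combination; returns (blend, its variance, weights)."""
    precisions = [1.0 / v for v in noise_vars]
    total = sum(precisions)
    weights = [p / total for p in precisions]
    blended = sum(w * e for w, e in zip(weights, estimates))
    blended_var = 1.0 / total  # lower than either input variance
    return blended, blended_var, weights

# Hypothetical numbers: expenditure-side growth reads 2.0%, income-side reads
# 3.0%, and the income-side estimate is assumed twice as noisy.
gdp_e, gdp_i = 2.0, 3.0
blended, var, w = combine([gdp_e, gdp_i], [0.25, 0.5])
```

With these assumed variances the blend puts two-thirds of the weight on the less noisy expenditure-side figure, and the combined estimate is more precise than either input alone.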


13015 
Boleslaw Borkowski Monika Krawiec 
"Modeling and Estimating Volatility of Options on Standard & Poor's 500 Index"  
This paper explores the impact of volatility estimation methods on theoretical option values based upon the Black-Scholes-Merton (BSM) model. Volatility is the only input used in the BSM model that cannot be observed in the market or a priori determined in a contract. Thus, properly calculating volatility is crucial. Two approaches to estimating volatility are implied volatility and historical volatility based on past prices. Iterative techniques, based on daily S&P index options, are applied to obtain implied volatility. Additionally, historical volatility can be estimated using data on S&P 500 Index options listed on the Chicago Board Options Exchange. Download Paper
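The two estimation routes the abstract contrasts can be sketched as follows; this is not the authors' code, and all inputs are hypothetical. Implied volatility is backed out of the BSM call formula by bisection (the paper's "iterative techniques" may differ, e.g. Newton-Raphson), while historical volatility is the annualized standard deviation of log returns.

```python
import math

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the BSM formula for sigma by bisection (call price rises in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def historical_vol(prices, periods_per_year=252):
    """Annualized volatility from a series of daily closing prices."""
    rets = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    mean = sum(rets) / len(rets)
    var = sum((x - mean) ** 2 for x in rets) / (len(rets) - 1)
    return math.sqrt(var * periods_per_year)

# Round-trip check with made-up contract terms: price a call at sigma = 0.20,
# then recover that volatility from the price.
p = bsm_call(S=100, K=100, T=0.5, r=0.01, sigma=0.20)
sigma_hat = implied_vol(p, S=100, K=100, T=0.5, r=0.01)
```

Bisection is slow but robust here because the call price is strictly increasing in volatility, so the root is unique on the bracket.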


13014 
David Dillenberger Juan Sebastian Lleras Philipp Sadowski Norio Takeoka 
“A Theory of Subjective Learning, Second Version”  
We study an individual who faces a dynamic decision problem in which the process of information arrival is unobserved by the analyst. We elicit subjective information directly from choice behavior by deriving two utility representations of preferences over menus of acts. The most general representation identifies a unique probability distribution over the set of posteriors that the decision maker might face at the time of choosing from the menu. We use this representation to characterize a notion of “more preference for flexibility” via a subjective analogue of Blackwell’s (1951, 1953) comparisons of experiments. A more specialized representation uniquely identifies information as a partition of the state space. This result allows us to compare individuals who expect to learn differently, even if they do not agree on their prior beliefs. On the extended domain of dated menus, we show how to accommodate an individual who expects to learn gradually over time by means of a subjective filtration. Download Paper


13013 
Milan Lakicevic Milos Vulanovic 
"On Mergers, Acquisitions and Liquidation Using Specified Purpose Acquisition Companies (SPACs)"  
A Specified Purpose Acquisition Company (SPAC) is formed to purchase operating businesses within an a priori determined time period. SPACs have existed in U.S. capital markets since the 1920s. Their corporate structure has recently become debated in the legal and financial literatures, especially their structural response to regulation by the Securities and Exchange Commission (SEC) in the late 1990s. SPACs were traded on the American Stock Exchange and the Over-the-Counter Bulletin Board. Since 2008, SPACs have been listed on the New York Stock Exchange and the National Association of Securities Dealers Automated Quotations (NASDAQ). This paper examines the determinants of the execution of mergers by SPACs. Download Paper


13012 
Antonio M. Merlo Thomas R. Palfrey 
"External Validation of Voter Turnout Models by Concealed Parameter Recovery"  
We conduct a model validation analysis of several behavioral models of voter turnout, using laboratory data. We call our method of model validation concealed parameter recovery, where estimation of a model is done under a veil of ignorance about some of the experimentally controlled parameters — in this case voting costs. We use quantal response equilibrium as the underlying, common structure for estimation, and estimate models of instrumental voting, altruistic voting, expressive voting, and ethical voting. All the models except the ethical model recover the concealed parameters reasonably well. We also report the results of a counterfactual analysis based on the recovered parameters, to compare the policy implications of the different models about the cost of a subsidy to increase turnout. Download Paper


13011 
Felipe E. Saffie Sina T. Ates 
"Project Heterogeneity and Growth: The Impact of Selection"  
In the classical literature on innovation-based endogenous growth, the main engine of long-run economic growth is firm entry. Nevertheless, when projects are heterogeneous and good ideas are scarce, a mass-composition trade-off is introduced into this link: larger cohorts are characterized by a lower average quality. As one of the roles of the financial system is to screen the quality of projects, the ability of financial intermediaries to detect promising projects shapes the strength of this trade-off. In order to study this relationship, we build a general equilibrium endogenous growth model with project heterogeneity and financial screening. To illustrate the relevance of the mass and composition margins, we apply this framework to two important debates in the growth literature. First, we show that corporate taxation has only a weak effect on growth but a strong effect on firm entry, both well-known empirical regularities. A second illustration studies the effects of financial development on growth. A word of caution arises: for economies that are characterized by high rates of firm creation, domestic credit should not be used as a proxy for financial development, in contrast to most of the empirical literature. Download Paper


13010 
Andrea Mattozzi Antonio M. Merlo 
"Mediocracy", Fourth Version  
We study the recruitment of individuals in the political sector. We propose an equilibrium model of political recruitment by two political parties competing in an election. We show that political parties may deliberately choose to recruit only mediocre politicians, in spite of the fact that they could select better individuals. Furthermore, we show that this phenomenon is more likely to occur in proportional than in majoritarian electoral systems. Download Paper


13009 
Ju Hu 
"Reputation in the Presence of Noisy Exogenous Learning"  
This paper studies the reputation effect that arises when a long-lived player faces a sequence of uninformed short-lived players who receive informative but noisy exogenous signals about the type of the long-lived player. We provide an explicit lower bound on all Nash equilibrium payoffs of the long-lived player. The lower bound shows that when the exogenous signals are sufficiently noisy and the long-lived player is patient, he can be assured of a payoff strictly higher than his minmax payoff. Download Paper


13008 
Olivier Compte Andrew Postlewaite 
“Plausible Cooperation”, Third Version  
There is a large repeated games literature illustrating how future interactions provide incentives for cooperation. Much of the earlier literature assumes public monitoring: players always observe precisely the same thing. Departures from public monitoring to private monitoring that incorporate differences in players’ observations may dramatically complicate coordination and the provision of incentives, with the consequence that equilibria with private monitoring often seem unrealistically complex. We set out a model in which players accomplish cooperation in an intuitively plausible fashion. Players process information via a mental system — a set of psychological states and a transition function between states depending on observations. Players restrict attention to a relatively small set of simple strategies, and consequently, might learn which perform well. Download Paper


13007 
Itzhak Gilboa Andrew Postlewaite Larry Samuelson David Schmeidler 
"Economic Models as Analogies", Third Version  
People often wonder why economists analyze models whose assumptions are known to be false, while economists feel that they learn a great deal from such exercises. We suggest that part of the knowledge generated by academic economists is case-based rather than rule-based. That is, instead of offering general rules or theories that should be contrasted with data, economists often analyze models that are "theoretical cases", which help understand economic problems by drawing analogies between the model and the problem. According to this view, economic models, empirical data, experimental results and other sources of knowledge are all on equal footing, that is, they all provide cases to which a given problem can be compared. We offer complexity arguments that explain why case-based reasoning may sometimes be the method of choice and why economists prefer simple cases. Download Paper


13006 
Antonio M. Merlo Francois Ortalo-Magne John Rust 
"The Home Selling Problem: Theory and Evidence"  
This paper formulates and solves the problem of a homeowner who wants to sell her house for the maximum possible price net of transactions costs (including real estate commissions). The optimal selling strategy consists of an initial list price with subsequent weekly decisions on how much to adjust the list price until the home is sold or withdrawn from the market. The solution also yields a sequence of reservation prices that determine whether the homeowner should accept offers from potential buyers who arrive stochastically over time with an expected arrival rate that is a decreasing function of the list price. We estimate the model using a rich data set of complete transaction histories for 780 residential properties in England introduced by Merlo and Ortalo-Magné (2004). For each home in the sample, the data include all listing price changes and all offers made on the home between initial listing and the final sale agreement. The estimated model fits observed list price dynamics and other key features of the data well. In particular, we show that a very small “menu cost” of changing the listing price (estimated to equal 10 thousandths of 1% of the house value, or approximately £10 for a home worth £100,000) is sufficient to explain the high degree of “stickiness” of listing prices observed in the data. Download Paper


13005 
Richard P. McLean Andrew Postlewaite 
"Implementation with Interdependent Valuations", Second Version  
It is well-known that the ability of the Vickrey-Clarke-Groves (VCG) mechanism to implement efficient outcomes for private value choice problems does not extend to interdependent value problems. When an agent’s type affects other agents’ utilities, it may not be incentive compatible for him to truthfully reveal his type when faced with VCG payments. We show that when agents are informationally small, there exist small modifications to VCG that restore incentive compatibility. We further show that truthful revelation is an approximate ex post equilibrium. Lastly, we show that in replicated settings aggregate payments sufficient to induce truthful revelation go to zero. Download Paper


13004 
Jere R. Behrman Susan W. Parker Petra E. Todd Kenneth I. Wolpin 
"Aligning Learning Incentives of Students and Teachers: Results from a Social Experiment in Mexican High Schools"  
This paper evaluates the impact of three different performance incentive schemes using data from a social experiment that randomized 88 Mexican high schools with over 40,000 students into three treatment groups and a control group. Treatment one provides individual incentives for performance on curriculum-based mathematics tests to students only, treatment two to teachers only, and treatment three gives both individual and group incentives to students, teachers, and school administrators. Program impact estimates reveal the largest average effects for treatment three, smaller impacts for treatment one, and no impact for treatment two. Download Paper


13003 
Francis X. Diebold 
“A Personal Perspective on the Origin(s) and Development of ‘Big Data’: The Phenomenon, the Term, and the Discipline”, Second Version  
I investigate Big Data, the phenomenon, the term, and the discipline, with emphasis on origins of the term, in industry and academics, in computer science and statistics/econometrics. Big Data the phenomenon continues unabated, Big Data the term is now firmly entrenched, and Big Data the discipline is emerging. Download Paper


13002 
Naoki Aizawa Hanming Fang 
"Equilibrium Labor Market Search and Health Insurance Reform"  
We present and empirically implement an equilibrium labor market search model in which risk-averse workers facing medical expenditure shocks are matched with firms making health insurance coverage decisions. Our model delivers a rich set of predictions that can account for a wide variety of phenomena observed in the data, including the correlations among firm sizes, wages, health insurance offering rates, turnover rates, and workers' health compositions. We estimate our model by the Generalized Method of Moments using a combination of micro data sources, including the Survey of Income and Program Participation (SIPP), the Medical Expenditure Panel Survey (MEPS), and the Robert Wood Johnson Foundation Employer Health Insurance Survey. We use our estimated model to evaluate the equilibrium impact of the 2010 Affordable Care Act (ACA) and find that it would reduce the uninsured rate among the workers in our estimation sample from 20.12% to 7.27%. We also examine a variety of alternative policies to understand the roles of different components of the ACA in contributing to these equilibrium changes. Interestingly, we find that the uninsured rate will be even lower (at 6.44%) if the employer mandate in the ACA is eliminated. Download Paper


13001 
David Dillenberger Andrew Postlewaite Kareen Rozen 
“Optimism and Pessimism with Expected Utility”, Third Version  
Savage (1954) provides axioms on preferences over acts that are equivalent to the existence of a subjective expected utility representation. We show that there is a continuum of other “expected utility” representations in which, for any act, the probability distribution over states depends on the corresponding outcomes and is first-order stochastically dominated by (dominates) the Savage distribution. We suggest that pessimism (optimism) can be captured by the stake-dependent probabilities in these alternate representations. We then extend the decision maker's preferences to be defined over both subjective acts and objective lotteries. Our result permits modeling ambiguity aversion in Ellsberg's two-urn experiment using pessimistic probability assessments, the same utility over prizes for lotteries and acts, and without relaxing Savage's axioms. An implication of our results is that the large body of existing research based on expected utility can, with a simple reinterpretation, be understood as modeling the behavior of optimistic or pessimistic decision makers. Download Paper


12048 
Guido Menzio 
"Shopping Externalities and SelfFulfilling Unemployment Fluctuations"  
We propose a novel theory of self-fulfilling fluctuations in the labor market. A firm employing an additional worker generates positive externalities on other firms, because employed workers have more income to spend and have less time to shop for low prices than unemployed workers. We quantify these shopping externalities and show that they are sufficiently strong to create strategic complementarities in the employment decisions of different firms and to generate multiple rational expectations equilibria. Equilibria differ with respect to the agents’ (rational) expectations about future unemployment. We show that negative shocks to agents’ expectations lead to fluctuations in vacancies, unemployment, labor productivity and the stock market that closely resemble those observed in the US during the Great Recession. Download Paper


12047 
Harold L. Cole Soojin Kim Dirk Krueger 
"Analyzing the Effects of Insuring Health Risks: On the Tradeoff between Short Run Insurance Benefits vs. Long Run Incentive Costs"  
This paper constructs a dynamic model of health insurance to evaluate the short- and long-run effects of policies that prevent firms from conditioning wages on the health conditions of their workers, and that prevent health insurance companies from charging individuals with adverse health conditions higher insurance premia. Our study is motivated by recent US legislation that has tightened regulations on wage discrimination against workers with poorer health status (the Americans with Disabilities Act, ADA, and the ADA Amendments Act of 2008, ADAAA) and that will prohibit health insurance companies from charging different premiums for workers of different health status starting in 2014 (the Patient Protection and Affordable Care Act, PPACA). In the model, a trade-off arises between the static gains from better insurance against poor health induced by these policies and their adverse dynamic incentive effects on household efforts to lead a healthy life. Using household panel data from the PSID, we estimate and calibrate the model and then use it to evaluate the static and dynamic consequences of no-wage-discrimination and no-prior-conditions laws for the evolution of the cross-sectional health and consumption distribution of a cohort of households, as well as the ex ante lifetime utility of a typical member of this cohort. In our quantitative analysis we find that although a combination of both policies is effective in providing full consumption insurance period by period, it is suboptimal to introduce both policies jointly, since such a policy innovation induces a more rapid deterioration of the cohort health distribution over time. This is due to the fact that the combination of both laws severely undermines the incentives to lead healthier lives. The resulting negative effects on health outcomes in society more than offset the static gains from better consumption insurance, so that expected discounted lifetime utility is lower under both policies, relative to only implementing wage nondiscrimination legislation. Download Paper


12046 
Xu Cheng Bruce E. Hansen 
"Forecasting with Factor-Augmented Regression: A Frequentist Model Averaging Approach"  
This paper considers forecast combination with factor-augmented regression. In this framework, a large number of forecasting models are available, varying by the choice of factors and the number of lags. We investigate forecast combination using weights that minimize the Mallows and the leave-h-out cross-validation criteria. The unobserved factor regressors are estimated by principal components of a large panel with N predictors over T periods. With these generated regressors, we show that the Mallows and leave-h-out cross-validation criteria are approximately unbiased estimators of the one-step-ahead and multi-step-ahead mean squared forecast errors, respectively, provided that N, T → ∞. In contrast to well-known results in the literature, the generated-regressor issue can be ignored for forecast combination, without restrictions on the relation between N and T. Simulations show that the Mallows model averaging and leave-h-out cross-validation averaging methods yield lower mean squared forecast errors than alternative model selection and averaging methods such as AIC, BIC, cross-validation, and Bayesian model averaging. We apply the proposed methods to the U.S. macroeconomic data set in Stock and Watson (2012) and find that they compare favorably to many popular shrinkage-type forecasting methods. Download Paper
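The mechanics of the Mallows-weight combination described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the single-factor simulated panel, the candidate model set (regressions on the first k estimated factors), and the use of the largest model's residual variance as the σ² plug-in are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated panel: N predictors, T periods, driven by one latent factor.
T, N = 200, 50
f = rng.standard_normal(T)                      # latent factor
X = np.outer(f, rng.standard_normal(N)) + rng.standard_normal((T, N))
y = 0.8 * f + rng.standard_normal(T)            # variable to forecast

# Estimate factors by principal components of the panel X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F = U * np.sqrt(T)                              # estimated factors (columns)

# Candidate models: regressions on the first k estimated factors.
ks = [1, 2, 3, 4]
fit_list, resid_list = [], []
for k in ks:
    Fk = F[:, :k]
    beta = np.linalg.lstsq(Fk, y, rcond=None)[0]
    yhat = Fk @ beta
    fit_list.append(yhat)
    resid_list.append(y - yhat)
fits = np.column_stack(fit_list)

# Error-variance plug-in from the largest candidate model.
sigma2 = np.sum(resid_list[-1] ** 2) / (T - ks[-1])

def mallows(w):
    # Mallows criterion: SSR of the combined fit plus a penalty
    # proportional to the weighted number of estimated parameters.
    combo = fits @ w
    return np.sum((y - combo) ** 2) + 2.0 * sigma2 * np.dot(w, ks)

# Minimize over the weight simplex (weights nonnegative, summing to one).
cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
res = minimize(mallows, np.full(len(ks), 1.0 / len(ks)),
               bounds=[(0.0, 1.0)] * len(ks), constraints=cons)
w_hat = res.x
```

The combined forecast is then `fits @ w_hat`; replacing the Mallows penalty with a leave-h-out cross-validation sum would give the paper's second criterion.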


12045 
Xu Cheng Zhipeng Liao 
"Select the Valid and Relevant Moments: A One-Step Procedure for GMM with Many Moments"  
This paper considers the selection of valid and relevant moments for generalized method of moments (GMM) estimation. For applications with many candidate moments, our asymptotic analysis accommodates a diverging number of moments as the sample size increases. The proposed procedure achieves three objectives in one step: (i) the valid and relevant moments are selected simultaneously rather than sequentially; (ii) all desired moments are selected together instead of in a stepwise manner; (iii) the parameter of interest is automatically estimated with all selected moments, as opposed to a post-selection estimation. The new moment selection method is achieved via an information-based adaptive GMM shrinkage estimation, where an appropriate penalty is attached to the standard GMM criterion to link moment selection to shrinkage estimation. The penalty is designed to signal both moment validity and relevance for consistent moment selection and efficient estimation. The asymptotic analysis allows for non-smooth sample moments and weakly dependent observations, making it generally applicable.
For practical implementation, this one-step procedure is computationally attractive. Download Paper
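The shrinkage idea — attaching slack parameters to suspect moments and penalizing them adaptively, so that slacks for valid moments are shrunk to zero while slacks for invalid moments survive — can be illustrated with a toy linear IV design. Everything below (the data-generating process, the identity weighting, the adaptive-weight construction, and the tuning constant `lam`) is an assumption for illustration, not the paper's estimator.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
e = rng.standard_normal(n)
y = 1.5 * x + e                                 # true parameter: 1.5

# Candidate instruments: z1 valid and relevant; z2 invalid (correlated
# with the error); z3 valid but irrelevant for x.
z1 = x + 0.5 * rng.standard_normal(n)
z2 = rng.standard_normal(n) + 0.8 * e
z3 = rng.standard_normal(n)
Z = np.column_stack([z1, z2, z3])

def gbar(theta, delta):
    # Sample moments, with slack parameters on the two suspect moments
    # (z2, z3); a nonzero slack in the limit flags an invalid moment.
    u = y - theta * x
    m = Z.T @ u / n
    m[1] -= delta[0]
    m[2] -= delta[1]
    return m

def obj0(p):
    # First-step (unpenalized) GMM objective with identity weighting.
    m = gbar(p[0], p[1:])
    return m @ m

p0 = minimize(obj0, np.zeros(3), method="Nelder-Mead").x

# Adaptive penalty weights: large for slacks that are small in the
# first step (likely valid moments), small for large slacks.
w_pen = 1.0 / (p0[1:] ** 2 + 1e-6)
lam = 0.05                                      # illustrative tuning constant

def obj(p):
    # Penalized GMM criterion: shrinkage links selection to estimation.
    m = gbar(p[0], p[1:])
    return n * (m @ m) + lam * np.sum(w_pen * np.abs(p[1:]))

p_hat = minimize(obj, p0, method="Nelder-Mead").x
theta_hat, delta_hat = p_hat[0], p_hat[1:]
```

In this design the slack for the invalid instrument z2 stays well away from zero while the slack for z3 is shrunk toward zero, so z3's moment is effectively retained as valid; the parameter estimate comes out of the same minimization, with no separate post-selection step.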


12044 
Yuichi Yamamoto 
"Individual Learning and Cooperation in Noisy Repeated Games"  
We investigate whether two players in a long-run relationship can maintain cooperation when the details of the underlying game are unknown. Specifically, we consider a new class of repeated games with private monitoring, where an unobservable state of the world influences the payoff functions and/or the monitoring structure. Each player privately learns the state over time, but cannot observe what the opponent learns. We show that there are robust equilibria where players eventually obtain payoffs as if the true state were common knowledge and players played a “belief-free” equilibrium. The result is applied to various examples, including secret price-cutting with unknown demand. Download Paper
