We develop a DSGE model in which the policy rate signals to price setters the central bank's view about macroeconomic developments. The model is estimated with likelihood methods on a U.S. data set that includes the Survey of Professional Forecasters as a measure of price setters' inflation expectations. We find that the model fits the data better than a prototypical New Keynesian DSGE model because the signaling effects of monetary policy help the model account for the run-up in inflation expectations in the 1970s. The estimated model with signaling effects delivers large and persistent real effects of monetary disturbances even though the average duration of price contracts is fairly short. While the signaling effects do not substantially alter the transmission of technology shocks, they bring about deflationary pressures in the aftermath of positive demand shocks. The signaling effects of monetary policy have contributed (i) to heightening inflation expectations in the 1970s, (ii) to raising inflation and to exacerbating the recession during the first years of Volcker's monetary tightening, and (iii) to subduing inflation and to stimulating economic activity from 1991 through 2007.
We formulate a notion of stable outcomes in matching problems with one-sided asymmetric information. The key conceptual problem is to formulate a notion of a blocking pair that takes account of the inferences that the uninformed agent might make. We show that the set of stable outcomes is nonempty in incomplete-information environments, and is a superset of the set of complete-information stable outcomes. We then provide sufficient conditions for incomplete-information stable matchings to be efficient. Lastly, we define a notion of price-sustainable allocations and show that the set of incomplete-information stable matchings is a subset of the set of such allocations.
This paper shows, using a simple model, that wasteful innovations may result in a lose-lose situation where no country experiences an increase in welfare. If some countries introduce innovations that result in harmful effects on other countries, it may cause the adversely affected countries to retaliate by imposing impediments to international trade. In a globalized and integrated world economy, such policies can only harm the countries involved. Thus, it is in both countries' best interest to encourage sustainable policy coordination in order to improve the welfare of their own citizens as well as the world's aggregate welfare.
Schumpeterian growth theory has “operationalized” Schumpeter's notion of creative destruction by developing models based on this concept. These models shed light on several aspects of the growth process that could not be properly addressed by alternative theories. In this survey, we focus on four important aspects, namely: (i) the role of competition and market structure; (ii) firm dynamics; (iii) the relationship between growth and development with the notion of appropriate growth institutions; and (iv) the emergence and impact of long-term technological waves. In each case, Schumpeterian growth theory delivers predictions that distinguish it from other growth models and which can be tested using micro data.
Chamley (1986) and Judd (1985) showed that, in a standard neoclassical growth model with capital accumulation and infinitely lived agents, either taxing or subsidizing capital cannot be optimal in the steady state. In this paper, we introduce innovation-led growth into the Chamley-Judd framework, using a Schumpeterian growth model where productivity-enhancing innovations result from profit-motivated R&D investment. Our main result is that, for a given required trend of public expenditure, a zero tax/subsidy on capital becomes suboptimal. In particular, the higher the level of public expenditure and the income elasticity of labor supply, the less capital income should be subsidized and the more it should be taxed. Not taxing capital implies that labor must be taxed at a higher rate. This in turn has a detrimental effect on labor supply and therefore on the market size for innovation. At the same time, for a given labor supply, taxing capital also reduces innovation incentives, so that for low levels of public expenditure and/or labor supply elasticity it becomes optimal to subsidize capital income.
We study the effect of menu costs on the pricing behavior of sellers and on the cross-sectional distribution of prices in the search-theoretic model of imperfect competition of Burdett and Judd (1983). We find that, when menu costs are small, the equilibrium is such that sellers follow a (Q, S, s) pricing rule. According to a (Q, S, s) rule, a seller lets inflation erode the real value of its nominal price until it reaches some point s. Then, the seller pays the menu cost and changes its nominal price so that the real value of the new price is randomly drawn from a distribution with support [S, Q], where Q is the buyer's reservation price and S is some price between s and Q. Only when the menu cost is relatively large is the equilibrium such that sellers follow a standard (S, s) pricing rule. We argue that whether sellers follow a (Q, S, s) or an (S, s) rule matters for the estimation of menu costs and seller-specific shocks.
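As a rough illustration of the (Q, S, s) rule described above, the sketch below simulates a single seller's real price path. The thresholds, the inflation rate, the menu cost, and the uniform reset draw on [S, Q] are illustrative assumptions rather than the paper's equilibrium objects (in the paper, the reset distribution is endogenous).

```python
# Minimal sketch of a (Q, S, s) pricing rule; all numbers are illustrative.
import random

Q, S, s = 1.00, 0.90, 0.70   # reservation price, reset support lower bound, adjustment trigger
pi = 0.02                     # per-period inflation rate (assumed)
menu_cost = 0.01              # real cost paid whenever the nominal price is changed

def simulate_qss(periods=20, seed=0):
    random.seed(seed)
    real_price = random.uniform(S, Q)   # initial real price
    path, menu_costs_paid = [], 0.0
    for _ in range(periods):
        # Inflation erodes the real value of the unchanged nominal price.
        real_price /= (1 + pi)
        if real_price <= s:
            # Pay the menu cost and redraw the real price from [S, Q].
            menu_costs_paid += menu_cost
            real_price = random.uniform(S, Q)
        path.append(round(real_price, 4))
    return path, menu_costs_paid

path, cost = simulate_qss()
print(path)
print("menu costs paid:", cost)
```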
This paper studies the dynamics of workers' on-the-job search behavior and its consequences in an equilibrium labor market. In a model with both directed search and learning about the match quality of firm-worker pairs, I highlight the job search target effect of learning: as a worker updates the evaluation of his current job, he adjusts his on-the-job search target, which results in a different job finding rate. This model generates a non-monotonic relation between the employment-to-employment transition rate and tenure, which provides a new explanation of the hump-shaped separation rate-tenure profile.
Much of the repeated game literature is concerned with proving Folk Theorems. The logic of the exercise is to specify a particular game, and to explore for that game specification whether any given feasible (and individually rational) value vector can be an equilibrium outcome for some strategies when agents are sufficiently patient. A game specification includes a description of what agents observe at each stage. This is done by defining a monitoring structure, that is, a collection of probability distributions over the signals players receive (one distribution for each action profile players may play). Although this is simply meant to capture the fact that players don't directly observe the actions chosen by others, constructed equilibria often depend on players precisely knowing these distributions, which is somewhat unrealistic in most problems of interest. We revisit the classic Folk Theorem for games with imperfect public monitoring, asking that incentive conditions hold not only for a precisely defined monitoring structure, but also for a ball of monitoring structures containing it. We show that efficiency and incentives are no longer compatible.
We quantify the identifying power of special regressors in heteroskedastic binary regressions with median-independent or conditionally symmetric errors. We measure the identifying power using two criteria: the set of regressor values that help point identify coefficients in latent payoffs, as in Manski (1988); and the Fisher information of coefficients, as in Chamberlain (1986). We find that, for median-independent errors, requiring one of the regressors to be “special” (in a sense similar to Lewbel (2000)) does not add to the identifying power or the information for coefficients. Nonetheless, it does help identify the error distribution and the average structural function. For conditionally symmetric errors, the presence of a special regressor improves the identifying power by the criterion in Manski (1988), and the Fisher information for coefficients is strictly positive under mild conditions. We propose a new estimator for coefficients that converges at the parametric rate under symmetric errors and a special regressor, and report its decent performance in small samples through simulations.
The repeated game literature studies long-run, repeated interactions, aiming to understand how repetition may foster cooperation. Conditioning future behavior on past play is crucial in this endeavor. For most situations of interest, a given player does not directly observe the actions chosen by other players and must rely on noisy signals he receives about those actions. This is typically incorporated into models by defining a monitoring structure, that is, a collection of probability distributions over the signals each player receives (one distribution for each action profile players may play). Although this is simply meant to capture the fact that players don't directly observe the actions chosen by others, constructed equilibria often depend on players precisely knowing the distributions, which is somewhat unrealistic in most problems of interest. This paper aims to show the fragility of belief-free equilibrium constructions when one adds shocks to the monitoring structure in repeated games.
I develop and structurally estimate a non-stationary overlapping generations equilibrium model of employment and workers' health insurance and wage compensation to investigate the determinants of rising inequality in health insurance and wages in the U.S. over the last 30 years. I find that skill-biased technological change and the rising cost of medical care services are the two most important determinants, while the impact of Medicaid eligibility expansion is quantitatively small. I conduct counterfactual policy experiments to analyze key features of the 2010 Patient Protection and Affordable Care Act, including employer mandates and further Medicaid eligibility expansion. I find that (i) an employer mandate reduces both wage and health insurance coverage inequality, but also lowers the employment rate of less educated individuals; and (ii) further Medicaid eligibility expansion increases the employment rate of less educated individuals and reduces health insurance coverage disparity, but also causes larger wage inequality.
We build a model of firm-level innovation, productivity growth and reallocation featuring endogenous entry and exit. A key feature is the selection between high- and low-type firms, which differ in terms of their innovative capacity. We estimate the parameters of the model using detailed US Census micro data on firm-level output, R&D and patenting. The model provides a good fit to the dynamics of firm entry and exit, output and R&D, and its implied elasticities are in the ballpark of a range of micro estimates. We find that industrial policy subsidizing either the R&D or the continued operation of incumbents reduces growth and welfare. For example, a subsidy to incumbent R&D equivalent to 5% of GDP reduces welfare by about 1.5% because it deters entry of new high-type firms. On the contrary, substantial improvements (of the order of a 5% improvement in welfare) are possible if the continued operation of incumbents is taxed while at the same time R&D by incumbents and new entrants is subsidized. This is because of a strong selection effect: R&D resources (skilled labor) are inefficiently used by low-type incumbent firms. Subsidies to incumbents encourage the survival and expansion of these firms at the expense of potential high-type entrants. We show that optimal policy encourages the exit of low-type firms and supports R&D by high-type incumbents and entry.
Standard Bayesian models assume agents know and fully exploit prior distributions over types. We are interested in modeling agents who lack detailed knowledge of prior distributions. In auctions, the assumption that agents know priors has two consequences: (i) signals about own valuation come with precise inference about signals received by others; (ii) noisier estimates translate into more weight put on priors. We revisit classic questions in auction theory, exploring environments in which no such complex inferences are possible. This is done in a parsimonious model of auctions in which agents are restricted to using simple strategies.
We provide a new and superior measure of U.S. GDP, obtained by applying optimal signal-extraction techniques to the (noisy) expenditure-side and income-side estimates. Its properties, particularly as regards serial correlation, differ markedly from those of the standard expenditure-side measure and lead to substantially revised views regarding the properties of GDP.
This paper explores the impact of volatility estimation methods on theoretical option values based upon the Black-Scholes-Merton (BSM) model. Volatility is the only input used in the BSM model that cannot be observed in the market or determined a priori in a contract. Thus, properly calculating volatility is crucial. Two approaches to estimating volatility are implied volatility and estimation from historical prices. Iterative techniques are applied to daily S&P index options to obtain implied volatility. Additionally, historical volatility can be estimated using data on S&P 500 Index options listed on the Chicago Board Options Exchange.
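To make the two approaches concrete, the following sketch backs an implied volatility out of a BSM call price by bisection and computes an annualized historical volatility from daily closing prices. The inputs are hypothetical and the code is a generic illustration, not the paper's exact iterative procedure or data set.

```python
# Illustrative volatility estimates: implied (bisection on the BSM call price)
# and historical (annualized standard deviation of daily log returns).
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # Bisection works because the BSM call price is increasing in sigma.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def historical_vol(closes, trading_days=252):
    rets = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    mean = sum(rets) / len(rets)
    var = sum((x - mean) ** 2 for x in rets) / (len(rets) - 1)
    return math.sqrt(var * trading_days)

# Hypothetical numbers, for illustration only.
print(implied_vol(price=10.5, S=100, K=100, T=0.5, r=0.02))
print(historical_vol([100, 101, 99.5, 100.7, 102.1, 101.3]))
```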
We study an individual who faces a dynamic decision problem in which the process of information arrival is unobserved by the analyst. We elicit subjective information directly from choice behavior by deriving two utility representations of preferences over menus of acts. The most general representation identifies a unique probability distribution over the set of posteriors that the decision maker might face at the time of choosing from the menu. We use this representation to characterize a notion of “more preference for flexibility” via a subjective analogue of Blackwell's (1951, 1953) comparisons of experiments. A more specialized representation uniquely identifies information as a partition of the state space. This result allows us to compare individuals who expect to learn differently, even if they do not agree on their prior beliefs. On the extended domain of dated menus, we show how to accommodate an individual who expects to learn gradually over time by means of a subjective filtration.
A Specified Purpose Acquisition Company (SPAC) is formed to purchase operating businesses within an a priori determined time period. SPACs have existed in U.S. capital markets since the 1920s. Their corporate structure has recently become debated in the legal and financial literatures, especially their structural response to regulations by the Securities and Exchange Commission (SEC) in the late 1990s. SPACs were traded on the American Stock Exchange and the Over-the-Counter Bulletin Board. Since 2008, SPACs have been listed on the New York Stock Exchange and the National Association of Securities Dealers Automated Quotations (NASDAQ). This paper examines the determinants of the execution of mergers by SPACs.
We conduct a model validation analysis of several behavioral models of voter turnout, using laboratory data. We call our method of model validation concealed parameter recovery, where estimation of a model is done under a veil of ignorance about some of the experimentally controlled parameters — in this case voting costs. We use quantal response equilibrium as the underlying, common structure for estimation, and estimate models of instrumental voting, altruistic voting, expressive voting, and ethical voting. All the models except the ethical model recover the concealed parameters reasonably well. We also report the results of a counterfactual analysis based on the recovered parameters, to compare the policy implications of the different models about the cost of a subsidy to increase turnout.
In the classical literature on innovation-based endogenous growth, the main engine of long-run economic growth is firm entry. Nevertheless, when projects are heterogeneous and good ideas are scarce, a mass-composition trade-off is introduced into this link: larger cohorts are characterized by a lower average quality. As one of the roles of the financial system is to screen the quality of projects, the ability of financial intermediaries to detect promising projects shapes the strength of this trade-off. In order to study this relationship, we build a general equilibrium endogenous growth model with project heterogeneity and financial screening. To illustrate the relevance of the mass and composition margins, we apply this framework to two important debates in the growth literature. First, we show that corporate taxation has only a weak effect on growth but a strong effect on firm entry, both well-known empirical regularities. A second illustration studies the effects of financial development on growth. A word of caution arises: for economies characterized by high rates of firm creation, domestic credit should not be used as a proxy for financial development, in contrast to most of the empirical literature.
We study the recruitment of individuals in the political sector. We propose an equilibrium model of political recruitment by two political parties competing in an election. We show that political parties may deliberately choose to recruit only mediocre politicians, in spite of the fact that they could select better individuals. Furthermore, we show that this phenomenon is more likely to occur in proportional than in majoritarian electoral systems.
This paper studies reputation effects when a long-lived player faces a sequence of uninformed short-lived players and the uninformed players receive informative but noisy exogenous signals about the type of the long-lived player. We provide an explicit lower bound on all Nash equilibrium payoffs of the long-lived player. The lower bound shows that, when the exogenous signals are sufficiently noisy and the long-lived player is patient, he can be assured of a payoff strictly higher than his minmax payoff.
There is a large repeated games literature illustrating how future interactions provide incentives for cooperation. Much of the earlier literature assumes public monitoring: players always observe precisely the same thing. Departures from public monitoring to private monitoring that incorporate differences in players' observations may dramatically complicate coordination and the provision of incentives, with the consequence that equilibria with private monitoring often seem unrealistically complex. We set out a model in which players accomplish cooperation in an intuitively plausible fashion. Players process information via a mental system — a set of psychological states and a transition function between states depending on observations. Players restrict attention to a relatively small set of simple strategies and, consequently, might learn which perform well.
People often wonder why economists analyze models whose assumptions are known to be false, while economists feel that they learn a great deal from such exercises. We suggest that part of the knowledge generated by academic economists is case-based rather than rule-based. That is, instead of offering general rules or theories that should be contrasted with data, economists often analyze models that are "theoretical cases", which help understand economic problems by drawing analogies between the model and the problem. According to this view, economic models, empirical data, experimental results and other sources of knowledge are all on equal footing, that is, they all provide cases to which a given problem can be compared. We offer complexity arguments that explain why case-based reasoning may sometimes be the method of choice and why economists prefer simple cases.
This paper formulates and solves the problem of a homeowner who wants to sell her house for the maximum possible price net of transactions costs (including real estate commissions). The optimal selling strategy consists of an initial list price with subsequent weekly decisions on how much to adjust the list price until the home is sold or withdrawn from the market. The solution also yields a sequence of reservation prices that determine whether the homeowner should accept offers from potential buyers who arrive stochastically over time with an expected arrival rate that is a decreasing function of the list price. We estimate the model using a rich data set of complete transaction histories for 780 residential properties in England introduced by Merlo and Ortalo-Magné (2004). For each home in the sample, the data include all listing price changes and all offers made on the home between initial listing and the final sale agreement. The estimated model fits observed list price dynamics and other key features of the data well. In particular, we show that a very small “menu cost” of changing the listing price (estimated to equal 10 thousandths of 1% of the house value, or approximately £10 for a home worth £100,000) is sufficient to explain the high degree of “stickiness” of listing prices observed in the data.
It is well known that the ability of the Vickrey-Clarke-Groves (VCG) mechanism to implement efficient outcomes for private value choice problems does not extend to interdependent value problems. When an agent's type affects other agents' utilities, it may not be incentive compatible for him to truthfully reveal his type when faced with VCG payments. We show that when agents are informationally small, there exist small modifications to VCG that restore incentive compatibility. We further show that truthful revelation is an approximate ex post equilibrium. Lastly, we show that in replicated settings aggregate payments sufficient to induce truthful revelation go to zero.
This paper evaluates the impact of three different performance incentive schemes using data from a social experiment that randomized 88 Mexican high schools with over 40,000 students into three treatment groups and a control group. Treatment one provides individual incentives for performance on curriculum-based mathematics tests to students only, treatment two provides them to teachers only, and treatment three gives both individual and group incentives to students, teachers and school administrators. Program impact estimates reveal the largest average effects for treatment three, smaller impacts for treatment one and no impact for treatment two.
I investigate Big Data, the phenomenon, the term, and the discipline, with emphasis on the origins of the term in industry and academia, in computer science and statistics/econometrics. Big Data the phenomenon continues unabated, Big Data the term is now firmly entrenched, and Big Data the discipline is emerging.
We present and empirically implement an equilibrium labor market search model where risk-averse workers facing medical expenditure shocks are matched with firms making health insurance coverage decisions. Our model delivers a rich set of predictions that can account for a wide variety of phenomena observed in the data, including the correlations among firm sizes, wages, health insurance offering rates, turnover rates and workers' health compositions. We estimate our model by Generalized Method of Moments using a combination of micro data sources, including the Survey of Income and Program Participation (SIPP), the Medical Expenditure Panel Survey (MEPS) and the Robert Wood Johnson Foundation Employer Health Insurance Survey. We use our estimated model to evaluate the equilibrium impact of the 2010 Affordable Care Act (ACA) and find that it would reduce the uninsured rate among the workers in our estimation sample from 20.12% to 7.27%. We also examine a variety of alternative policies to understand the roles of different components of the ACA in contributing to these equilibrium changes. Interestingly, we find that the uninsured rate would be even lower (at 6.44%) if the employer mandate in the ACA were eliminated.
Savage (1954) provides axioms on preferences over acts that are equivalent to the existence of a subjective expected utility representation. We show that there is a continuum of other “expected utility” representations in which, for any act, the probability distribution over states depends on the corresponding outcomes and is first-order stochastically dominated by (dominates) the Savage distribution. We suggest that pessimism (optimism) can be captured by the stake-dependent probabilities in these alternate representations. We then extend the decision maker's preferences to be defined over both subjective acts and objective lotteries. Our result permits modeling ambiguity aversion in Ellsberg's two-urn experiment using pessimistic probability assessments, the same utility over prizes for lotteries and acts, and without relaxing Savage's axioms. An implication of our results is that the large body of existing research based on expected utility can, with a simple reinterpretation, be understood as modeling the behavior of optimistic or pessimistic decision makers.
We propose a novel theory of self-fulfilling fluctuations in the labor market. A firm employing an additional worker generates positive externalities on other firms, because employed workers have more income to spend and have less time to shop for low prices than unemployed workers. We quantify these shopping externalities and show that they are sufficiently strong to create strategic complementarities in the employment decisions of different firms and to generate multiple rational expectations equilibria. Equilibria differ with respect to the agents' (rational) expectations about future unemployment. We show that negative shocks to agents' expectations lead to fluctuations in vacancies, unemployment, labor productivity and the stock market that closely resemble those observed in the US during the Great Recession.
This paper constructs a dynamic model of health insurance to evaluate the short- and long-run effects of policies that prevent firms from conditioning wages on the health conditions of their workers, and that prevent health insurance companies from charging individuals with adverse health conditions higher insurance premia. Our study is motivated by recent US legislation that has tightened regulations on wage discrimination against workers with poorer health status (the Americans with Disabilities Act, ADA, and the ADA Amendments Act of 2008, ADAAA) and that will prohibit health insurance companies from charging different premiums for workers of different health status starting in 2014 (Patient Protection and Affordable Care Act, PPACA). In the model, a trade-off arises between the static gains from better insurance against poor health induced by these policies and their adverse dynamic incentive effects on household efforts to lead a healthy life. Using household panel data from the PSID, we estimate and calibrate the model and then use it to evaluate the static and dynamic consequences of no-wage-discrimination and no-prior-conditions laws for the evolution of the cross-sectional health and consumption distribution of a cohort of households, as well as the ex-ante lifetime utility of a typical member of this cohort. In our quantitative analysis, we find that although a combination of both policies is effective in providing full consumption insurance period by period, it is suboptimal to introduce both policies jointly, since such a policy innovation induces a more rapid deterioration of the cohort health distribution over time. This is due to the fact that the combination of both laws severely undermines the incentives to lead healthier lives. The resulting negative effects on health outcomes in society more than offset the static gains from better consumption insurance, so that expected discounted lifetime utility is lower under both policies, relative to only implementing wage nondiscrimination legislation.
This paper considers forecast combination with factor-augmented regression. In this framework, a large number of forecasting models are available, varying by the choice of factors and the number of lags. We investigate forecast combination using weights that minimize the Mallows and the leave-h-out cross-validation criteria. The unobserved factor regressors are estimated by principal components of a large panel with N predictors over T periods. With these generated regressors, we show that the Mallows and leave-h-out cross-validation criteria are approximately unbiased estimators of the one-step-ahead and multi-step-ahead mean squared forecast errors, respectively, provided that N, T → ∞. In contrast to well-known results in the literature, the generated-regressor issue can be ignored for forecast combination, without restrictions on the relation between N and T. Simulations show that the Mallows model averaging and leave-h-out cross-validation averaging methods yield lower mean squared forecast errors than alternative model selection and averaging methods such as AIC, BIC, cross-validation, and Bayesian model averaging. We apply the proposed methods to the U.S. macroeconomic data set in Stock and Watson (2012) and find that they compare favorably to many popular shrinkage-type forecasting methods.
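For intuition on the Mallows combination weights mentioned above, the sketch below computes Hansen (2007)-style Mallows model-averaging weights for a small set of least-squares candidate models. The simulated data and the nested two-model setup are assumptions for illustration, not the paper's factor-augmented, multi-step-ahead implementation.

```python
# Minimal sketch of Mallows-type model averaging: choose weights on the simplex
# that minimize the in-sample fit penalized by 2 * sigma^2 * (weighted model size).
import numpy as np
from scipy.optimize import minimize

def mallows_weights(y, X_list):
    """y: (T,) target; X_list: list of (T, k_m) regressor matrices, one per model."""
    T = len(y)
    fits, ks = [], []
    for X in X_list:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        fits.append(X @ beta)
        ks.append(X.shape[1])
    fits, ks = np.column_stack(fits), np.array(ks)
    # Error variance estimated from the largest (last) candidate model.
    sigma2 = np.sum((y - fits[:, -1]) ** 2) / (T - ks[-1])

    def criterion(w):
        resid = y - fits @ w
        return resid @ resid + 2.0 * sigma2 * (ks @ w)

    M = len(ks)
    w0 = np.full(M, 1.0 / M)
    res = minimize(criterion, w0, bounds=[(0, 1)] * M,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Toy usage with simulated data and two nested candidate models (assumed setup).
rng = np.random.default_rng(0)
T = 200
x1, x2 = rng.normal(size=T), rng.normal(size=T)
y = 1.0 + 0.8 * x1 + 0.1 * x2 + rng.normal(size=T)
X_small = np.column_stack([np.ones(T), x1])
X_large = np.column_stack([np.ones(T), x1, x2])
print(mallows_weights(y, [X_small, X_large]))
```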
This paper considers the selection of valid and relevant moments for generalized method of moments (GMM) estimation. For applications with many candidate moments, our asymptotic analysis accommodates a diverging number of moments as the sample size increases. The proposed procedure achieves three objectives in one step: (i) the valid and relevant moments are selected simultaneously rather than sequentially; (ii) all desired moments are selected together instead of in a stepwise manner; (iii) the parameter of interest is automatically estimated with all selected moments, as opposed to a post-selection estimation. The new moment selection method is achieved via an information-based adaptive GMM shrinkage estimation, where an appropriate penalty is attached to the standard GMM criterion to link moment selection to shrinkage estimation. The penalty is designed to signal both moment validity and relevance for consistent moment selection and efficient estimation. The asymptotic analysis allows for non-smooth sample moments and weakly dependent observations, making it generally applicable. For practical implementation, this one-step procedure is computationally attractive.
We investigate whether two players in a long-run relationship can maintain cooperation when the details of the underlying game are unknown. Specifically, we consider a new class of repeated games with private monitoring, where an unobservable state of the world influences the payoff functions and/or the monitoring structure. Each player privately learns the state over time, but cannot observe what the opponent learns. We show that there are robust equilibria where players eventually obtain payoffs as if the true state were common knowledge and players played a “belief-free” equilibrium. The result is applied to various examples, including secret price-cutting with unknown demand.
We study perfect information games with an infinite horizon played by an arbitrary number of players. This class of games includes infinitely repeated perfect information games, repeated games with asynchronous moves, games with long and short run players, games with overlapping generations of players, and canonical non-cooperative models of bargaining. We consider two restrictions on equilibria. An equilibrium is purifiable if close-by behavior is consistent with equilibrium when agents' payoffs at each node are perturbed additively and independently. An equilibrium has bounded recall if there exists K such that at most one player's strategy depends on what happened more than K periods earlier. We show that only Markov equilibria have bounded recall and are purifiable. Thus, if a game has at most one long-run player, all purifiable equilibria are Markov.