This paper examines the importance of realized volatility in bond yield density prediction. We incorporate realized volatility into a Dynamic Nelson-Siegel (DNS) model with stochastic volatility and evaluate its predictive performance on US bond yield data. When compared to popular specifications in the DNS literature without realized volatility, we find that having this information improves density forecasting performance. Download Paper
Asymmetric information is an important source of inefficiency when an asset (such as a firm) is transacted. The two main sources of this asymmetry are the unobserved idiosyncratic characteristics of the asset (such as future profitability) and unobserved idiosyncratic choices (like secret price cuts). Buyers may use noisy signals (such as sales) in order to infer actions and characteristics. In this situation, does the seller prefer to release information fast or slowly? Is it incentive compatible? When the market is pessimistic, is it better to give up or keep signaling? We introduce hidden actions in a dynamic signaling model in order to answer these questions. Separation is found to be fast in equilibrium when sending highly informative signals is efficient. When the market is pessimistic about the quality of the asset, depending on the cost structure, the seller either “gives up” by stopping signaling, or “rushes out” by increasing the informativeness of the signal. We find that the unobservability of the action causes equilibrium effort to be too low and the seller to stop signaling too early. The model can be applied to education where grades depend on students’ effort, which is endogenously related to their skills. Download Paper
This paper studies the selection of valid and relevant moments for the generalized method of moments (GMM) estimation. For applications with many candidate moments, our asymptotic analysis accommodates a diverging number of moments as the sample size increases. The proposed procedure achieves three objectives in one-step: (i) the valid and relevant moments are distinguished from the invalid or irrelevant ones; (ii) all desired moments are selected in one step instead of in a stepwise manner; (iii) the parameters of interest are automatically estimated with all selected moments as opposed to a post-selection estimation. The new method performs moment selection and efficient estimation simultaneously via an information-based adaptive GMM shrinkage estimation, where an appropriate penalty is attached to the standard GMM criterion to link moment selection to shrinkage estimation. The penalty is designed to signal both moment validity and relevance for consistent moment selection. We develop asymptotic results for the high-dimensional GMM shrinkage estimator, allowing for non-smooth sample moments and weakly dependent observations. For practical implementation, this one-step procedure is computationally attractive. Download Paper
This paper considers forecast combination with factor-augmented regression. In this framework, a large number of forecasting models are available, varying by the choice of factors and the number of lags. We investigate forecast combination across models using weights that minimize the Mallows and the leave-h-out cross validation criteria. The unobserved factor regressors are estimated by principal components of a large panel with N predictors over T periods. With these generated regressors, we show that the Mallows and leave-h-out cross validation criteria are asymptotically unbiased estimators of the one-step-ahead and multi-step-ahead mean squared forecast errors, respectively, provided that N, T → ∞. (However, the paper does not establish any optimality properties for the methods.) In contrast to well-known results in the literature, this result suggests that the generated-regressor issue can be ignored for forecast combination, without restrictions on the relation between N and T. Simulations show that the Mallows model averaging and leave-h-out cross-validation averaging methods yield lower mean squared forecast errors than alternative model selection and averaging methods such as AIC, BIC, cross validation, and Bayesian model averaging. We apply the proposed methods to the U.S. macroeconomic data set in Stock and Watson (2012) and find that they compare favorably to many popular shrinkage-type forecasting methods. Download Paper
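The combination step described in this abstract can be illustrated with a small numerical sketch. The two-model universe, the simulated panel, and the Hansen-style Mallows criterion below are all illustrative assumptions of ours, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel of N predictors over T periods, driven by two latent factors
T, N = 200, 50
F = rng.standard_normal((T, 2))                  # true factors
Lam = rng.standard_normal((N, 2))
X = F @ Lam.T + rng.standard_normal((T, N))
y = 0.8 * F[:, 0] + 0.3 * F[:, 1] + 0.5 * rng.standard_normal(T)

# Estimate the factors by principal components of the demeaned panel
Xs = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
Fhat = U[:, :2] * np.sqrt(T)                     # estimated factors (up to rotation)

def ols_fit(Z, y):
    """Return fitted values and the number of regressors."""
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Z @ b, Z.shape[1]

# Two candidate forecasting models: one estimated factor vs. two
fit1, k1 = ols_fit(Fhat[:, :1], y)
fit2, k2 = ols_fit(Fhat[:, :2], y)

# Mallows criterion for the combined forecast:
#   C(w) = ||y - w*fit1 - (1-w)*fit2||^2 + 2*sigma2*(w*k1 + (1-w)*k2)
resid2 = y - fit2
sigma2 = resid2 @ resid2 / (T - k2)              # error variance from the largest model
grid = np.linspace(0, 1, 101)
crit = [np.sum((y - w * fit1 - (1 - w) * fit2) ** 2)
        + 2 * sigma2 * (w * k1 + (1 - w) * k2) for w in grid]
w_star = grid[int(np.argmin(crit))]
print(f"Mallows weight on the 1-factor model: {w_star:.2f}")
```

The penalty term 2·σ²·(w·k1 + (1−w)·k2) is what distinguishes the Mallows criterion from a pure least-squares fit of the combined forecast.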
We analyze a model in which agents make investments and then match into pairs to create a surplus. The agents can make transfers to reallocate their pretransfer ownership claims on the surplus. Mailath, Postlewaite, and Samuelson (2013) showed that when investments are unobservable, equilibrium investments are generally inefficient. In this paper we work with a more structured model that is sufficiently tractable to analyze the nature of the investment inefficiencies. We provide conditions under which investment is inefficiently high or low and conditions under which changes in the pretransfer ownership claims on the surplus will be Pareto improving, as well as examine how the degree of heterogeneity on either side of the market affects investment efficiency. Download Paper
A recent literature has developed that combines two prominent empirical approaches to ex ante policy evaluation: randomized controlled trials (RCT) and structural estimation. The RCT provides a “gold-standard” estimate of a particular treatment, but only of that treatment. Structural estimation provides the capability to extrapolate beyond the experimental treatment, but is based on untestable assumptions and is subject to structural data mining. Combining the approaches by holding out from the structural estimation exercise either the treatment or control sample allows for external validation of the underlying behavioral model. Although intuitively appealing, this holdout methodology is not well grounded. For instance, it is easy to show that it is suboptimal from a Bayesian perspective. Using a stylized representation of a randomized controlled trial, we provide a formal rationale for the use of a holdout sample in an environment in which data mining poses an impediment to the implementation of the ideal Bayesian analysis and a numerical illustration of the potential benefits of holdout samples. Download Paper
We study the nonparametric identification and estimation of a structural model for committee decisions. Members of a committee share a common information set, but differ in ideological bias while processing multiple information sources and in individual tastes while weighing multiple objectives. We consider two cases of the model, in which committee members do or do not have strategic incentives for making recommendations that conform with the committee decision. For both cases, pure-strategy Bayesian Nash equilibria exist, and we show how to use variations in the common information set to recover the distribution of members' private types from individual recommendation patterns. Building on the identification result, we estimate a structural model of interest rate decisions by the Monetary Policy Committee (MPC) at the Bank of England. We find some evidence that recommendations from external committee members are less distorted by strategic incentives than those from internal members. There is also evidence that MPC members differ more in their tastes for multiple objectives than in ideological bias. Download Paper
This paper studies entry and capacity decisions by dialysis providers in the U.S. We estimate a structural model where providers make strategic continuous choices of capacities based on private information about their own costs and beliefs about competitors’ behaviors. We evaluate the impact on market structure and provider profits under counterfactual regulatory policies that increase per-capacity costs or reduce per-capacity payments. We find that these policies reduce the market capacity of dialysis stations. However, the downward-sloping reaction curve shields some providers from negative profit shocks in certain markets. The paper also makes a methodological contribution in that it proposes new estimators for Bayesian games with continuous actions, which differ qualitatively from discrete Bayesian games such as those with binary entry decisions. Download Paper
Bidders’ risk attitudes have key implications for choices of revenue-maximizing auction formats. In ascending auctions, bid distributions do not provide information about risk preference. We infer risk attitudes using distributions of transaction prices and participation decisions in ascending auctions with entry costs. Nonparametric tests are proposed for two distinct scenarios: first, the expected entry cost can be consistently estimated from data; second, the data does not report entry costs but contains exogenous variations of potential competition and auction characteristics. In the first scenario, we exploit the fact that the risk premium required for entry - the difference between ex ante expected profits from entry and the certainty equivalent - is strictly positive if and only if bidders are risk averse. Our test is based on identification of bidders’ ex ante profits. In the second scenario, our test builds on the fact that risk attitudes affect how equilibrium entry probabilities vary with observed auction characteristics and potential competition. We also show identification of risk attitudes in a more general model of ascending auctions with selective entry, where bidders receive entry-stage signals that are correlated with private values. Download Paper
This paper studies the nonparametric identification and estimation of voters’ preferences when voters are ideological. We establish that voter preference distributions and other parameters of interest can be identified from aggregate electoral data. We also show that these objects can be consistently estimated and illustrate our analysis by performing an actual estimation using data from the 1999 European Parliament elections. Download Paper
We introduce an approach for semi-parametric inference in dynamic binary choice models that does not impose distributional assumptions on the state variables unobserved by the econometrician. The proposed framework combines Bayesian inference with partial identification results. The method is applicable to models with a finite space of observed states. We demonstrate the method on Rust's model of bus engine replacement. The estimation experiments show that the parametric assumptions about the distribution of the unobserved states can have a considerable effect on the estimates of per-period payoffs. At the same time, the effect of these assumptions on counterfactual conditional choice probabilities can be small for most of the observed states. Download Paper
We develop an empirical methodology to study markets for services. These markets are typically organized as multi-attribute auctions in which buyers take into account a seller's price as well as various characteristics, including quality. Our identification and estimation strategies exploit observed buyers' and sellers' decisions to recover the distribution of sellers' qualities, the distribution of sellers' costs conditional on quality, and the distribution of buyers' tastes. Our empirical results from the online market for programming services confirm that quality plays an important role. We use our estimates to study the effect of licensing restrictions and to assess the loss of value from using standard rather than multi-attribute auctions as is common in public procurement. Download Paper
A deranged publisher decided to produce a volume of some of my papers and asked me to write some comments. Since these amount to a summary of my views about international trade theory over the latest forty years or so, I’m giving the comments a separate alternative existence as a discussion paper. Download Paper
This paper introduces a model of endogenous growth through basic and applied research. Basic research differs from applied research in the nature and the magnitude of the generated spillovers. We propose a novel way of empirically identifying these spillovers and embed them in a general equilibrium framework with private firms and a public research sector. After characterizing the equilibrium, we estimate our model using micro-level data on research expenditures by French firms. Our key finding is that standard R&D policies can accentuate the dynamic misallocation in the economy. We also find a strong complementarity between the property rights of basic research and the optimal funding of public research. Download Paper
I study a dynamic one-sided-offer bargaining model between a seller and a buyer under incomplete information. The seller knows the quality of his product while the buyer does not. During bargaining, the seller randomly receives an outside option, the value of which depends on the hidden quality. If the outside option is sufficiently important, there is an equilibrium in which the uninformed buyer fails to learn the quality and continues to make the same randomized offer throughout the bargaining process. As a result, the equilibrium behavior produces an outcome path that resembles the outcome of a bargaining deadlock and its resolution. The equilibrium with deadlock has inefficient outcomes such as a delay in reaching an agreement and a breakdown in negotiations. Bargaining inefficiencies do not vanish even with frequent offers, and they may exist when there is no static adverse selection problem. Under stronger parametric assumptions, the equilibrium with deadlock is unique under a monotonicity criterion, and all equilibria exhibit inefficient outcomes. Download Paper
The dominant academic literature about trade agreements maintains that they are only about national terms-of-trade manipulation and not at all about purely political concerns. Non-academic economists, commentators, and diplomats, by contrast, think that trade agreements are all about political concerns. There are two substantive and important distinctions between the two views: (i) practitioners maintain that policymakers care virtually not at all about the terms of trade or about trade-tax revenue; (ii) practitioners, unlike academics, maintain that trade-agreement negotiations themselves change the underlying political economy. Observation of actual trade policy measures, though not conclusive, suggests that the practitioners are right and that the academics are wrong. Download Paper
We study students' dropout behavior and its consequences in a dynamic signaling model. Workers pay an education cost per unit of time and cannot commit to a fixed education length. Workers face an exogenous dropout risk before graduation. Since low-productivity students' cost is high, pooling with early dropouts helps them to avoid a high education cost. In equilibrium, low-productivity students choose to endogenously drop out over time, so the productivity of students in college increases along the education process. We find that the maximum education length is decreasing in the prior about a student being highly productive. We characterize the joint dynamics of returns to education and the dropout rate and provide an explanation of the declining dropout rate over the time students spend in school. We also extend the baseline model by allowing human capital accumulation and show that the dynamics of the dropout rate are helpful in decomposing the returns to education into the signaling effect and the human capital accumulation effect. Download Paper
We consider Coasian bargaining problems where the buyer has an outside option arriving at a stochastic time. We study both observable and unobservable outside option models. In both models, we show that a Coasian equilibrium exists if (1) the arrival of the outside option is public, or (2) the arrival of the outside option is private but the arrival probability is small enough. In such an equilibrium, (1) the seller makes multiple rounds of offers, and (2) the Coase conjecture holds for an arbitrarily large arrival rate of the outside option. The result also applies to the time-varying outside option model. This exercise helps us to understand the sharp difference between Board and Pycia (2013), where the buyer's outside option is always available, and the standard Coasian bargaining literature, where the buyer has no outside option. Download Paper
We propose a new classification of consumption goods into nondurable goods, durable goods and a new class which we call “memorable” goods. A good is memorable if a consumer can draw current utility from its past consumption experience through memory. We propose a novel consumption-savings model in which a consumer has a well-defined preference ordering over both nondurable goods and memorable goods. Memorable goods consumption differs from nondurable goods consumption in that current memorable goods consumption may also impact future utility through the accumulation process of the stock of memory. In our model, households optimally choose a lumpy profile of memorable goods consumption even in a frictionless world. Using Consumer Expenditure Survey data, we then document levels and volatilities of different groups of consumption goods expenditures, as well as their expenditure patterns, and show that the expenditure patterns on memorable goods indeed differ significantly from those on nondurable and durable goods. Finally, we empirically evaluate our model's predictions with respect to the welfare cost of consumption fluctuations and conduct an excess-sensitivity test of the consumption response to predictable income changes. We find that (i) the welfare cost of household-level consumption fluctuations may be overstated by 1.7 percentage points (11.9% as opposed to 13.6% of permanent consumption) if memorable goods are not appropriately accounted for; (ii) the finding of excess sensitivity of consumption documented in important papers of the literature might be entirely due to the presence of memorable goods. Download Paper
Montenegro, newly independent since 2006, saw its commodity exports collapse in the worldwide financial crisis of 2008. It took three years for the volume of its exports to recover. Using one- to four-digit Standard International Trade Classification (SITC) commodity trade data, this paper analyzes trade patterns as they evolve, both globally and within individual product sectors, from independence to 2012. The Kellman-Shachmurove Trade Specialization Index (TSI) is employed to study the degree of Montenegrin specialization. The paper warns about a high degree of specialization, with an overreliance on commodity exports of aluminum alloys. Download Paper
This paper, prepared for the Handbook of Game Theory, volume 4 (Peyton Young and Shmuel Zamir, editors, Elsevier Press), surveys work on reputations in repeated games of incomplete information. Download Paper
We consider all-pay auctions in the presence of interdependent, affiliated valuations and private budget constraints. For the sealed-bid, all-pay auction we characterize a symmetric equilibrium in continuous strategies for the case of N bidders. Budget constraints encourage more aggressive bidding among participants with large endowments and intermediate valuations. We extend our results to the war of attrition where we show that budget constraints lead to a uniform amplification of equilibrium bids among bidders with sufficient endowments. An example shows that with both interdependent valuations and private budget constraints, a revenue ranking between the two auction formats is generally not possible. Equilibria with discontinuous bidding strategies are discussed. Download Paper
This paper develops methods to study the evolution of agents’ expectations and uncertainty in general equilibrium models. A central insight consists of recognizing that the evolution of agents' beliefs can be captured by defining a set of regimes that are characterized by the degree of agents' pessimism, optimism, and uncertainty about future equilibrium outcomes. Once this kind of structure is imposed, it is possible to create a mapping between the evolution of agents' beliefs and observable outcomes. Agents in the model are fully rational, conduct Bayesian learning, and they know that they do not know. Therefore, agents form expectations taking into account that their beliefs will evolve according to what they observe in the future. The new modeling framework accommodates both gradual and abrupt changes in agents' beliefs and allows an analytical characterization of uncertainty. Shocks to beliefs are shown to have both first-order and second-order effects. To illustrate how to apply the methods, we use a prototypical Real Business Cycle model in which households form beliefs about the likely duration of high-growth and low-growth regimes. Download Paper
We develop a theoretical framework to quantitatively assess the general equilibrium effects and welfare implications of central bank reputation and transparency. Monetary policy alternates between periods of active inflation stabilization and periods during which the emphasis on inflation stabilization is reduced. When the central bank only engages in short deviations from active monetary policy, inflation expectations remain anchored and the model captures the monetary approach described as constrained discretion. However, if the central bank deviates for a prolonged period of time, agents gradually become pessimistic about future monetary policy, the disanchoring of inflation expectations occurs, and uncertainty rises. Reputation determines the speed with which agents’ pessimism accelerates once the central bank starts deviating. When the model is fitted to U.S. data, we find that the Federal Reserve can accommodate contractionary technology shocks for up to five years before inflation expectations take off. Increasing transparency would improve welfare by anchoring agents’ expectations. Gains from transparency are even more sizeable for countries whose central banks have a weak reputation. Download Paper
This paper empirically evaluates the effects of college admissions policies on high school student performance. To this end, I build a model where high school students decide their level of effort and whether to take the college admissions test, taking into consideration how those decisions may affect their future university admission chances. Using Chilean data for the 2009 college admissions process, I structurally estimate the parameters of the model in order to study the implications of two types of counterfactual experiments: (a) a SES-Quota system, which imposes the population’s SES distribution for each university; (b) increasing the high school GPA weight. The results from these exercises support the claim that equalizing college opportunities may boost the effort exerted by high school students. Specifically, I find that: (1) Average effort significantly increases as opportunities are equalized across different socioeconomic groups. (2) There is a moderate improvement in high school student performance, which is relatively important for certain groups. (3) The highest reactions in terms of exerted effort come from those students who also change their decision about taking the college admissions test. (4) Neither of these policies increases the percentage of students taking the national test for college admission, which is consistent with the fact that in this policy implementation there are winners and losers. However, there are relevant variations in who takes such a test; in particular, this percentage increases for low-income students and those with higher levels of learning skills. (5) Because the SES-Quota system uses the existing information more efficiently, it implies a more efficient student allocation to equalize opportunities. Download Paper
The art of rhetoric may be defined as changing other people’s minds (opinions, beliefs) without providing them new information. One technique heavily used by rhetoric employs analogies. Using analogies, one may draw the listener’s attention to similarities between cases and to re-organize existing information in a way that highlights certain regularities. In this paper we offer two models of analogies, discuss their theoretical equivalence, and show that finding good analogies is a computationally hard problem. Download Paper
We investigate whether two players in a long-run relationship can maintain cooperation when the details of the underlying game are unknown. Specifically, we consider a new class of repeated games with private monitoring, where an unobservable state of the world influences the payoff functions and/or the monitoring structure. Each player privately learns the state over time but cannot observe what the opponent learned. We show that there are robust equilibria in which players eventually obtain payoffs as if the true state were common knowledge and players played a “belief-free” equilibrium. We also provide explicit equilibrium constructions in various economic examples. Download Paper
Many violations of the Independence axiom of Expected Utility can be traced to subjects' attraction to risk-free prospects. Negative Certainty Independence, the key axiom in this paper, formalizes this tendency. Our main result is a utility representation of all preferences over monetary lotteries that satisfy Negative Certainty Independence together with basic rationality postulates. Such preferences can be represented as if the agent were unsure of how risk averse to be when evaluating a lottery p; instead, she has in mind a set of possible utility functions over outcomes and displays a cautious behavior: she computes the certainty equivalent of p with respect to each possible function in the set and picks the smallest one. The set of utilities is unique in a well-defined sense. We show that our representation can also be derived from a ‘cautious’ completion of an incomplete preference relation. Download Paper
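The cautious evaluation described in this abstract lends itself to a direct numerical illustration. This is a minimal sketch; the CRRA utility family and all numbers are our own choices, not the paper's:

```python
import numpy as np

# The agent entertains a set of risk-aversion levels (CRRA coefficients),
# computes the certainty equivalent of the lottery under each utility in the
# set, and values the lottery at the smallest certainty equivalent.

def crra(x, gamma):
    """CRRA utility; gamma = 1 is the log-utility limit."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if gamma == 1.0 else (x ** (1 - gamma) - 1) / (1 - gamma)

def crra_inv(u, gamma):
    """Inverse of CRRA utility, used to recover the certainty equivalent."""
    return np.exp(u) if gamma == 1.0 else (u * (1 - gamma) + 1) ** (1 / (1 - gamma))

def cautious_certainty_equivalent(outcomes, probs, gammas):
    """Minimum, over the utility set, of the certainty equivalent of lottery p."""
    ces = [crra_inv(np.dot(probs, crra(outcomes, g)), g) for g in gammas]
    return min(ces)

# Lottery p: 100 or 25 with equal probability; utility set {gamma = 0.5, 2.0}
outcomes, probs = [100.0, 25.0], [0.5, 0.5]
ce = cautious_certainty_equivalent(outcomes, probs, gammas=[0.5, 2.0])
print(round(float(ce), 2))  # → 40.0
```

With these two utilities, the more risk-averse one (gamma = 2) yields the smaller certainty equivalent (40 versus 56.25 for gamma = 0.5), so the cautious agent values the lottery at 40 even though its expected value is 62.5.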
We propose a novel method to estimate dynamic equilibrium models with stochastic volatility. First, we characterize the properties of the solution to this class of models. Second, we take advantage of the results about the structure of the solution to build a sequential Monte Carlo algorithm to evaluate the likelihood function of the model. The approach, which exploits the profusion of shocks in stochastic volatility models, is versatile and computationally tractable even in large-scale models, such as those often employed by policy-making institutions. As an application, we use our algorithm and Bayesian methods to estimate a business cycle model of the U.S. economy with both stochastic volatility and parameter drifting in monetary policy. Our application shows the importance of stochastic volatility in accounting for the dynamics of the data. Download Paper
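The likelihood-evaluation step mentioned in this abstract can be sketched with a bootstrap particle filter on a textbook stochastic-volatility model. This is a deliberately simplified stand-in for the paper's algorithm and model; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stochastic-volatility model:
#   h_t = mu + phi*(h_{t-1} - mu) + sig*eta_t,   y_t = exp(h_t / 2) * eps_t,
# with eta_t, eps_t ~ N(0, 1).
mu, phi, sig = -1.0, 0.95, 0.2
T = 100
h = np.empty(T)
h[0] = mu + sig / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t-1] - mu) + sig * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)

def particle_loglik(y, mu, phi, sig, n_particles=2000, seed=2):
    """Bootstrap particle filter estimate of the log-likelihood."""
    rs = np.random.default_rng(seed)
    # Initialize particles from the stationary distribution of h
    hp = mu + sig / np.sqrt(1 - phi**2) * rs.standard_normal(n_particles)
    loglik = 0.0
    for t in range(len(y)):
        # Propagate particles through the volatility transition
        hp = mu + phi * (hp - mu) + sig * rs.standard_normal(n_particles)
        # Weight by the measurement density N(0, exp(h))
        var = np.exp(hp)
        logw = -0.5 * (np.log(2 * np.pi * var) + y[t] ** 2 / var)
        m = logw.max()                     # subtract the max to avoid underflow
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        # Multinomial resampling
        hp = rs.choice(hp, size=n_particles, p=w / w.sum())
    return loglik

ll = particle_loglik(y, mu, phi, sig)
print(f"particle-filter log-likelihood: {ll:.1f}")
```

The max-subtraction before exponentiating the log-weights is the usual guard against numerical underflow when the measurement densities are small.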
In this paper we characterize quantitatively the optimal mix of progressive income taxes and education subsidies in a model with endogenous human capital formation, borrowing constraints, income risk, and incomplete financial markets. Progressive labor income taxes provide social insurance against idiosyncratic income risk and redistribute after-tax income among ex-ante heterogeneous households. In addition to the standard distortions of labor supply, progressive taxes also impede the incentives to acquire higher education, generating a non-trivial trade-off for the benevolent utilitarian government. The latter distortion can potentially be mitigated by an education subsidy. We find that the welfare-maximizing fiscal policy is indeed characterized by a substantially progressive labor income tax code and a positive subsidy for college education. Both the degree of tax progressivity and the education subsidy are larger than in the current U.S. status quo. Download Paper
We study dynamic bargaining with uncertainty over the buyer's valuation and the seller's outside option. A long-lived seller makes offers to a long-lived buyer whose value is private information. There may exist a short-lived buyer whose value is higher than that of the long-lived buyer. The arrival of the short-lived buyer, if she exists, is determined by a Poisson process. We characterize the unique equilibrium. The equilibrium displays interesting price fluctuations: in some periods, the seller charges a high price unacceptable to the long-lived buyer, in the hope that the short-lived buyer will appear in that period; in the other periods, he offers a price attractive to some values of the long-lived buyer. The price dynamics result from the interaction between two learning processes: exogenous learning about the existence of short-lived buyers, and endogenous learning about the long-lived buyer's value. Download Paper
We develop a theoretical framework to account for the observed instability of the link between inflation and fiscal imbalances across time and countries. Current policy makers’ behavior influences agents’ beliefs about the way debt will be stabilized. The standard policy mix consists of a virtuous fiscal authority that moves taxes in response to debt and a central bank that has full control over inflation. When policy makers deviate from this Virtuous regime, agents conduct Bayesian learning to infer the likely duration of the deviation. As agents observe more and more deviations, they become increasingly pessimistic about a prompt return to the Virtuous regime and inflation starts drifting in response to a fiscal imbalance. Shocks that were dormant under the Virtuous regime now start manifesting themselves. These changes are initially imperceptible, can unfold over decades, and accelerate as agents’ beliefs deteriorate. Dormant shocks explain the run-up of US inflation and uncertainty in the ’70s. The currently low long-term interest rates and inflation expectations might hide the true risk of inflation faced by the US economy. Download Paper