This paper considers the problem of forecasting a collection of short time series using cross-sectional information in panel data. We construct point predictors using Tweedie's formula for the posterior mean of heterogeneous coefficients under a correlated random effects distribution. This formula utilizes cross-sectional information to transform the unit-specific (quasi) maximum likelihood estimator into an approximation of the posterior mean under a prior distribution that equals the population distribution of the random coefficients. We show that the risk of a predictor based on a non-parametric estimate of the Tweedie correction is asymptotically equivalent to the risk of a predictor that treats the correlated-random-effects distribution as known (ratio-optimality). Our empirical Bayes predictor performs well compared to various competitors in a Monte Carlo study. In an empirical application we use the predictor to forecast revenues for a large panel of bank holding companies and compare forecasts that condition on actual and severely adverse macroeconomic conditions.
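The mechanics of a Tweedie correction can be sketched in a few lines. Below is a minimal Python illustration, assuming homoskedastic Gaussian noise and a Gaussian kernel estimate of the marginal density of the unit-specific estimates; the function name and the toy data are ours, not the paper's, and the paper's actual estimator is considerably more general.

```python
import numpy as np

def tweedie_correction(theta_hat, sigma2, bandwidth=None):
    """Empirical-Bayes posterior-mean approximation via Tweedie's formula:
    E[theta_i | theta_hat_i] ~ theta_hat_i + sigma2 * d/dx log p_hat(theta_hat_i),
    where p_hat is a Gaussian kernel estimate of the marginal density of the
    noisy unit-specific estimates theta_hat."""
    n = len(theta_hat)
    if bandwidth is None:
        bandwidth = 1.06 * theta_hat.std() * n ** (-1 / 5)  # Silverman's rule
    # pairwise differences feed both the kernel density and its derivative
    diff = (theta_hat[:, None] - theta_hat[None, :]) / bandwidth
    k = np.exp(-0.5 * diff ** 2)                  # unnormalized Gaussian kernel
    dens = k.sum(axis=1)                          # proportional to p_hat
    ddens = (-diff / bandwidth * k).sum(axis=1)   # proportional to p_hat'
    score = ddens / dens                          # d/dx log p_hat (norm. cancels)
    return theta_hat + sigma2 * score

# toy check: true effects from N(0, 1), noisy estimates with noise variance 1
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=500)
theta_hat = theta + rng.normal(0.0, 1.0, size=500)
post = tweedie_correction(theta_hat, sigma2=1.0)
# the correction shrinks the dispersed raw estimates toward the center
print(theta_hat.std(), post.std())
```

In this Gaussian toy setup the exact posterior mean is linear shrinkage by a factor of one half, and the nonparametric correction approximates it without being told the prior.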
The accuracy of particle filters for nonlinear state-space models crucially depends on the proposal distribution that mutates time t − 1 particle values into time t values. In the widely-used bootstrap particle filter this distribution is generated by the state-transition equation. While straightforward to implement, the practical performance is often poor. We develop a self-tuning particle filter in which the proposal distribution is constructed adaptively through a sequence of Monte Carlo steps. Intuitively, we start from a measurement error distribution with an inflated variance, and then gradually reduce the variance to its nominal level in a sequence of steps that we call tempering. We show that the filter generates an unbiased and consistent approximation of the likelihood function. Holding the run time fixed, our filter is substantially more accurate in two DSGE model applications than the bootstrap particle filter.
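The tempering idea can be illustrated for a single time step. The sketch below is a deliberately simplified scalar version with a fixed tempering schedule and no mutation step; the paper's algorithm chooses the schedule adaptively and adds a mutation step, so this is an illustration of the mechanism, not the full method.

```python
import numpy as np

rng = np.random.default_rng(1)

def tempered_step(particles, y, sigma_me, phis):
    """One time-t tempering pass: the measurement variance starts inflated
    (sigma_me**2 / phi with small phi) and shrinks to its nominal level as
    phi -> 1, with resampling between stages. Illustrative sketch only."""
    n = len(particles)
    w = np.ones(n) / n
    phi_prev = 0.0
    for phi in phis:  # e.g. [0.1, 0.4, 1.0]; chosen adaptively in the paper
        # incremental weight: likelihood at level phi relative to phi_prev
        incr = np.exp(-0.5 * (phi - phi_prev) * (y - particles) ** 2
                      / sigma_me ** 2)
        w = w * incr
        w = w / w.sum()
        # multinomial resampling keeps the particle cloud balanced
        idx = rng.choice(n, size=n, p=w)
        particles = particles[idx]
        w = np.ones(n) / n
        # (a mutation / move step would go here to restore diversity)
        phi_prev = phi
    return particles

# toy example: prior particles N(0, 1), observation y = 2, noise sd 0.5
particles = rng.normal(0.0, 1.0, size=5000)
post = tempered_step(particles, y=2.0, sigma_me=0.5, phis=[0.1, 0.4, 1.0])
print(post.mean())  # exact posterior mean in this conjugate setup is 1.6
```

Because the example is linear-Gaussian, the exact posterior is available (mean 0.8 × y = 1.6), which makes it easy to check that the tempered pass lands in the right place.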
This chapter provides an overview of solution and estimation techniques for dynamic stochastic general equilibrium (DSGE) models. We cover the foundations of numerical approximation techniques as well as statistical inference and survey the latest developments in the field.
Recent work has analyzed the forecasting performance of standard dynamic stochastic general equilibrium (DSGE) models, but little attention has been given to DSGE models that incorporate nonlinearities in exogenous driving processes. Against that background, we explore whether incorporating stochastic volatility improves DSGE forecasts (point, interval, and density). We examine real-time forecast accuracy for key macroeconomic variables including output growth, inflation, and the policy rate. We find that incorporating stochastic volatility in DSGE models of macroeconomic fundamentals markedly improves their density forecasts, just as incorporating stochastic volatility in models of financial asset returns improves their density forecasts.
We propose and solve a small-scale New-Keynesian model with Markov sunspot shocks that move the economy between a targeted-inflation regime and a deflation regime, and fit it to data from the U.S. and Japan. For the U.S. we find that adverse demand shocks moved the economy to the zero lower bound (ZLB) in 2009 and that expansive monetary policy has kept it there subsequently. In contrast, Japan switched to the deflation regime in 1999 and has remained there since, except for a short period. The two scenarios have drastically different implications for macroeconomic policies. Fiscal multipliers are about 20% smaller in the deflation regime, despite the economy remaining at the ZLB. While a commitment by the central bank to keep rates near the ZLB doubles the fiscal multipliers in the targeted-inflation regime (U.S.), it has no effect in the deflation regime (Japan).
We provide a novel methodology for estimating time-varying weights in linear prediction pools, which we call Dynamic Pools, and use it to investigate the relative forecasting performance of DSGE models with and without financial frictions for output growth and inflation from 1992 to 2011. We find strong evidence of time variation in the pool's weights, reflecting the fact that the DSGE model with financial frictions produces superior forecasts in periods of financial distress but does not perform as well in tranquil periods. The dynamic pool's weights react in a timely fashion to changes in the environment, leading to real-time forecast improvements relative to other methods of density forecast combination, such as Bayesian Model Averaging, optimal (static) pools, and equal weights. We show how a policymaker dealing with model uncertainty could have used a dynamic pool to perform a counterfactual exercise (responding to the gap in labor market conditions) in the immediate aftermath of the Lehman crisis.
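As a point of reference, the optimal (static) pool that the dynamic pool is benchmarked against can be written in a few lines: pick the fixed weight that maximizes the historical log predictive score of the combined density. The grid search, function name, and toy density values below are our illustrative choices, not the paper's.

```python
import numpy as np

def optimal_static_pool(p1, p2, grid=np.linspace(0.0, 1.0, 1001)):
    """Optimal (static) linear prediction pool: choose the weight lam that
    maximizes sum_t log(lam * p1_t + (1 - lam) * p2_t), where p1_t and p2_t
    are the two models' predictive densities evaluated at the realized data.
    A dynamic pool instead lets the weight evolve over time as a persistent
    latent process."""
    scores = [np.sum(np.log(lam * p1 + (1 - lam) * p2)) for lam in grid]
    return grid[int(np.argmax(scores))]

# toy example: model 1's predictive densities are higher in a "crisis"
# subsample, model 2's in tranquil times (hypothetical density values)
p1 = np.array([0.9, 0.8, 0.1, 0.2, 0.85, 0.9])
p2 = np.array([0.2, 0.3, 0.7, 0.8, 0.25, 0.2])
lam = optimal_static_pool(p1, p2)
print(lam)
```

By construction the pooled log score at the chosen weight is at least as high as either model's own log score, since the grid includes the corner weights 0 and 1.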
A recent literature has developed that combines two prominent empirical approaches to ex ante policy evaluation: randomized controlled trials (RCT) and structural estimation. The RCT provides a "gold-standard" estimate of a particular treatment, but only of that treatment. Structural estimation provides the capability to extrapolate beyond the experimental treatment, but is based on untestable assumptions and is subject to structural data mining. Combining the approaches by holding out from the structural estimation exercise either the treatment or control sample allows for external validation of the underlying behavioral model. Although intuitively appealing, this holdout methodology is not well grounded. For instance, it is easy to show that it is suboptimal from a Bayesian perspective. Using a stylized representation of a randomized controlled trial, we provide a formal rationale for the use of a holdout sample in an environment in which data mining poses an impediment to the implementation of the ideal Bayesian analysis and a numerical illustration of the potential benefits of holdout samples.
We provide a new and superior measure of U.S. GDP, obtained by applying optimal signal-extraction techniques to the (noisy) expenditure-side and income-side estimates. Its properties, particularly as regards serial correlation, differ markedly from those of the standard expenditure-side measure and lead to substantially revised views regarding the properties of GDP.
We propose and illustrate a Markov-switching multi-fractal duration (MSMD) model for analysis of inter-trade durations in financial markets. We establish several of its key properties with emphasis on high persistence (indeed long memory). Empirical exploration suggests MSMD's superiority relative to leading competitors.
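The persistence mechanism is easy to see in simulation: the trade intensity is a product of multipliers that renew at very different frequencies, and the slowly renewing ones induce long-lasting autocorrelation in durations. The sketch below is a stylized simulator with parameter values chosen by us for illustration; it is not calibrated to the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_msmd(n, k=6, m0=1.4, gamma_k=None):
    """Simulate inter-trade durations from a stylized Markov-switching
    multifractal duration (MSMD) model: the trade intensity is a product of
    k multipliers, each drawn from {m0, 2 - m0} (so E[multiplier] = 1) and
    renewed independently with its own switching probability; durations are
    exponential given the intensity."""
    if gamma_k is None:
        # geometrically spaced switching probabilities, low to high frequency
        gamma_k = 1 - (1 - 0.5) ** (2.0 ** (np.arange(k) - k + 1))
    mult = rng.choice([m0, 2 - m0], size=k)   # start at stationary draw
    durations = np.empty(n)
    for t in range(n):
        switch = rng.random(k) < gamma_k      # which multipliers renew today
        mult[switch] = rng.choice([m0, 2 - m0], size=switch.sum())
        lam = mult.prod()                     # trade intensity
        durations[t] = rng.exponential(1.0 / lam)
    return durations

d = simulate_msmd(20000)
# the slowly renewing low-frequency multipliers generate the persistent,
# long-memory-like autocorrelation in durations emphasized in the paper
print(d.mean(), np.corrcoef(d[:-1], d[1:])[0, 1])
```

Doubling k adds ever-lower-frequency components, which is what produces the long-memory-like decay of the duration autocorrelations.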
Two often-divergent U.S. GDP estimates are available, a widely-used expenditure-side version, GDPE, and a much less widely-used income-side version, GDPI. We propose and explore a "forecast combination" approach to combining them. We then put the theory to work, producing a superior combined estimate of GDP growth for the U.S., GDPC. We compare GDPC to GDPE and GDPI, with particular attention to behavior over the business cycle. We discuss several variations and extensions.
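In its simplest static form, the combination weights each measure inversely to its measurement-error variance. The sketch below uses hypothetical growth rates and user-supplied variances; the paper estimates the weights and its signal-extraction variant also exploits serial correlation, so this is only the skeleton of the idea.

```python
import numpy as np

def combine_gdp(gdp_e, gdp_i, var_e, var_i):
    """Precision-weighted combination of two noisy measurements of true GDP
    growth: GDP_C = w * GDP_E + (1 - w) * GDP_I with w proportional to the
    inverse measurement-error variance of GDP_E. The variances here are
    assumptions supplied by the user, not estimates."""
    w_e = (1.0 / var_e) / (1.0 / var_e + 1.0 / var_i)
    return w_e * np.asarray(gdp_e) + (1 - w_e) * np.asarray(gdp_i)

# hypothetical quarterly growth rates (annualized, percent), not real data
gdp_e = [2.1, 3.0, -0.5, 1.8]
gdp_i = [2.5, 2.6, -0.1, 2.0]
gdp_c = combine_gdp(gdp_e, gdp_i, var_e=1.0, var_i=1.0)
print(gdp_c)  # equal variances -> a simple average of the two measures
```

With equal assumed variances the combination collapses to the simple average; assigning GDPI a smaller variance would tilt the combined estimate toward the income side.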
This paper considers a prototypical monetary business cycle model for the U.S. economy, in which the equilibrium is undetermined if monetary policy is 'inactive'. In previous multivariate studies it has been common practice to restrict parameter estimates to values for which the equilibrium is unique. We show how the likelihood-based estimation of dynamic stochastic general equilibrium models can be extended to allow for indeterminacies and sunspot fluctuations. We propose a posterior odds test for the hypothesis that the data are best explained by parameters that imply determinacy. Our empirical results show that the Volcker-Greenspan policy regime is consistent with determinacy, whereas the pre-Volcker regime is not. We find that before 1979 non-fundamental sunspot shocks may have contributed significantly to inflation and interest rate volatility, but essentially did not affect output fluctuations.
This paper uses a simple New-Keynesian monetary DSGE model as a prior for a VAR and shows that the resulting model is competitive with standard benchmarks in terms of forecasting and can be used for policy analysis.
We provide a computationally simple method of including and analyzing the effects of sunspot shocks in linear rational expectations models when the equilibrium is indeterminate. Under non-uniqueness, sunspots can affect model dynamics through endogenous forecast errors that do not completely adjust to fundamental shocks alone. We show that sunspot shocks can be modeled as exogenous belief shocks which can be included in the set of fundamentals. By means of a simple example we illustrate that the exact specification of the transmission mechanism of the belief shocks is irrelevant for the solution of the model.
This paper suggests that skill accumulation through past work experience, or "learning-by-doing" (LBD), can provide an important propagation mechanism for initial shocks, as the current labor supply affects future productivity. Our econometric analysis uses a Bayesian approach to combine micro-level panel data with aggregate time series. Formal model evaluation shows that the introduction of the LBD mechanism improves the model's ability to fit the dynamics of aggregate output and hours.
This paper establishes the consistency and limit distribution of minimum distance (MD) estimators for time series models with deterministic or stochastic trends. We consider models that are linear in the variables, but involve nonlinear restrictions across parameters. Two complications arise. First, the unrestricted and restricted parameter spaces have to be rotated to separate fast converging components of the MD estimator from slowly converging components. Second, if the model includes stochastic trends it is desirable to use a random matrix to weight the discrepancy between the unrestricted and restricted parameter estimates. In this case, the objective function of the MD estimator has a stochastic limit. We provide regularity conditions for the nonlinear restriction function that are easier to verify than the stochastic equicontinuity conditions that typically arise from direct estimation of the restricted parameters. We derive the optimal weight matrix when the limit distribution of the unrestricted estimator is mixed normal and propose a goodness-of-fit test based on over-identifying restrictions. As applications, we investigate cointegration regression models, present-value models, and a permanent-income model based on a linear-quadratic dynamic programming problem.
We investigate the role of labor-supply shifts in economic fluctuations. A new VAR identification scheme for labor supply shocks is proposed. Our method provides an alternative identification scheme, which does not rely on "zero restrictions". According to our VAR analysis of post-war U.S. data, labor-supply shifts account for about half the variation in hours and one-fifth of the variation in aggregate output. To assess the role of labor-supply shifts in a more structural framework, estimates from a dynamic stochastic general equilibrium (DSGE) model with stochastic variation in home production technology are compared to those from the VAR.
To generate persistence we augment the standard real business cycle (RBC) model with a "learning by doing" (LBD) mechanism, where current labor supply affects workers' future labor productivity. Our econometric analysis shows that the LBD model fits aggregate data much better than the standard RBC model. We calculate posterior odds for the structural models and formally show that the LBD model more closely mimics the autocorrelation and impulse response patterns that we found in a bivariate VAR analysis.
A unified Bayesian framework for the econometric evaluation of dynamic stochastic general equilibrium (DSGE) models is presented. The evaluation is coherent under misspecification (that is, when all DSGE models in a candidate set have low posterior probability) as well as under correct specification. The framework encompasses many of the existing evaluation schemes as special cases, including Kydland and Prescott's (1996) informal calibration and the traditional macroeconometric approach of judging models according to their ability to track and forecast aggregate time series. A detailed illustrative application of the framework to a standard cash-in-advance model and a liquidity model is provided. The models are evaluated according to their predictions of co-movements between output growth and inflation, and responses to discretionary changes in the growth rate of money supply.
The paper considers the problem of using a vector autoregression (VAR) to forecast a stationary process several periods into the future. If the VAR is misspecified, it might be best to use the loss function under which the forecasts will be evaluated for parameter estimation as well. It is a plausible and straightforward procedure to conduct a model check of the VAR before adopting a loss function estimator. If the VAR is discredited by the data then a loss function estimator is used, otherwise the parameters are estimated by a likelihood-based technique. We calculate the asymptotic prediction risk for such a pre-test procedure under the assumption that the data are generated from a linear process that drifts toward the VAR as the sample size tends to infinity. The pre-test can avoid picking the inferior estimator when the stakes are high. This is confirmed by a small Monte Carlo study. A Bayesian interpretation of loss function estimation and the pre-test procedure is provided. A forecaster places non-zero prior probability on a reference model but finds it too onerous to calculate its posterior predictive distribution. Instead he chooses a prediction procedure based on the VAR that has a small integrated prediction risk.
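A scalar caricature of the pre-test procedure makes the logic concrete: estimate by least squares, run a specification check, and switch to a loss-function estimator only if the model is discredited. The specific check, critical value, and function name below are illustrative choices of ours, not the paper's.

```python
import numpy as np

def pretest_estimator(y, h=4, crit=2.0):
    """Stylized AR(1) version of the pre-test idea: estimate by least squares
    (the likelihood-based choice under Gaussian errors), run a crude residual
    autocorrelation check, and switch to a loss-function estimator (least
    squares on the h-step-ahead prediction equation y_{t+h} ~ rho^h * y_t)
    only if the one-step model is discredited."""
    y = np.asarray(y, dtype=float)
    x, y1 = y[:-1], y[1:]
    rho_ml = (x @ y1) / (x @ x)              # one-step least squares / QMLE
    resid = y1 - rho_ml * x
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    if abs(r1) * np.sqrt(len(resid)) < crit:  # model not discredited
        return rho_ml, "likelihood"
    xh, yh = y[:-h], y[h:]                    # direct h-step regression
    slope = (xh @ yh) / (xh @ xh)             # estimates rho ** h
    rho_loss = np.sign(slope) * np.abs(slope) ** (1.0 / h)
    return rho_loss, "loss-function"

# toy data: correctly specified AR(1) with rho = 0.8, so either branch
# of the pre-test delivers a consistent estimate of rho
rng = np.random.default_rng(3)
y = np.empty(2000)
y[0] = 0.0
for t in range(1, 2000):
    y[t] = 0.8 * y[t - 1] + rng.normal()
rho, label = pretest_estimator(y)
print(rho, label)
```

Under misspecification (for example, omitted moving-average dynamics) the two branches diverge, and the direct h-step regression is the multi-step analogue of the loss-function estimator the abstract describes.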