We characterize the optimal linear tax on capital in an Overlapping Generations model with two-period-lived households facing uninsurable idiosyncratic labor income risk. The Ramsey government internalizes the general equilibrium feedback of private precautionary saving. For logarithmic utility, our full analytical solution of the Ramsey problem shows that the optimal aggregate saving rate is independent of income risk. The optimal time-invariant tax on capital is increasing in income risk. Its sign depends on the extent of risk and on the Pareto weight of future generations. If the Ramsey tax rate that maximizes steady state utility is positive, then implementing this tax rate permanently generates a Pareto-improving transition even if the initial equilibrium is dynamically efficient. We generalize our results to Epstein-Zin-Weil utility and show that the optimal steady state saving rate is increasing in income risk if and only if the intertemporal elasticity of substitution is smaller than 1.
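Why log utility neutralizes risk can be seen in a stripped-down two-period saving problem (a textbook sketch with multiplicative risk η on the return to saving, not the paper's full Ramsey solution): the risk term is additively separable, so it drops out of the first-order condition.

```latex
\max_{s\in(0,1)} \; \log\big((1-s)w\big) + \beta\,\mathbb{E}\big[\log(s\,w\,R\,\eta)\big]
  = \log\big((1-s)w\big) + \beta\log(s\,w\,R) + \beta\,\mathbb{E}[\log\eta]
  \;\Longrightarrow\; s^{*} = \frac{\beta}{1+\beta},
```

independent of the distribution of η, which is the mechanism behind the risk-invariant saving rate reported above.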
We study two wage bargaining games between a firm and multiple workers. We revisit the bargaining game proposed by Stole and Zwiebel (1996a). We show that, in the unique Subgame Perfect Equilibrium, the gains from trade captured by workers who bargain earlier with the firm are larger than those captured by workers who bargain later, as well as larger than those captured by the firm. The resulting equilibrium payoffs are different from those reported in Stole and Zwiebel (1996a), as they are not the Shapley values. We propose a novel bargaining game, the Rolodex game, which follows a simple and realistic protocol. In the unique no-delay Subgame Perfect Equilibrium of this game, the payoffs to the firm and to the workers are their Shapley values.
We study endogenous team formation inside research organizations through the lens of a one-sided matching model with non-cooperative after-match information production. Using our characterization of the equilibria of the production game, we show that equilibrium sorting of workers into teams may be inefficient. Asymmetric effort inefficiency occurs when a productive team is disrupted by a worker who chooses to join a less productive team because there is an equilibrium played inside that team in which she exerts relatively less effort. Stratification inefficiency occurs when a productive team forms, but generates a significant negative externality on the productivity of other teams.
A single unit of a good is to be sold by auction to one of many potential buyers. There are two equally likely states of the world. Potential buyers receive noisy signals of the state of the world. The accuracies of buyers' signals may differ. A buyer's valuation is the sum of a common value component that depends on the state and an idiosyncratic private value component independent of the state. The seller knows nothing about the accuracies of the signals or about buyers' beliefs about the accuracies. It is common knowledge among buyers that the accuracies of the signals are conditionally independent and uniformly bounded below 1 and above 1/2, and nothing more. We demonstrate a modified second-price auction with the property that, for any ε > 0, the seller's expected revenue will be within ε of the highest buyer expected value when the number of buyers is sufficiently large and buyers make undominated bids.
We study preferences over lotteries that pay a specific prize at uncertain future dates: time lotteries. The standard model of time preferences, Expected Discounted Utility (EDU), implies that individuals must be risk seeking in this case. As a motivation, we show in an incentivized experiment that most subjects exhibit the opposite behavior, i.e., they are risk averse over time lotteries (RATL). We then make two theoretical contributions. First, we show that RATL can be captured by a generalization of EDU that is obtained by keeping the postulates of Discounted Utility and Expected Utility. Second, we introduce a new property termed Stochastic Impatience, a risky counterpart of standard Impatience, and show that neither the model above nor substantial generalizations that allow for non-Expected Utility and non-exponential discounting can jointly accommodate Stochastic Impatience and RATL, revealing a fundamental tension between the two.
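The EDU risk-seeking claim is a consequence of the convexity of t ↦ δ^t: by Jensen's inequality, E[δ^T] ≥ δ^{E[T]}, so a lottery over payment dates is weakly preferred to receiving the prize at the mean date for sure. A two-line numerical check (my illustration, not from the paper):

```python
# EDU values a prize paid at a random date T in proportion to E[delta**T].
# Convexity of t -> delta**t gives E[delta**T] >= delta**E[T] (Jensen),
# so EDU weakly prefers the time lottery to its mean date.
delta = 0.9
lottery = 0.5 * delta**0 + 0.5 * delta**2  # paid at t=0 or t=2, each w.p. 1/2
certain = delta**1                         # paid at t=1 for sure (same mean date)
print(lottery, certain, lottery > certain)  # 0.905 0.9 True
```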
We take a machine learning approach to the problem of predicting initial play in strategic-form games, with the goal of uncovering new regularities in play and improving the predictions of existing theories. The analysis is implemented on data from previous laboratory experiments as well as a new data set of 200 games played on Mechanical Turk. We pursue two approaches. First, we use machine learning algorithms to train prediction rules based on a large set of game features. Examination of the games where our algorithm predicts play correctly, but the existing models do not, leads us to introduce a risk aversion parameter that we find significantly improves predictive accuracy. Second, we augment existing empirical models by using play in a set of training games to predict how the models' parameters vary across new games. This modified approach generates better out-of-sample predictions, and provides insight into how and why the parameters vary. These methodologies are not special to the problem of predicting play in games, and may be useful in other contexts.
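A minimal sketch of the first approach, with invented features and placeholder labels (the paper's actual game features and data are not reproduced here):

```python
# Hypothetical sketch: train a prediction rule for initial play from
# hand-constructed game features, then score it out of sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_games = 200
# Invented stand-ins for game features, e.g. the max payoff of each action,
# indicators for dominant or level-1 actions, payoff variance, ...
X = rng.normal(size=(n_games, 6))
y = rng.integers(0, 3, size=n_games)  # placeholder: modal action in a 3x3 game

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # out-of-sample accuracy
```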
When testing a theory, we should ask not just whether its predictions match what we see in the data, but also about its "completeness": how much of the predictable variation in the data does the theory capture? Defining completeness is conceptually challenging, but we show how methods based on machine learning can provide tractable measures of completeness. We also identify a model domain - the human perception and generation of randomness - where measures of completeness can be feasibly analyzed; from these measures we discover there is significant structure in the problem that existing theories have yet to capture.
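One natural way to operationalize this notion (a hedged sketch consistent with the description above; the paper's formal definition may differ in details) is the fraction of the achievable improvement over a naive baseline that the theory delivers, with a flexible machine-learned predictor proxying for the best achievable performance:

```python
def completeness(err_naive: float, err_theory: float, err_ml: float) -> float:
    """Share of the predictable variation captured by a theory.

    err_naive : error of a baseline that ignores the features entirely
    err_theory: error of the theory under test
    err_ml    : error of a flexible ML benchmark (proxy for best achievable)
    """
    return (err_naive - err_theory) / (err_naive - err_ml)

# Example: baseline error 0.50, theory 0.35, ML benchmark 0.30
print(completeness(0.50, 0.35, 0.30))  # 0.75 -> theory closes 75% of the gap
```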
We study a model of sequential learning, where agents choose what kind of information to acquire from a large, fixed set of Gaussian signals with arbitrary correlation. In each period, a short-lived agent acquires a signal from this set of sources to maximize an individual objective. All signal realizations are public. We study the community's asymptotic speed of learning, and characterize the set of sources observed in the long run. A simple property of the correlation structure guarantees that the community learns as fast as possible, and moreover that a "best" set of sources is eventually observed. When the property fails, the community may get stuck in an inefficient set of sources and learn (arbitrarily) slowly. There is a specific, diverse set of possible final outcomes, which we characterize.
Consider a decision-maker who dynamically acquires Gaussian signals that are related by a completely flexible correlation structure. Such a setting describes information acquisition from news sources with correlated biases, as well as aggregation of complementary information from specialized sources. We study the optimal sequence of information acquisitions. Generically, myopic signal acquisitions turn out to be optimal at sufficiently late periods, and in classes of informational environments that we describe, they are optimal from period 1. These results hold independently of the decision problem and its (endogenous or exogenous) timing. We apply these results to characterize dynamic information acquisition in games.
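A minimal sketch of myopic acquisition in this kind of environment (my illustration, assuming the decision-maker cares about a linear functional w'θ of correlated Gaussian attributes θ, observed through noisy coordinate signals):

```python
# Myopic rule: acquire the signal whose single observation most reduces
# the posterior variance of w'theta, given prior covariance Sigma.
import numpy as np

def myopic_choice(Sigma, w, noise_var):
    best_i, best_var = None, np.inf
    for i in range(len(w)):
        col = Sigma[:, i]
        # Gaussian update after observing theta_i + N(0, noise_var) noise
        post = Sigma - np.outer(col, col) / (Sigma[i, i] + noise_var)
        v = w @ post @ w
        if v < best_var:
            best_i, best_var = i, v
    return best_i, best_var

Sigma = np.array([[1.0, 0.6], [0.6, 2.0]])  # correlated attributes
w = np.array([1.0, 1.0])
print(myopic_choice(Sigma, w, noise_var=0.5))  # picks the high-variance source
```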
About two thirds of the political committees registered with the Federal Election Commission do not self-identify their party affiliations. In this paper we propose and implement a novel Bayesian approach to infer the ideological affiliations of political committees from the network of financial contributions among them. In Monte Carlo simulations, we demonstrate that our estimation algorithm achieves very high accuracy in recovering the latent ideological affiliations when the pairwise differences in the ideology groups' connection patterns satisfy a condition known as the Chernoff-Hellinger divergence criterion. We illustrate our approach using campaign finance records from the 2003-2004 election cycle. Using the posterior mode to categorize the ideological affiliations of the political committees, our estimates match the self-reported ideology for 94.36% of the committees that self-reported as Democratic and 89.49% of those that self-reported as Republican.
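The estimation problem is closely related to community detection in stochastic block models. As a simple stand-in for the paper's Bayesian estimator (not reproduced here), a spectral method on a simulated two-block contribution network illustrates the recovery idea:

```python
# Planted two-block network: committees connect more often within their
# ideological group than across groups; recover the groups from the graph.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
n = 200
z = rng.integers(0, 2, n)                           # latent affiliations
P = np.where(z[:, None] == z[None, :], 0.10, 0.01)  # within vs. across rates
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric, no self-loops

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
acc = max((labels == z).mean(), (labels != z).mean())  # up to label swap
print(f"recovery accuracy: {acc:.2f}")
```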
We model the dynamics of discrimination and show how its evolution can identify the underlying cause. We test these theoretical predictions in a field experiment on a large online platform where users post content that is evaluated by other users on the platform. We assign posts to accounts that exogenously vary by gender and history of evaluations. With no prior evaluations, women face significant discrimination, while following a sequence of positive evaluations, the direction of discrimination reverses: posts by women are favored over those by men. According to our theoretical predictions, this dynamic reversal implies discrimination driven by biased beliefs.
We characterize the outcomes of the tertiary education market in a context where borrowing constraints bind, there is a two-tier college system operating under monopolistic competition in which colleges differ in the quality offered, and returns to education depend on the quality of the school attended. College quality, tuition prices, acceptance cut-offs and education demand are all determined in a general equilibrium model and depend on the borrowing constraints faced by households. Our main finding is that subsidized student loan policies can lead to a widening gap in the quality of services provided by higher education institutions. This happens because the demand for elite institutions unambiguously increases when individuals can borrow. This does not happen in non-elite institutions, since relaxing borrowing constraints makes some individuals move from non-elite to elite institutions. The larger increase in demand for elite institutions allows them to increase prices and investment per student. Since investment and average student ability are complementary inputs in the quality production function, elite universities also increase their acceptance cut-offs. In the new equilibrium, the differentiation of the product offered by colleges increases: elite universities provide higher quality to high-ability students and non-elite universities offer lower quality to less-able students. We illustrate the main results through a numerical exercise applied to Colombia, which implemented massive student loan policies during the last decade and experienced a widening gap in the quality of education provided by elite and non-elite universities. We show that the increase in the quality gap can be a by-product of the subsidized loan policies. These results show that, when analyzed in a general equilibrium setting, subsidized loan policies can have regressive effects on the income distribution.
We study wealth disparities in the formation of anthropometrics, cognitive skills and socio-emotional skills, using a sample of preschool and early school children in Chile. We extend the previous literature by using longitudinal data, which allow us to study the dynamics of child growth and skills formation, and by including information on mothers' and fathers' schooling attainment and mothers' cognitive ability. We find that there are no significant anthropometric differences favoring the better-off at birth (and indeed length differences at birth to the disadvantage of the better-off), but during the first 30 months of life wealth disparities in height-for-age z scores (HAZ) favoring the better-off emerge. Moreover, we find that wealth disparities in cognitive skills favoring the better-off emerge early in life and continue after children turn 6 years of age. We find no concurrent wealth disparities for socio-emotional skills. Thus, even though the wealth disparities in birth outcomes if anything favor the poor, significant disparities favoring the rich emerge in the early post-natal period. Mother's education and cognitive ability are also significantly associated with disparities in skill formation.
Are nominal prices sticky because menu costs prevent sellers from continuously adjusting their prices to keep up with inflation or because search frictions make sellers indifferent to any real price over some non-degenerate interval? The paper answers the question by developing and calibrating a model in which both search frictions and menu costs may generate price stickiness and sellers are subject to idiosyncratic shocks. The equilibrium of the calibrated model is such that sellers follow a (Q,S,s) pricing rule: each seller lets inflation erode the effective real value of the nominal prices until it reaches some point s and then pays the menu cost and sets a new nominal price with an effective real value drawn from a distribution with support [S,Q], with s < S < Q. Idiosyncratic shocks short-circuit the repricing cycle and may lead to negative price changes. The calibrated model reproduces closely the properties of the empirical price and price-change distributions. The calibrated model implies that search frictions are the main source of nominal price stickiness.
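A simulation sketch of the (Q,S,s) repricing cycle just described (parameter values and the uniform reset draw are invented; the calibrated model's reset distribution and the idiosyncratic shocks that short-circuit the cycle are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
pi = 0.02                      # per-period inflation (assumed)
s_, S, Q = 0.80, 0.90, 1.10    # trigger point and reset support (assumed)
p = rng.uniform(S, Q)          # initial effective real price
path = []
for t in range(50):
    p /= 1 + pi                # inflation erodes the real value of the nominal price
    if p <= s_:                # trigger reached: pay the menu cost and reset
        p = rng.uniform(S, Q)  # new real price drawn from [S, Q]
    path.append(p)
print(np.round(path[:12], 3))  # sawtooth: gradual erosion, occasional resets
```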
Despite the clear success of forecast combination in many economic environments, several important issues remain incompletely resolved. The issues relate to selection of the set of forecasts to combine, and whether some form of additional regularization (e.g., shrinkage) is desirable. Against this background, and also considering the frequently-found superiority of simple-average combinations, we propose LASSO-based procedures that select and shrink toward equal combining weights. We then provide an empirical assessment of the performance of our "egalitarian LASSO" procedures. The results indicate that simple averages are highly competitive, and that although out-of-sample RMSE improvements on simple averages are possible in principle using our methods, they are hard to achieve in real time, due to the intrinsic difficulty of small-sample real-time cross validation of the LASSO tuning parameter. We therefore propose alternative direct combination procedures, most notably "best average" combination, motivated by the structure of egalitarian LASSO and the lessons learned, which do not require choice of a tuning parameter yet outperform simple averages.
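Shrinkage toward equal weights can be implemented with an off-the-shelf LASSO by reparameterizing around the simple average (a minimal sketch of the idea; the paper's exact objective and, as noted above, the real-time tuning of the penalty are the harder parts). Writing the combining weights as w = 1/N + u, penalizing the L1 norm of u pulls each weight toward 1/N rather than toward zero:

```python
import numpy as np
from sklearn.linear_model import Lasso

def egalitarian_lasso(X, y, lam):
    """LASSO that shrinks combining weights toward 1/N instead of 0.

    X: (T, N) matrix of individual forecasts; y: (T,) realizations.
    """
    T, N = X.shape
    y_tilde = y - X.mean(axis=1)  # residual left over by the simple average
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y_tilde)
    return fit.coef_ + 1.0 / N    # map u back to w = 1/N + u

# Toy check: when the simple average is already optimal, weights stay at 1/N.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X.mean(axis=1) + 0.1 * rng.normal(size=100)
print(np.round(egalitarian_lasso(X, y, lam=0.1), 3))  # ~[0.25 0.25 0.25 0.25]
```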
We argue that political distribution risk is an important driver of aggregate fluctuations. To that end, we document significant changes in the capital share after large political events, such as political realignments, modifications in collective bargaining rules, or the end of dictatorships, in a sample of developed and emerging economies. These policy changes are associated with significant fluctuations in output and asset prices. Using a Bayesian proxy-VAR estimated with U.S. data, we show how distribution shocks cause movements in output, unemployment, and sectoral asset prices. To quantify the importance of these political shocks for the U.S. as a whole, we extend an otherwise standard neoclassical growth model. We model political shocks as exogenous changes in the bargaining power of workers in a labor market with search and matching. We calibrate the model to the U.S. corporate non-financial business sector and back out the evolution of the bargaining power of workers over time using a new methodological approach, the partial filter. We show how the estimated shocks agree with the historical narrative evidence. We document that bargaining shocks account for 34% of aggregate fluctuations.
Sovereign bonds are highly divisible, usually of uncertain quality, and auctioned in large lots to a large number of investors. This leads us to assume that no individual bidder can affect the bond price, and to develop a tractable Walrasian theory of Treasury auctions in which investors are asymmetrically informed about the quality of the bond. We characterize the price of the bond for different degrees of asymmetric information, both under discriminatory-price (DP) and uniform-price (UP) protocols. We endogenize information acquisition and show that DP protocols are likely to induce multiple equilibria, one of which features asymmetric information, while UP protocols are unlikely to sustain equilibria with asymmetric information. This result has welfare implications: asymmetric information negatively affects the level, dispersion and volatility of sovereign bond prices, particularly in DP protocols.
This paper argues that institutions and political party systems are simultaneously determined. A large change to the institutional framework, such as the creation of the euro by a group of European countries, will, after a transition period, realign the party system as well. The new political landscape may not be compatible with the institutions that triggered it. To illustrate this point, we study the case of the euro and how the party system has evolved in Southern and Northern European countries in response to it.
We study stochastic choice as the outcome of deliberate randomization. After first deriving a general representation of a stochastic choice function with this property, we characterize a model in which the agent has preferences over lotteries belonging to the Cautious Expected Utility class (Cerreia-Vioglio et al., 2015) and the stochastic choice is the optimal mix among available options. This model links stochasticity of choice with the phenomenon of Certainty Bias, with both behaviors stemming from the same source: multiple utilities and caution. We show that this model is behaviorally distinct from models of Random Utility, as it typically violates Regularity, a property shared by all of them.
We study theoretically and empirically how consumers in an individual private long-term health insurance market with front-loaded contracts respond to newly mandated portability requirements for their old-age provisions. To foster competition, effective 2009, the German legislature made the portability of standardized old-age provisions mandatory. Our theoretical model predicts that the portability reform will increase internal plan switching. However, under plausible assumptions, it will not increase external insurer switching. Moreover, the portability reform will enable unhealthier enrollees to reoptimize their plans. We find confirmatory evidence for these theoretical predictions using claims panel data from a large private insurer.
This paper considers infinite-horizon stochastic games with hidden states and hidden actions. The state changes over time, players observe only a noisy public signal about the state each period, and actions are private information. In this model, uncertainty about the monitoring structure does not disappear. We show how to construct an approximately efficient equilibrium in a repeated Cournot game. Then we extend it to a general case and obtain the folk theorem using ex-post equilibria under a mild condition.
Since the chance of swaying the outcome of an election by voting is usually very small, it cannot be that voters vote solely for that purpose. So why do we vote? One explanation is that smarter or more educated voters have access to better information about the candidates, and are concerned with appearing better informed through their choice of whether or not to vote. If voting behavior is publicly observed, then more educated voters may vote to signal their education, even if the election itself is inconsequential and the cost of voting is the same across voters. I explore this explanation with a model of voting where players are unsure about the importance of swaying the election and high-type players receive more precise signals. I introduce a new information ordering, a weakening of Blackwell's order, to formalize the notion of information precision. Once voting has occurred, players visit a labor market and are paid the expected value of their type, conditioning only on their voting behavior. I find that in very large games, voter turnout and the signaling return to voting remain high even though the chance of swaying the election disappears and the cost of voting is the same for all types. I explore generalizations of this model, and close by comparing the stylized features of voter turnout to the features of the model.
A fad is something that is popular for a time, then unpopular. For example, in the 1960s tailfins on cars were popular, in the 1970s they were not. I study a model in which fads are driven through the channel of imperfect information. Some players have better information about past actions of other players, and all players have preferences for choosing the same actions as well-informed players. In equilibrium, better informed (high-type) players initially pool on a single action choice. Over time, the low-type players learn which action the high-type players are pooling on, and start to mimic them. Once a tipping point is reached, the high-type players switch to a different action, and the process repeats. I explicitly compute equilibria for a specific parameterization of the model. Low-type players display instrumental preferences for conformity, choosing actions which appear more popular, while high-type players sometimes coordinate on actions which appear unpopular. Improving the quality of information to low-type players does not improve their payoffs, but increases the rate at which high-type players switch between actions.
A safe asset’s real value is insulated from shocks, including declines in GDP from rare macroeconomic disasters. However, in a Lucas-tree world, the aggregate risk is given by the process for GDP and cannot be altered by the creation of safe assets. Therefore, in the equilibrium of a representative-agent version of this economy, the quantity of safe assets will be nil. With heterogeneity in coefficients of relative risk aversion, safe assets can take the form of private bond issues from low-risk-aversion to high-risk-aversion agents. The model assumes Epstein-Zin/Weil preferences with common values of the intertemporal elasticity of substitution and the rate of time preference. The model achieves stationarity by allowing for random shifts in coefficients of relative risk aversion. We derive the equilibrium values of the ratio of safe to total assets, the shares of each agent in equity ownership and wealth, and the rates of return on safe and risky assets. In a baseline case, the steady-state risk-free rate is 1.0% per year, the unlevered equity premium is 4.2%, and the quantity of safe assets ranges up to 15% of economy-wide assets (comprising the capitalized value of GDP). A disaster shock leads to an extended period in which the share of wealth held by the low-risk-averse agent and the risk-free rate are low but rising, and the ratio of safe to total assets is high but falling. In the baseline model, Ricardian Equivalence holds in that added government bonds have no effect on rates of return and the net quantity of safe assets. Surprisingly, the crowding-out coefficient for private bonds with respect to public bonds is not 0 or -1 but around -0.5, a value found in some existing empirical studies.
We explore model misspecification in an observational learning framework. Individuals learn from private and public signals and the actions of others. An agent's type specifies her model of the world. Misspecified types have incorrect beliefs about the signal distribution, how other agents draw inference and/or others' payoffs. We establish that the correctly specified model is robust in that agents with approximately correct models almost surely learn the true state asymptotically. We develop a simple criterion to identify the asymptotic learning outcomes that arise when misspecification is more severe. Depending on the nature of the misspecification, learning may be correct, incorrect or beliefs may not converge. Different types may asymptotically disagree, despite observing the same sequence of information. This framework captures behavioral biases such as confirmation bias, false consensus effect, partisan bias and correlation neglect, as well as models of inference such as level-k and cognitive hierarchy.
This paper constructs individual-specific density forecasts for a panel of firms or households using a dynamic linear model with common and heterogeneous coefficients and cross-sectional heteroskedasticity. The panel considered in this paper features a large cross-sectional dimension (N) but short time series (T). Due to the short T, traditional methods have difficulty in disentangling the heterogeneous parameters from the shocks, which contaminates the estimates of the heterogeneous parameters. To tackle this problem, I assume that there is an underlying distribution of heterogeneous parameters, model this distribution nonparametrically, allowing for correlation between heterogeneous parameters and initial conditions as well as individual-specific regressors, and then estimate this distribution by pooling the information from the whole cross-section. I develop a simulation-based posterior sampling algorithm specifically addressing the nonparametric density estimation of unobserved heterogeneous parameters. I prove that both the estimated common parameters and the estimated distribution of the heterogeneous parameters achieve posterior consistency, and that the density forecasts asymptotically converge to the oracle forecast, an (infeasible) benchmark defined as the individual-specific posterior predictive distribution under the assumption that the common parameters and the distribution of the heterogeneous parameters are known. Monte Carlo simulations demonstrate improvements in density forecasts relative to alternative approaches. An application to young firm dynamics also shows that the proposed predictor provides more accurate density predictions.
We analyze how the life settlement market - the secondary market for life insurance - may affect consumer welfare in a dynamic equilibrium model of life insurance with one-sided commitment and overconfident policyholders. As in Daily et al. (2008) and Fang and Kung (2010), policyholders may lapse their life insurance policies when they lose their bequest motives; but in our model the policyholders may underestimate their probability of losing their bequest motive, or be overconfident about their future mortality risks. For the case of overconfidence with respect to bequest motives, we show that in the absence of life settlement, overconfident consumers may buy "too much" reclassification risk insurance for later periods in the competitive equilibrium. In contrast, when consumers are overconfident about their future mortality rates in the sense that they put too high a subjective probability on the low-mortality state, the competitive equilibrium contract in the absence of life settlement exploits the consumer bias by offering them very high face amounts only in the low-mortality state. In both cases, the life settlement market can impose a discipline on the extent to which overconfident consumers can be exploited by the primary insurers. We show that life settlement may increase the equilibrium consumer welfare of overconfident consumers when they are sufficiently vulnerable in the sense that they have a sufficiently large intertemporal elasticity of substitution of consumption.
Received auction theory prescribes that a reserve price which maximizes expected profit should be no less than the seller's own value for the auctioned object. In contrast, a common empirical observation is that many auctions have reserve prices set below the seller's value, or even at zero. This paper revisits the theory to find a potential resolution of the puzzle for second-price auctions. The main result is that an optimal reserve price may be less than the seller's value if bidders are risk averse and have interdependent values. Moreover, the resulting outcome may be arbitrarily close to that of an auction that has no reserve price, an absolute auction.
We use variance decompositions from high-dimensional vector autoregressions to characterize connectedness in 19 key commodity return volatilities, 2011-2016. We study both static (full-sample) and dynamic (rolling-sample) connectedness. We summarize and visualize the results using tools from network analysis. The results reveal clear clustering of commodities into groups that match traditional industry groupings, but with some notable differences. The energy sector is most important in terms of sending shocks to others, and energy, industrial metals, and precious metals are themselves tightly connected.
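A sketch of the connectedness calculation (the paper uses generalized variance decompositions on realized volatilities; here, orthogonalized FEVD from statsmodels on placeholder data stands in for the real pipeline):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(500, 3)),
                    columns=["energy", "metals", "grains"])  # placeholder vols

res = VAR(data).fit(2)
D = res.fevd(10).decomp[:, -1, :]  # horizon-10 decomposition, (n x n)
table = pd.DataFrame(D, index=data.columns, columns=data.columns)
print(table.round(2))              # row i: variance shares of i due to each shock
# Off-diagonal mass measures spillovers; its average is total connectedness.
print("total connectedness (%):",
      round(100 * (D.sum() - np.trace(D)) / D.shape[0], 1))
```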
Quantitative analysis of a New Keynesian model with the Bernanke-Gertler accelerator and risk shocks shows that violations of Tinbergen’s Rule and strategic interaction between policy-making authorities significantly undermine the effectiveness of monetary and financial policies. Separate monetary and financial policy rules, with the latter subsidizing lenders to encourage lending when credit spreads rise, produce higher welfare and smoother business cycles than a monetary rule augmented with credit spreads. The latter yields a tight money-tight credit regime in which the interest rate responds too much to inflation and not enough to adverse credit conditions. Reaction curves for the choice of policy-rule elasticity that minimizes each authority’s loss function given the other authority’s elasticity are nonlinear, reflecting shifts from strategic substitutes to complements in setting policy-rule parameters. The Nash equilibrium is significantly inferior to the Cooperative equilibrium, both are inferior to a first-best outcome that maximizes welfare, and both produce tight money-tight credit regimes.
A principal wishes to distribute an indivisible good to a population of budget-constrained agents. Both valuation and budget are an agent’s private information. The principal can inspect an agent’s budget through a costly verification process and punish an agent who makes a false statement. I characterize the direct surplus-maximizing mechanism. This direct mechanism can be implemented by a two-stage mechanism in which agents only report their budgets. Specifically, all agents report their budgets in the first stage. The principal then provides budget-dependent cash subsidies to agents and assigns the goods randomly (with uniform probability) at budget-dependent prices. In the second stage, a resale market opens, but is regulated with budget-dependent sales taxes. Agents who report low budgets receive more subsidies in their initial purchases (the first stage), face higher taxes in the resale market (the second stage), and are inspected randomly. This implementation exhibits some of the features of welfare programs such as Singapore’s Housing and Development Board.
Suppose that an analyst observes inconsistent choices from a decision maker. Can the analyst determine whether this inconsistency arises from choice error (imperfect maximization of a single preference) or from preference heterogeneity (deliberate maximization of multiple preferences)? I model choice data as generated from context-dependent preferences, where contexts vary across observations, and the decision maker errs with small probability in each observation. I show that (a) simultaneously minimizing the number of inferred preferences and the number of unexplained observations can exactly recover the correct number of preferences with high probability; (b) simultaneously minimizing the richness of the set of preferences and the number of unexplained observations can exactly recover the choice implications of the decision maker's true preferences with high probability. These results illustrate that selection of simple models, appropriately defined, is a useful approach for recovery of stable features of preference.
This paper proposes a foundation for heterogeneous beliefs in games, in which disagreement arises not because players observe different information, but because they learn from common information in different ways. Players may be misspecified, and may moreover be misspecified about how others learn. The key assumption is that players nevertheless have some common understanding of how to interpret the data; formally, players have common certainty in the predictions of a class of learning rules. The common prior assumption is nested as the special case in which this class is a singleton. The main results characterize which rationalizable actions and Nash equilibria can be predicted when agents observe a finite quantity of data, and how much data is needed to predict different solutions. This number of observations depends on the degree of strictness of the solution and the "complexity" of inference from data.
What do we know about the economic consequences of labor market regulations? Few economic policy questions are as contentious as labor market regulations. The effects of minimum wages, collective bargaining provisions, and hiring/firing restrictions generate heated debates in the U.S. and other advanced economies. And yet, establishing empirical lessons about the consequences of these regulations is surprisingly difficult. In this paper, I explain some of the reasons why this is the case, and I critically review the recent findings regarding the effects of minimum wages on employment. Contrary to often asserted statements, the preponderance of the evidence still points toward a negative impact of permanently high minimum wages.
This paper develops a theory of asset intermediation as a pure rent extraction activity. Agents meet bilaterally in a random fashion. Agents differ with respect to their valuation of the asset's dividends and with respect to their ability to commit to take-it-or-leave-it offers. In equilibrium, agents with commitment behave as intermediaries, while agents without commitment behave as end users. Agents with commitment intermediate the asset market only because they can extract more of the gains from trade when reselling or repurchasing the asset. We study the extent of intermediation as a rent extraction activity by examining the agent's decision to invest in a technology that gives them commitment. We find that multiple equilibria may emerge, with different levels of intermediation and with lower welfare in equilibria with more intermediation. We find that a decline in trading frictions leads to more intermediation and typically lower welfare, and so does a decline in the opportunity cost of acquiring commitment. A transaction tax can restore efficiency.