When and How to Reward Bad News (with Aditya Kuvalekar)
We examine when and how to reward the bearer of bad news in a dynamic principal-agent relationship involving experimentation. The principal bears the cost of experimentation, while the agent earns rents while experimenting. The agent divides his effort between searching for conclusive good news and conclusive bad news about project quality; conclusive good (bad) news establishes that project quality is high (low). At the outset, the principal commits to rewards conditional on the type of news. At each instant, the principal observes the agent's effort allocation and any realized news and decides whether to fire the agent. We show that the principal-optimal Markov Perfect Equilibrium features a stark reward structure: either the principal does not reward the bearer of bad news at all, or she rewards the bearer of either type of news equally. When the cost of experimentation is high and the technology for finding bad news is very informative, the principal rewards the bearer of bad news. When the technology for finding good news is very informative, the principal does not reward the bearer of bad news. Our results are consistent with the growing push to reward the discovery of bugs through "Bug Bounty Programs" in the technology sector.
Supervising to Motivate
I study a dynamic principal-agent relationship in which the principal invests costly resources in a project of uncertain quality to induce costly effort from an agent. The principal privately observes output from the project and is either informed (has learned that project quality is high) or uninformed. The agent learns about project quality through the investments the principal makes. The principal wants to invest less when pessimistic about project quality; the agent, however, demands higher investment when pessimistic in order to exert effort. The principal thus faces a trade-off between investing optimally and transmitting information about project quality to the agent. The principal's optimal equilibrium features full information transmission when the uninformed principal's belief (the probability that project quality is high) is high and no information transmission when that belief is low. The informed principal may invest at suboptimally high levels early in the relationship, but optimality is eventually restored. That is, the principal's optimal equilibrium may exhibit distortions in the short run but not in the long run.
Repeated Information Elicitation with Observed Payoffs (draft coming soon)
I study a repeated cheap-talk environment in which a principal with state-dependent preferences chooses an action in every period based on the recommendation of an agent who is better informed (though not perfectly) about the state. Both the action and state spaces are binary, and the state is drawn i.i.d. in every period. The agent has state-independent preferences and prefers one action over the other. At the end of every period, the realized state is revealed to both parties. I show that, under full commitment, the principal's optimal mechanism may punish the agent for recommending his preferred action even when the recommendation turns out to be correct. If the agent recommends his non-preferred action, however, the principal rewards him regardless of the realized state.
Instructor, University of Pennsylvania:
Introductory Macroeconomics (Summer 2014)
Teaching Assistant, University of Pennsylvania:
Public Economics (Graduate)
Teaching assistant for Prof. Andrew Postlewaite (Fall 2014–2015)
Teaching assistant for Prof. Rebecca Stein (Fall 2013–2017)
Game Theory (Honors)
Teaching assistant for Prof. Andrew Postlewaite (Fall 2014–2017)
Managerial Economics (Wharton)
Teaching assistant for Prof. Jose Miguel Abito (Spring 2018)
Microeconomic Theory, Information Economics, Dynamic Games and Contracts
George J. Mailath