Mustafa Dogan
Interests: Microeconomic Theory, Contract Theory, Mechanism Design, Industrial Organization.

Job Market Paper

Dynamic Incentives for Self-Monitoring

This paper studies a dynamic information acquisition problem within a regulation framework. Each period, the agent (he) would like to undertake a new project, which may cause social harm. He can acquire costly information about the project's type through self-monitoring, but the effort spent on self-monitoring is observed only by him. Each period, the regulator (she) decides whether to ask the agent to self-monitor, followed by the choice of project approval. There are no monetary transfers. Instead, the regulator uses future regulatory behavior for incentive provision. When the regulator has full commitment power, she can induce costly self-monitoring and revelation of "bad news" in the initial phase of the optimal policy. During this phase, the agent is promised a higher continuation utility (in the form of future regulatory approval) each time he discloses bad news. Otherwise, he is downgraded to a lower continuation utility in order to incentivize information acquisition. If the regulator internalizes self-monitoring costs, the agent is either blacklisted or whitelisted in the long run. When she does not internalize these costs, blacklisting is replaced by a temporary probation state, and whitelisting becomes the unique long-run outcome. This result suggests that whitelisting, which may appear to be a form of regulatory capture, may instead be a consequence of optimal policy. When the regulator has limited commitment power, in the sense that she cannot commit to a policy with a negative continuation value, the results change markedly. If the expected social harm of a project exceeds its economic benefits, whitelisting disappears. In this case, if the regulator does not internalize the self-monitoring costs, the policy never reaches a stable outcome and fluctuates over time.

Research

Product Upgrades and Posted Prices

This paper considers the dynamic pricing problem of a durable good monopolist with commitment power when a new version of the good is expected at some point in the future. The new version is superior to the existing one, bringing a higher flow utility. When the arrival follows a stationary stochastic process, the corresponding optimal price path is shown to be constant for both versions of the good; hence there is no delay in purchases and time is not used to discriminate among buyers, which is in line with the literature. However, if the new version arrives at a commonly known deterministic date, then the price path may decrease over time, resulting in delayed purchases. For both arrival processes, posted prices are a suboptimal selling mechanism. The optimal mechanism involves bundling the two versions of the good and selling them only together, which can easily be implemented by selling the initial version with a replacement guarantee.

Man vs. the Machine: How Automation Can Reduce Team Productivity

(with Pinar Yildirim)
In this study we investigate rational reasons why, in human-machine teams, a worker's productivity can be lower than it is in all-human teams. We attribute this observation to the distortion of incentives required to motivate the remaining human workers. We develop a model based on Che and Yoo (2001) and study two possible incentive contracts: contracts that reward employees when their peers also exert effort (joint performance evaluation, or JPE) and contracts that reward employees when they perform better than their peers (relative performance evaluation, or RPE). These two regimes capture the wage contracts observed in practice, which are built on either collaboration or competition among team members. We show that replacing humans with a machine results in three major shifts in the workplace. First, one or more units of stochastic human effort are replaced with machine effort, which is either constant or less stochastic than human effort. Second, machines are generally adopted for their cost advantages, implying that the cost of completing a task is typically lower with machines than with humans. Third, a machine cannot retaliate against a peer who free-rides on its effort, and it is this property that drives the lower productivity of human-machine teams. For these reasons, in some settings firms may be less inclined to form mixed human-machine teams.

Divide and Rule

(Work In Progress)
I consider a strategic information transmission framework in which an informed third-party expert provides reports to two decision makers who want to coordinate their decisions and also to adapt to the underlying state variable. The expert is perfectly informed about the current state but has preferences over the composition of the decisions that differ from those of the decision makers. The expert can exploit the lack of communication between the decision makers and send them private messages about the state. This creates a situation in which the decision makers must incorporate their higher-order beliefs into their decisions. Effectively, the expert designs a global game between the decision makers by constructing their private messages appropriately, tilting their decisions toward her preferred composition. I show that when the expert is biased toward the status quo, she can induce her preferred composition for infinitely many different bias levels.

Strategic Ignorance

(Work In Progress)
This paper provides an explanation for strategic ignorance. I consider a Bayesian persuasion framework involving a decision maker and an expert. There is underlying uncertainty about a two-dimensional state space affecting the optimal decision of the decision maker. First, the decision maker chooses a signal structure about the first state variable; then, upon observing this choice, the expert chooses a signal structure regarding the second state variable. Finally, the decision maker observes the signal realizations and makes her decision. I show that the decision maker leaves herself ignorant by choosing a signal structure that reveals imperfect information about the first state variable despite being able to learn it perfectly. By doing so, she incentivizes the expert to choose a more informative signal, which compensates for her strategic ignorance. In other words, she persuades the persuader to reveal better information.

Teaching Experience

Instructor
Introductory Economics

Teaching Assistant
Microeconomic Theory I (Wharton Graduate)
Game Theory
Intermediate Microeconomics
Intermediate Macroeconomics
Business Economics and Public Policy

References

George J. Mailath
gmailath@econ.upenn.edu
215-898-7908

Steven Matthews
stevenma@econ.upenn.edu
215-898-7749

Mallesh M. Pai
mallesh.pai@rice.edu
773-273-9684

Status

I am on the job market and will be available for interviews during the AEA meetings in Chicago from 1/6 to 1/8.