Research
Working Papers
-
Human Oversight of AI Redistributive Decisions
Abstract
Artificial intelligence is increasingly integrated into high-stakes decisions—public benefits allocation, hiring, and beyond—making human oversight both practically valuable and normatively required. This paper investigates whether individuals exhibit AI aversion when overseeing redistributive decisions made by an artificial agent, and disentangles two potential mechanisms: the black-box effect, arising from uncertainty about how the AI reaches its decisions, and intrinsic AI aversion, a reluctance to rely on algorithmic judgment.
I develop a theoretical framework for the revision of redistributive choices under incomplete information and agent heterogeneity, and test its predictions in a two-session online experiment. The main finding is that subjects do not exhibit AI aversion: participants do not intervene more when they oversee an AI rather than a human, and their decisions are driven by the expected fairness cost of non-intervention rather than by who made the redistribution.
The results suggest that human oversight of AI-made other-regarding decisions is unlikely to generate excessive scrutiny, though this may also limit its effectiveness when AI decisions are biased or mistaken.
-
Coordination and Leadership: The Impact of Artificial Intelligence
Abstract
Coordination failure is an obstacle to collective efficiency in many economic and social settings. Leadership can mitigate this problem by providing a focal point through public communication, but the effectiveness of such communication relies on shared trust that the leader's recommendations will be followed by others.
As generative artificial intelligence (AI) becomes increasingly accessible, leaders may delegate communication tasks to automated systems. Whether such delegation weakens the coordinating role of leadership remains an open question. We investigate this issue through an online experiment based on the minimum effort game with leadership, where leaders could send either a human-written or a ChatGPT-generated message and followers were informed of the message's source.
Results show that coordination outcomes do not differ across treatments: AI- and human-generated messages are equally effective in promoting coordination on the Pareto-dominant equilibrium. Beliefs about other players' behavior are likewise unaffected by the source of the message, suggesting that delegation to AI undermines neither strategic communication nor followers' confidence that coordination will succeed.
Publications
-
Favouring Tax Compliance Through a Simple Automatic Payment Option: Evidence from a Lab Experiment
Abstract
This study investigates whether reducing compliance costs can improve tax compliance. We focus on the impact of automatic payment systems on tax evasion, with the aim of providing evidence relevant for minor indirect taxes, such as the vehicle tax collected by regional governments in Italy.
We design a laboratory experiment in which subjects play a Tax Evasion Game. Participants receive a fixed endowment and perform a real-effort task under time pressure, during which they can earn additional income on which they are required to pay a tax. While evasion yields a higher expected payoff than compliance, it entails the risk of a substantial fine if detected.
In the treatment condition, we introduce an automatic payment option that allows participants to commit to paying the tax before starting the task. We find that reducing the non-monetary burden of compliance significantly increases tax compliance. Finally, we elicit individual risk preferences and show that they do not account for the treatment effect.