• Human Oversight and Aversion to AI Redistributive Decisions

    Damiano Paoli

    Abstract

    In many situations, people make decisions that affect others, and these other-regarding choices are shaped by their fairness preferences. At the same time, artificial intelligence (AI) is increasingly integrated into high-stakes decision-making, making human oversight crucial and normatively required.

    This paper investigates whether individuals are willing to accept an other-regarding decision made by someone else, and whether they are more or less willing to accept a decision made by an AI system rather than a human. It examines whether redistributive choices are revised differently depending on whether they were made by a human or by AI, and disentangles two behavioral mechanisms: a black-box effect stemming from uncertainty about the AI’s decision-making process, and intrinsic AI aversion reflecting a fundamental reluctance to rely on algorithmic judgment.

    The study combines a stylized theoretical framework of redistributive choice revision under incomplete information with an online experiment. The findings offer policy-relevant insights for oversight strategies and regulatory frameworks—such as the EU AI Act—by identifying behavioral barriers to effective AI integration.

  • Coordination and Leadership: The Impact of Artificial Intelligence

    Maria Bigoni, Damiano Paoli

    Status: submitted

    Abstract

    Coordination failure is an obstacle to collective efficiency in many economic and social settings. Leadership can mitigate this problem by providing a focal point through public communication, but the effectiveness of such communication relies on shared trust that the leader’s recommendations will be followed by others.

    As generative artificial intelligence (AI) becomes increasingly accessible, leaders may delegate communication tasks to automated systems. Whether such delegation weakens the coordinating role of leadership remains an open question. We investigate this issue through an online experiment based on the minimum effort game with leadership, where leaders could send either a human-written or a ChatGPT-generated message and followers were informed of the message’s source.

    Results show that coordination outcomes do not differ across treatments: AI- and human-generated messages are equally effective in promoting coordination on the Pareto-dominant equilibrium. Beliefs about other players’ behavior are also unaffected by the source of the message, suggesting that delegation to AI does not undermine strategic communication or followers’ trust in successful coordination.