When Algorithms Lead & Humans Follow: Understanding the Dynamics of Algorithmic Conformity

28 May 2024, 14:00 
Zoom & Room 206

Dr. Lior Zalmanson


Abstract:

As AI-human collaborations become integral to organizational processes, it is crucial to ensure that “humans-in-the-loop” retain independent judgment, to prevent errors and biases that may arise when relying solely on algorithmic recommendations. This study investigates the extent to which workers indeed exercise such judgment when collaborating with AI. We explore whether workers executing simple tasks, which they are capable of performing well unassisted, tend to override their own judgment and adhere to obviously erroneous algorithmic recommendations, a phenomenon we refer to as algorithmic conformity. This behavior is distinct from “justifiable” adherence to algorithmic recommendations (even if erroneous) in complex tasks, in which people may reasonably assume that the algorithm’s capabilities exceed their own. In a series of experiments simulating a realistic gig-work setting (n = 1,449), we show that workers engaged in simple image-classification tasks frequently adhere to obviously incorrect AI advice, and do so to a greater extent than they adhere to identical advice provided by humans. Our main experiment shows that this tendency is driven by workers’ perceptions that algorithms possess superior capabilities (“authority of competence”). We also show that workers’ algorithmic-conformity tendencies are amplified when they attribute formal authority to algorithms (e.g., potential control over rewards or penalties). Subsequent experiments reveal that algorithmic conformity increases when the AI’s formal authority is made more explicit, and that conformity persists even in a task domain perceived as less well suited to AI (facial sentiment recognition); in the latter case, algorithmic conformity is driven by formal-authority perceptions rather than perceived competence. Finally, we find that individuals become less likely to conform to erroneous algorithmic advice when they perceive the real-life impact of their decisions as high (versus low). Our findings highlight the risks inherent in algorithmic collaboration and the need to ensure that humans-in-the-loop exercise their own judgment, in gig-economy settings and beyond.

Bio:

Dr. Lior Zalmanson is a senior lecturer in the Technology and Information Management Program at the Coller School of Management, Tel Aviv University. His research focuses on areas such as human-AI interaction, user engagement, algorithmic management, and the future of work. He has received awards and grants including an ERC Starting Grant, a Fulbright Foundation Fellowship, a GIF (German-Israeli Foundation) grant, Grant for the Web, and the Dan David Prize. His studies have been published in academic journals such as MIS Quarterly, Academy of Management Journal, and the Journal of Business Ethics, as well as in CSCW and CHI. His research has been covered in media outlets including The Times, The Wall Street Journal, MIT Technology Review, The Independent, and Harvard Business Review. Zalmanson received the Association for Information Systems (AIS) Early Career Award in 2021, and in 2022 was named one of Poets&Quants' Best 40 Under 40 MBA Professors and received the INFORMS ISS Gordon B. Davis Young Scholar Award. Previously, he was an assistant professor at the University of Haifa, a postdoctoral Fulbright fellow at NYU, and a research fellow at the Metropolitan Museum Media Lab. He is also the founder of the Print Screen Festival in Israel and has worked as a digital artist, playwright, and screenwriter, with his VR work featured at the 2021 Tribeca Film Festival.
