Flashing the Hazard Lights: Interrogating Discourses of Disruptive Algorithmic Technologies in LIS Education

ORCID

Tyler Youngman: 0000-0003-4665-8337

Sarah Appedu: 0000-0002-5405-7016

Zhasmina Tacheva: 0000-0003-3859-5823

Beth Patin: 0000-0003-0498-4150

Document Type

Conference Document

Date

10-3-2023

Keywords

critical data studies, social justice informatics, critical librarianship, library and information history, chatgpt

Disciplines

Library and Information Science

Description/Abstract

The increasing relevance of service algorithms and emerging technologies has landed many professions at a ‘disruptive’ crossroads. With the popular emergence of ChatGPT, a large language model from OpenAI designed to interact with users through conversation, discourses surrounding its ubiquity, potentiality, and adoption have captivated audiences. We argue that the unpredictable nature and changing capabilities of ChatGPT and other algorithmic technologies represent another critical juncture in the advancement of LIS education. When given a library-oriented prompt, ChatGPT manifested biases of the kind we normally interrogate as part of ethical and professional conduct in the delivery of library services, further demonstrating the risk that algorithmic technologies reproduce and amplify marginalization and replicate harm. Hence, we ‘flash the hazard lights’, so to speak, and urge a more critical analysis and precautionary consideration of the social, technological, and cultural harms enabled or perpetuated by the uncritical adoption of ChatGPT and other algorithmic technologies.
