Master of Arts (MA)
Artificial Intelligence, Digital Warfare, Human Computer Interaction, Medium Theory, Predictive Analytics, Propaganda
This research study seeks to understand how AI-based chatbots could be leveraged as tools in a psychological operation (PSYOP). The study is methodologically driven, employing validated scales of suggestibility and human-computer interaction to assess how participants interact with a specific AI chatbot, Replika. Recent studies demonstrate the capability of GPT-based analytics to influence users' moral judgments, and this paper explores why. Results will help draw conclusions about human interaction with predictive analytics (in this case a free GPT-based chatbot, Replika) to determine whether suggestibility (how easily influenced someone generally is) affects the overall usability of AI chatbots. The project will help assess how much of a concern predictive AI chatbots should be as virtual AI influencers and other bot-based propaganda modalities emerge in the contemporary media environment. Drawing on the CASA paradigm, medium theory, and Boyd's theory of conflict, the study explores how factors that often drive human-computer interaction, such as anthropomorphic autonomy and suspension of disbelief, potentially relate to suggestibility and chatbot usability. Overall, this study specifically asks whether suggestibility can predict the usability of AI chatbots.
Smith, Phoebe Anne, "If I Can't Predict My Future, Why Can AI? Exploring Human Interaction with Predictive Analytics" (2023). Theses - ALL. 719.