2023 Conference on International Cyber Security | 7-8 November 2023

Panel 2 | AI: Safeguarding or Usurping Democracy?

Anna George

Anna George is a doctoral student in Social Data Science at the University of Oxford. She uses computational approaches to study online political behavior, focusing on how messages diffuse through alternative communities such as hate groups and political extremists. Her work has been published in scientific journals and featured in media outlets such as the BBC, CNN, and Reuters. Before joining the doctoral program at Oxford, Anna graduated with distinction from Oxford’s MSc in Social Data Science. She also holds an M.S. in Industrial/Organizational-Social Psychology and a B.S. in Psychology, with a second major in Sociology and a minor in Statistics.

University webpage

Twitter/X: @annaraegeorge

Abstract

Keynote | Detecting Traits of Conspiratorial Thinking for Classification of Novel Topics

The transmission of conspiracy theories poses a significant concern given their potential to undermine public trust and deepen societal divisions. These narratives, often sensationalist and unsupported by credible evidence, can spread misinformation and influence public opinion on critical issues. Timely detection of conspiracy theories is therefore essential for governments and policymakers: quick identification allows proactive measures to prevent the spread of such theories, preserving public trust and providing accurate information to constituents. Currently, however, most conspiracy detection models demand extensive manual labeling of data for each unique conspiracy theory. This process often spans several months and results in a model whose scope is limited to the conspiracy theories present in its training data. Since new conspiratorial narratives can surface overnight, policymakers and governments cannot afford such delays.

To address this challenge, our research takes an innovative approach to conspiratorial narrative detection, combining AI, specifically Large Language Models (LLMs), with social science theory to train models that detect traits of conspiratorial thinking using less training data than traditional classification models require, enabling faster deployment. Moreover, because the model is trained to focus on traits of conspiratorial thinking rather than domain-specific knowledge, it is able to generalize to new, unseen topics. Specifically, we use the SetFit framework, which applies few-shot learning and contrastive techniques, to train our model on a limited number of examples; the model was then tested on an unseen topic to evaluate its generalizability.

Our findings indicate that the model achieves high performance even when classifying topics not included in the training data. This demonstrates the model's potential to identify conspiratorial thinking in emerging topics, providing policymakers, social media platforms, and researchers with an effective tool for detecting emerging conspiratorial narratives.
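The abstract describes the approach in prose only. As a rough illustration of the kind of few-shot workflow it names, the sketch below uses the open-source setfit library (API as of v1.x). The base checkpoint, toy texts, and binary labels are invented for illustration and are not the authors' actual data, labels, or configuration.

    # A minimal sketch, assuming the setfit library (v1.x API); the
    # checkpoint, example texts, and labels below are hypothetical.
    from datasets import Dataset
    from setfit import SetFitModel, Trainer, TrainingArguments

    # Toy few-shot data: 1 = exhibits traits of conspiratorial thinking,
    # 0 = does not.
    train_dataset = Dataset.from_dict({
        "text": [
            "They are hiding the truth from all of us, wake up.",
            "Nothing here is a coincidence; everything is connected.",
            "The city council published its budget report yesterday.",
            "Researchers released a new dataset on air quality.",
        ],
        "label": [1, 1, 0, 0],
    })

    # Evaluation on a topic absent from training, to probe generalization.
    eval_dataset = Dataset.from_dict({
        "text": [
            "Insiders secretly control the outcome; the official story is a cover.",
            "The election results were certified after a routine audit.",
        ],
        "label": [1, 0],
    })

    # SetFit fine-tunes a pretrained sentence-transformer contrastively on
    # pairs built from the few labeled examples, then fits a lightweight
    # classification head on the resulting embeddings.
    model = SetFitModel.from_pretrained(
        "sentence-transformers/paraphrase-mpnet-base-v2"
    )
    args = TrainingArguments(batch_size=16, num_epochs=1)

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )
    trainer.train()
    print(trainer.evaluate())  # default metric: accuracy
    print(model.predict(["Nobody is reporting the real story behind this."]))

Because the classifier is fit on sentence embeddings tuned to trait-level cues rather than topic vocabulary, a model trained this way can in principle score new narratives without relabeling, which is the generalization property the abstract highlights.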