Synthetic Authenticity: Performativity and Enregisterment in AI-Generated Language from Bot Conversations

Abstract

This paper analyzes an AI-generated conversation created with Google's GEMMA language model. Notebook LM, an advanced generative AI tool, synthesizes ideas and generates dialogue from multiple source texts. The conversation was crafted as a synthetic interaction between virtual entities, designed to represent key arguments and insights from three academic articles spanning distinct fields: computer science, philosophy, and pharmacology. Each article was selected for its unique disciplinary perspective, stylistic character, and thematic relevance to the integration of Artificial Intelligence across diverse areas of study. The first article, Schneider (2021), addresses AI's role in language technologies, particularly its impact on language diversity and linguistic hierarchies. The second, Coffin (2021), delves into the philosophical notion of the "machinic unconscious," exploring how environmental and technological systems shape subconscious influences. Finally, the seminal paper by Lowry et al. (1951) was chosen for its historical significance and its status as the most cited paper in the Web of Science Index, with 305,148 citations. This classic paper introduced the Lowry protein assay, a foundational method in biochemical research, illustrating both the longevity and the influence of scientific methodologies. Together, these articles provide an interdisciplinary lens through which to examine AI's impact on language, thought, and scientific practice. Guided by the theoretical frameworks of Agha (2007) and Silverstein (1976, 2023), this paper investigates how AI bots enact socially recognizable identities and adapt linguistic strategies to perform specific roles. For further theoretical grounding, Wolfram's (2023) account of the mechanics of neural networks in generative AI offers a technical understanding of how language models structure and replicate human communication.
Agha's concept of enregisterment and Silverstein's notion of contextual shifters frame the analysis of how the bots simulate conversational practices aligned with a recognizable human ritual: "The Podcast."