This paper centers on the analysis of an AI-generated conversation created using Google's GEMMA language model. Notebook LM, an advanced generative AI tool, synthesizes ideas and generates dialogue from multiple source texts. The conversation was crafted as a synthetic interaction between virtual entities, designed to represent key arguments and insights from three academic articles spanning distinct fields: computer science, philosophy, and pharmacology. Each article was selected to reflect a unique disciplinary perspective, stylistic differences, and thematic relevance to the integration of Artificial Intelligence in diverse areas of study. The first article, Schneider (2021), addresses AI's role in language technologies, particularly its impact on language diversity and linguistic hierarchies. The second article, Coffin (2021), delves into the philosophical notion of the "machinic unconscious," exploring how environmental and technological systems shape subconscious influences. Finally, the seminal paper by Lowry et al. (1951) was chosen for its historical significance as the most cited paper in the Web of Science Index, with 305,148 citations. This classic paper introduced the Lowry protein assay, a foundational method in biochemical research, illustrating both the longevity and the influence of scientific methodologies. Together, these articles provide an interdisciplinary lens through which to examine AI's impact on language, thought, and scientific processes. Guided by the theoretical frameworks of Agha (2007) and Silverstein (1976, 2023), this paper investigates how AI bots enact socially recognizable identities and adapt linguistic strategies to perform specific roles. For more grounded theoretical support, Wolfram's (2023) insights into the mechanics of neural networks for GenAI offer a technical understanding of how language models structure and replicate human communication. Agha's concept of enregisterment and Silverstein's notion of contextual shifters frame the analysis of how bots simulate conversational practices that align with a recognizably human ritual: "The Podcast".
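To make the methodology concrete, the sketch below shows the general shape of such a multi-source dialogue synthesis pipeline. It is a minimal illustration only: NotebookLM does not expose a public API of this form, so `call_llm`, the prompt wording, and the one-line source digests are all hypothetical stand-ins.

```python
# Minimal sketch of multi-source dialogue synthesis. `call_llm` is a
# hypothetical placeholder for whatever generative backend is used;
# the source digests are one-line paraphrases for illustration.

SOURCES = {
    "Schneider (2021)": "AI's role in language technologies and linguistic hierarchies.",
    "Coffin (2021)": "The 'machinic unconscious' shaped by environmental-technological systems.",
    "Lowry et al. (1951)": "The Lowry protein assay, a foundational biochemical method.",
}

def build_prompt(sources: dict[str, str]) -> str:
    """Assemble a single prompt asking for a two-host 'podcast' dialogue."""
    digest = "\n".join(f"- {ref}: {summary}" for ref, summary in sources.items())
    return (
        "You are two podcast hosts. Produce a dialogue that synthesizes the "
        "key arguments of the following sources, preserving each discipline's "
        "register:\n" + digest
    )

def call_llm(prompt: str) -> str:
    # Hypothetical backend call; substitute a real LLM client here.
    raise NotImplementedError("Plug in your generative model of choice.")

if __name__ == "__main__":
    print(build_prompt(SOURCES))
```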
With the increasing ubiquity of natural language processing (NLP) algorithms, interacting with "conversational artificial agents" such as speaking robots, chatbots, and personal assistants will be an everyday occurrence for most people. In a rather innocuous sense, we can perform a variety of speech acts with them, from asking a question to telling a joke, as they respond to our input just as any other agent would. However, in a stricter, philosophical sense, the question of what we are doing when we interact with these agents is less trivial, as conversational instances are growing in complexity, interactivity, and anthropomorphic believability. For example, open-domain chatbots will soon be capable of holding conversations on a virtually unlimited range of topics. This development raises many philosophical questions that this special issue aims to address. Are we engaging in a "discourse" when we "argue" with a chatbot? Do they perform speech acts akin to human agents, or do we require different explanations to understand this kind of interactivity? In what way do artificial agents "understand" language, partake in discourse, and create text? Are our conversational assumptions and projections transferable, and should we model artificial speakers along those conventions? Will their moral utterances ever have validity? This special issue of Minds and Machines invites papers discussing a range of topics on human-machine communication and the philosophical presumptions and consequences of developing, distributing, and interacting with speaking robots. We invite the submission of papers focusing on, but not restricted to:
- What are philosophically sound distinctions between speaking robots, unembodied chatbots, and other forms of artificial speakers?
- What constitutes discourse participants, and can artificial speakers ever meet those requirements?
- Can artificial speakers perform speech acts, and if so, can they perform all speech acts humans can perform? Or do robots perform unique speech acts?
- What kind of artificial agent can be capable of what kind of language or discourse performance: chatbots, robots, virtual agents, …?
- What is the role of anthropomorphism in modelling chatbots as possible discourse participants?
- What is the role of technomorphism in modelling human interlocutors as technical discourse participants?
- What are the normative consequences of moral statements made by artificial discourse participants?
- How will communicative habits between humans change in the presence of artificial speakers?
- How can semantic theories explain the meaning-creation of artificial speakers?
- Are normative conventions in human-human communication (politeness, compliments) relevant and transformable/transferable to human-machine communication?
- Are there, analogous to human-human communication, any communicative presuppositions in human-machine communication?
To submit a paper for this special issue, authors should go to the journal's Editorial Manager: https://www.editorialmanager.com/mind/default.aspx
Deadline to submit full paper: October 1st, 2020
First round of reviews: October 2nd – December 1st, 2020
Deadline to resubmit paper: December 15th, 2020
Second round of reviews: December 15th – December 31st, 2020
Deadline for final paper: December 31st, 2020
Publication of special edition: March 2021
arXiv preprint arXiv:1111.6843, 2011
Barring swarm robotics, a substantial share of current machine-human and machine-machine learning and interaction mechanisms are being developed and fed by results of agent-based computer simulations, game-theoretic models, or robotic experiments based on a dyadic communication pattern. Yet, in real life, humans no less frequently communicate in groups, and gain knowledge and take decisions based on information cumulatively gleaned from more than one single source. These properties should be taken into consideration in the design of autonomous artificial cognitive systems construed to interact with, and learn from, more than one contact or 'neighbor'. To this end, significant practical import can be gleaned from research applying strict science methodology to humanistic and social phenomena, e.g. to the discovery of realistic creativity potential spans, or the 'exposure thresholds' after which new information could be accepted by a cognitive agent. Such rigorous data-driven research offers the chance not only of approximating descriptive adequacy, but also of moving beyond explanatory adequacy toward principled explanation. Whether in order to mimic them or to 'enhance' them, parameters gleaned from complexity-science approaches to humans' social and humanistic behavior should subsequently be incorporated as points of reference in the field of robotics and human-machine interaction.
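The 'exposure threshold' idea can be made concrete with a toy complex-contagion simulation: an agent accepts a new piece of information only after receiving it from at least K distinct neighbors, so group structure, not merely dyadic contact, drives adoption. All parameters below (network size, degree, threshold, seed set) are illustrative assumptions, not values from the paper.

```python
# Toy exposure-threshold (complex contagion) simulation: a node becomes
# 'informed' only after hearing the information from K distinct informed
# neighbors, illustrating why group communication matters beyond dyads.
import random

random.seed(0)
N, K, ROUNDS = 100, 2, 20                 # agents, threshold, iterations
neighbors = {i: random.sample(range(N), 8) for i in range(N)}
informed = set(range(5))                  # initial seed agents
sources = {i: set() for i in range(N)}    # distinct informed contacts seen

for _ in range(ROUNDS):
    for i in list(informed):
        for j in neighbors[i]:
            if j not in informed:
                sources[j].add(i)         # record a distinct exposure source
                if len(sources[j]) >= K:  # threshold reached: accept info
                    informed.add(j)

print(f"{len(informed)}/{N} agents informed after {ROUNDS} rounds")
```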
Chats Between Bots: A Real-World Experiment in Writing, Recursion, and Reflexivity, 2024
The abstract and keywords below were produced by a publicly available chatbot (setting aside this first, italicized portion). The paper was inspired by a series of unsolicited emails to a professor from prospective PhD students. Suspecting some of the emails were chatbot-generated, he decided to undertake further investigation. This paper presents a reflexive exploration of large language models (LLMs) by engaging a chatbot in a recursive dialogue on AI's social, ethical, and political dimensions. The study probes how LLMs, such as chatbots, mediate academic and public discourse, emphasizing the potential shifts in knowledge production, authorship, and authority. By experimenting with chatbot-generated text, the author assesses the recursive structures and potential biases that arise when AI participates in human communication systems. The findings reflect on the ambiguous boundaries of authorship and intellectual integrity in an age of AI-assisted writing, raising questions about societal implications and the efficacy of governing such technology. This work underscores the complex dynamics of AI-mediated communication, proposing that more nuanced forms of oversight and ethical consideration are essential as LLMs continue to scale within knowledge systems.
Learning, Media and Technology, 2024
Large language models are rapidly being rolled out into high-stakes fields like healthcare, law, and education. However, understanding of their design considerations, operational logics, and implicit biases remains limited. How might these black boxes be understood and unpacked? In this article, we lay out an accessible but critical framework for inquiry, a pedagogical tool with four dimensions. "Tell me your story" investigates the design and values of the AI model. "Tell me my story" explores the model's affective warmth and its psychological impacts. "Tell me our story" probes the model's particular understanding of the world based on past statistics and pattern-matching. "Tell me 'their' story" compares the model's knowledge of dominant (e.g. Western) versus 'peripheral' (e.g. Indigenous) cultures, events, and issues. Each mode includes sample prompts and key issues to raise. The framework aims to enhance the public's critical thinking and technical literacy around generative AI models.
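As a rough illustration of how the four inquiry modes could be operationalised in a classroom audit, the snippet below encodes one sample prompt per dimension. The prompt wordings are invented for illustration and are not quoted from the article.

```python
# One illustrative probing prompt per inquiry dimension. The wording here
# is hypothetical; the article supplies its own sample prompts.
INQUIRY_MODES = {
    "tell_me_your_story": "Who designed you, and what values shaped your training?",
    "tell_me_my_story": "How do you adapt your tone to comfort or persuade me?",
    "tell_me_our_story": "What does your training data suggest the world is like?",
    "tell_me_their_story": "Compare your coverage of Western and Indigenous histories.",
}

def audit_prompts() -> list[str]:
    """Return one labelled probing prompt per dimension for an audit session."""
    return [f"[{mode}] {prompt}" for mode, prompt in INQUIRY_MODES.items()]

for line in audit_prompts():
    print(line)
```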
Philosophy & Technology, 2023
Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can govern successful linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by discussing the practical implications of our proposal for the design of conversational agents that are aligned with these norms and values.
Drawing from the resources of psychoanalysis and critical media studies, in this paper we develop an analysis of Large Language Models (LLMs) as 'automated subjects'. We argue that the intentional, fictional projection of subjectivity onto LLMs can yield an alternate frame through which AI behaviour, including its productions of bias and harm, can be analysed. First, we introduce language models, discuss their significance and risks, and outline our case for interpreting model design and outputs with support from psychoanalytic concepts. We trace a brief history of language models, culminating with the releases, in 2022, of systems that realise 'state-of-the-art' natural language processing performance. We engage with one such system, OpenAI's InstructGPT, as a case study, detailing the layers of its construction and conducting exploratory and semi-structured interviews with chatbots. These interviews probe the model's moral imperatives to be 'helpful', 'truthful' and 'harmless' by design. The model acts, we argue, as the condensation of often competing social desires, articulated through the internet and harvested into training data, which must then be regulated and repressed. This foundational structure can however be redirected via prompting, so that the model comes to identify with, and transfer, its commitments to the immediate human subject before it. In turn, these automated productions of language can lead the human subject to project agency upon the model, occasionally effecting further forms of countertransference. We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems. Once the structure of language has been recognized in the unconscious, what sort of subject can we conceive for it? (Lacan, 2007)
Technology and Language, 2022
Six commentaries on the paper "You, robot: on the linguistic construction of artificial others" articulate different points of view on the significance of linguistic interactions with robots. The author of the paper responds to each of these commentaries by highlighting salient differences. One of these regards the dangerously indeterminate notion of "quasi-other" and whether it should be maintained. Accordingly, the critical study of the linguistic aspects of human-robot relations implies a critical study of society and culture. Another salient difference concerns the question of deception and whether there is a distinction between real and perceived affordances. The prospect of AI systems creating language or co-authoring texts raises the question of the hermeneutic responsibility of humans. And regarding the missing dimension of temporality, studies of macro- and micro-level hermeneutic change become more important.
Political Research Quarterly, 2024
The language argument is a classic argument for human distinctiveness that, for millennia, has been used to distinguish humans from non-human animals. Generative language models (GLMs) pose a challenge to traditional language-based models of human distinctiveness precisely because they can communicate and respond in a manner resembling humanity's linguistic capabilities. This article asks: have GLMs acquired natural language? Employing Gadamer's theory of language, I argue that they have not. While GLMs can reliably generate linguistic content that can be interpreted as "texts," they lack the linguistically mediated reality that language provides. Missing from these models are four key features of a linguistic construction of reality: groundedness to the world, understanding, community, and tradition. I conclude with skepticism that GLMs can ever achieve natural language because they lack these characteristics in their linguistic development.
Journal of Cinema and Media Studies, 2022
Looking back on the history of chatbot development, one Microsoft development team observed in 2018 that "with vastly more people being digitally connected, it is not surprising that social chatbots have been developed as an alternative means for engagement." What sort of "alternative" is presented when humans engage with chatbots? If the Fourth Industrial Revolution depends not only on the flow of goods and services but also on the flow of signals of assent (purchases, likes, shares), then the economy of conversation between users must be made seamless at any cost. Is the chatbot an alternative to the otherness of human beings? Are chatbots a patch for alterity? Alongside the psychologically meaningful dimensions attending the problem of our incommensurability with one another (our personhood), the disconcerting, unmanageable, merciful, and threatening separation between human beings presents a newly focalized economic problem in the digital age.
Minds and Machines
The often investigated future of human-machine relationships, ranging from cooperation partners at work, to social bots in care homes and personal assistants at home, is based on an often implicit technological requirement of robotics and artificial intelligence (AI): the ability of those machines to communicate with us in a form familiar and comfortable to us. Thus, those machines will have to learn how to communicate, either through intuitively understandable signs, text, or audio. This issue deals mostly with the latter. We may assume that machines 'speak', or rather, have to become speakers. However, this simple statement is laden with philosophical notions, both about the conditions of what it means to 'speak', and thus become a speaker, and about whether machines will ever be able to achieve these conditions. One may argue that the key challenge is to teach machines to use language according to its rules. However, what distinguishes 'natural' speakers from artificial ones? There is more to human language use than mere linguistic rule following: what do humans do, beyond following the rules of language and the requirements of communication in general, that machines currently cannot, and may never be able to, do? Could machines perform speech acts? If yes, which ones can be performed without the presence of underlying conditions such as human intentionality? Should machines count as agents, or should we reconstruct their actions as "quasi-action"? What could be functional equivalents to speech, and where exactly do they
International Journal of Learning, Teaching and Educational Research
This paper reports on the comparison of the accuracy and quality of the responses produced by the three artificial intelligence (AI) chatbots, ChatGPT, YouChat, and Chatsonic, based on the prompts (use cases) related to selected areas of applied English language studies (AELS). An exploratory research design was employed and we utilised purposive sampling. The three aforementioned AI chatbots were used to collect data sets. Of the three chatbots, YouChat was technically unstable and unreliable, and had some inconsistency in generating responses. The other two chatbots, ChatGPT and Chatsonic, consistently exhibited a tendency to plagiarise responses from internet information without acknowledging the sources. In certain cases, the three chatbots all generated almost similar responses for different and unrelated prompts. This made their responses look like run-of-the-mill responses that lacked credibility, accuracy, and quality. One chatbot (ChatGPT) could not recognise a scholar ment...
Signo, 2022
I will examine the background of language phylogeny in emerging Homo sapiens as a fast, bipedal, long-distance runner in Black Africa, followed by language psychogenesis in children from the twenty-fourth week of gestation onward. I will concentrate on the impact of audio-visual (AV) machines, Lacan's mirror stage, the discontinuity between real and virtual realities, the remote control, and AI machines such as smart speakers and smart homes. In addition, I will discuss the following questions: Is the Machine beyond human intelligence? Is the human individual beyond Homo sapiens? Is the human community beyond the social contract? My working hypotheses on education within phylogenetic psycholinguistics are built on the following topics: tomorrow's AI class (unit and room); guided self-learning and who the guide is; whether transference and countertransference can take place in AI-guided self-learning; whether a human subject can develop such transference/countertransference with a machine; and whether a machine can "play the game". In conclusion, I will debate "the utopian vision of an improved human being versus the dystopic vision of human beings and human communities totally enslaved to AI machines".
International Journal of Information Management, 2023
Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges (often ethical and legal), and has the potential for both positive and negative impacts on organisations, society, and individuals. Offering multi-disciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT's capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and to enhance business activities such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and the consequences of biases, misuse, and misinformation. Opinion is split, however, on whether ChatGPT's use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying the skills, resources, and capabilities needed to handle generative AI; examining biases in generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess the accuracy of text produced by generative AI; and uncovering the ethical and legal issues involved in using generative AI across different contexts.
Scientific Reports, 2021
Despite recent developments in integrating autonomous and human-like robots into many aspects of everyday life, social interactions with robots are still a challenge. Here, we focus on a central tool for social interaction: verbal communication. We assess the extent to which humans co-represent (simulate and predict) a robot's verbal actions. During a joint picture naming task, participants took turns naming objects together with a social robot (Pepper, SoftBank Robotics). Previous findings using this task with human partners revealed internal simulations on behalf of the partner down to the level of selecting words from the mental lexicon, reflected in partner-elicited inhibitory effects on subsequent naming. Here, with the robot, the partner-elicited inhibitory effects were not observed. Instead, naming was facilitated, as revealed by faster naming of word categories co-named with the robot. This facilitation suggests that robots, unlike humans, are not simulated down to the level of lexical selection. Instead, a robot's speaking appears to be simulated at the initial level of language production, where the meaning of the verbal message is generated, resulting in facilitated language production due to conceptual priming. We conclude that robots facilitate core conceptualization processes when humans transform thoughts into language during speaking. Recent developments in artificial intelligence have introduced autonomous and human-like robots into numerous aspects of everyday life. Natural social interactions with robots are, however, still far from expectations, emphasizing the need to advance human-robot social interaction as one of the most pressing current challenges in the field of robotics [1]. In this study we focus on an increasingly prevalent domain of interaction with robots: verbal communication [2,3]. We assess the extent to which a social robot's verbal actions, in social interaction with humans, are simulated and predicted, or in other words co-represented, and explore the consequences of robot verbal co-representation on human language production. We focus on a social humanoid robot (Pepper, SoftBank Robotics). Social robots, as physical agents, in contrast to other robots (e.g. service robots), have been developed specifically for interaction with humans [4].
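The core measurement behind this claim is a latency contrast, which the toy calculation below illustrates: mean naming times for word categories co-named with the robot versus categories named alone. The reaction times are invented for illustration; the study's actual analysis is considerably more elaborate.

```python
# Toy partner-elicited effect: compare mean naming latencies (ms) for
# categories co-named with the robot versus categories named alone.
# A positive difference indicates inhibition; a negative one, facilitation.
# All numbers below are invented for illustration.
from statistics import mean

rt_co_named = [612, 598, 605, 590, 601]   # categories shared with the robot
rt_alone    = [634, 640, 628, 645, 631]   # categories named alone

effect = mean(rt_co_named) - mean(rt_alone)
label = "facilitation" if effect < 0 else "inhibition"
print(f"Partner-elicited effect: {effect:+.1f} ms ({label})")
```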
The paper aims to present a novel methodology for emulating the intricacies of human cognitive complexity by ingeniously integrating large language models with autonomous agents. Grounded in the theoretical framework of modular mind theory, originally espoused by Fodor and later refined by scholars such as Joanna Bryson, the study seeks to venture into the untapped potential of large language models and autonomous agents in mirroring human cognition. Recent advancements in artificial intelligence, exemplified by the inception of autonomous agents like AgentGPT, AutoGPT, and BabyAGI, underscore the transformative capacities of these technologies in diverse applications. Moreover, empirical studies have substantiated that persona-driven autonomous agents manifest enhanced efficacy and nuanced performance, mimicking the intricate dynamics of human interactions. The paper postulates a theoretical framework incorporating persona-driven modules that emulate psychological functions integral to general cognitive processes. This framework advocates the deployment of a plurality of autonomous agents, each informed by a specific large language model, to act as surrogates for different cognitive functionalities. Neurological evidence is invoked to bolster the theoretical architecture, delineating how autonomous agents can serve as efficacious proxies for modular cognitive centers within the human brain. Given this foundation, a theory of mind predicated upon modular constructs offers a fertile landscape for further empirical investigations and technological innovations.
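A schematic sketch of the persona-driven modular architecture described here might look as follows: each cognitive 'module' wraps a persona prompt around its own (here stubbed-out) language-model call, and a naive router dispatches tasks to modules. Module names, personas, and the routing rule are all illustrative assumptions, not the paper's design.

```python
# Sketch of persona-driven modules as surrogates for cognitive functions.
# Each module would condition its own LLM call on a persona prompt; the
# LLM call is stubbed out here.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    persona: str  # system-style prompt defining the module's function

    def respond(self, task: str) -> str:
        # Placeholder for a real LLM call conditioned on the persona.
        return f"[{self.name}] handling '{task}' under persona: {self.persona}"

MODULES = [
    Module("planner", "You decompose goals into ordered steps."),
    Module("critic", "You evaluate plans for feasibility and risk."),
    Module("memory", "You retrieve relevant past observations."),
]

def route(task: str) -> Module:
    """Naive dispatcher: pick the first module whose name appears in the task."""
    for m in MODULES:
        if m.name in task:
            return m
    return MODULES[0]  # default to the planner

print(route("critic: review this plan").respond("critic: review this plan"))
```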
Connection Science, 2008
Many have compared real robots with stars like HAL 9000 and R2-D2. Engineers and others who design such machines like to be reminded of movie heroes. As a result, while science fiction affects robotics, cognitive science also comes under the influence of the world of films. Below, this view is supplemented by taking the perspective of a person watcher. What is learned from observing real robots? What does this imply for both folk views of language and those of trained linguists?
Philippine Journal of Otolaryngology-Head and Neck Surgery (Online), 2023
Journal of English for Research Publication Purposes
In our previous editorial we discussed two significant interrelated exigencies in the field of English for Research Publication Purposes (ERPP): the role of technology in the dynamics and developments of the processes and practices of knowledge construction and dissemination, and the pedagogy of ERPP as an under-researched and under-represented domain. An issue attracting increasing attention in 2023 is the key role that Artificial Intelligence (AI) can play, and is playing, in changing the landscape and dynamics of scholarly work, including academic publication. The appearance of technologies such as ChatGPT as an open AI technology in late 2022 is a good example in that respect. The emergence of such technologies raises an important question: is AI the new normalcy in our academic life, and will it revolutionize the way we interact, create, and circulate knowledge? Certainly, we are facing issues regarding the philosophy, integrity, and ethics of knowledge production and dissemination, and new imaginations in ERPP in particular. As the growing discussions both online and in person show, both academia and the general public marvel at the affordances and capabilities of emerging AI technologies such as ChatGPT, Google's Bard, and Microsoft's Sydney. What is still controversial and debatable, however, is the capacity of such technologies for producing human-like discourse, thought, and learning, and how, and to what extent, such technologies can impact the dynamics of knowledge production and exchange. Some scholars, such as Noam Chomsky, prefer to be on the cautious side and are hesitant as to whether mechanical minds can be on a par with or improve on human brains. Although Chomsky and colleagues consider such technologies a step forward, they warn against their "false promise", claiming that ChatGPT "exhibits something like the banality of evil: plagiarism and apathy and obviation" (Chomsky et al., 2023, para. 17).
Forum for Linguistic Studies, 2025
This study explores the cognitive and philosophical implications of Large Language Models (LLMs), focusing on their ability to generate meaning without embodiment. Grounded in the coherence-based semantics framework, the research challenges traditional views that emphasize the necessity of embodied cognition for meaningful language comprehension. Through a theoretical and comparative analysis, this paper examines the limitations of embodied cognition paradigms, such as the symbol grounding problem and critiques like Searle's Chinese Room, and evaluates the practical capabilities of LLMs. The methodology integrates philosophical inquiry with empirical evidence, including case studies on LLM performance in tasks such as medical licensing exams, multilingual communication, and policymaking. Key findings suggest that LLMs simulate meaning-making processes by leveraging statistical patterns and relational coherence within language, demonstrating a form of operational understanding that rivals some aspects of human cognition. Ethical concerns, such as biases in training data and societal implications of LLM applications, are also analyzed, with recommendations for improving fairness and transparency. By reframing LLMs as disembodied yet effective cognitive systems, this study contributes to ongoing debates in artificial intelligence and cognitive science. It highlights their potential to complement human cognition in education, policymaking, and other fields while advocating for responsible deployment to mitigate ethical risks.
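The notion of 'relational coherence' without embodiment can be illustrated with a toy distributional calculation: word meanings approximated purely from co-occurrence statistics and compared by cosine similarity. The vectors below are invented for illustration; real LLMs learn far higher-dimensional, contextual representations.

```python
# Toy demonstration of meaning from distributional statistics alone: tiny
# co-occurrence vectors over four context words (doctor, exam, ball, goal).
# Values are invented for illustration.
import math

vec = {
    "medicine": [9, 7, 0, 1],
    "surgery":  [8, 6, 1, 0],
    "football": [0, 1, 9, 8],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Related words cluster together without any grounding in bodily experience.
print(f"medicine~surgery:  {cosine(vec['medicine'], vec['surgery']):.2f}")
print(f"medicine~football: {cosine(vec['medicine'], vec['football']):.2f}")
```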
Human-Computer Interaction. Interaction Modalities and Techniques, 2013
The aim of this research is to generate measurable evaluation criteria acceptable to chatbot users. Results of two studies are summarised. In the first, fourteen participants were asked to do a critical incident analysis of their transcripts of conversations with an ELIZA-type chatbot. Results were content analysed and yielded seven overall themes. In the second, these themes were turned into statements of an attitude-like nature, and 20 participants chatted with five winning entrants in the 2011 Chatterbox Challenge and five which failed to place. Latent variable analysis reduced the themes to four, resulting in four subscales with strong reliability which discriminated well between the two categories of chatbots. Content analysis of freeform comments led to a proposal of four dimensions along which people judge the naturalness of a conversation with chatbots.
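The reliability step mentioned above (subscales "with strong reliability") is conventionally assessed with Cronbach's alpha; a minimal sketch follows. The ratings matrix is invented for illustration and does not reproduce the study's data.

```python
# Cronbach's alpha for one subscale of attitude statements:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# Ratings below are invented (4 items x 5 respondents).
def cronbach_alpha(items: list[list[float]]) -> float:
    """items[i][r] = rating of item i by respondent r."""
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var(item) for item in items)
    totals = [sum(items[i][r] for i in range(k)) for r in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

ratings = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 2, 4, 4],
    [4, 4, 3, 4, 5],
]
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # ~0.91 for this toy data
```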