Published: 2024-11-07
Abstract
It is undeniable that conversational agents have taken the world by storm. Chatbots such as ChatGPT (Generative Pre-trained Transformer) are used for translations, financial advice, and even as therapists by millions of users every month. When interacting with technology we should be careful, especially when we do so through natural language, since our relationship with artificial agents is shaped by the technology's features and the manufacturer's goals. The paper, organized into three sections, explores the question of whether ChatGPT's output can be described as 'bullshit'. The first section focuses on ChatGPT's architecture and development; the second presents a new formulation of Frankfurt's concept of 'bullshit', highlighting its central features of indifference, deception and manipulation; the last section tackles the title question and proposes an affirmative answer, arguing that ChatGPT can be considered a 'bullshit' generator.
I would loosely define “Bullshit” as something composed to sound plausible, without any relation to or concern with the truth.
By that loose definition, ChatGPT is nothing BUT Bullshit. It strings statements together purely because they sound right, with exactly zero concern for what is actually true.
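To make that point concrete, here is a minimal, hypothetical sketch (in Python) of the next-token selection step that autoregressive models like ChatGPT are built around. The candidate tokens and scores are invented for illustration, and real systems add further layers (instruction tuning, RLHF, safety filters), but the core selection criterion in the base loop is plausibility given the preceding text, not truth.

```python
import math
import random

# Toy illustration (not ChatGPT's actual code): an autoregressive language
# model picks each next token according to how plausible it is given the
# preceding text. Nothing in this loop consults a source of truth.

# Hypothetical scores a model might assign to candidate next tokens after
# the prompt "The capital of Australia is" (values are made up).
candidate_logits = {
    "Sydney": 2.1,      # sounds plausible, but wrong
    "Canberra": 1.9,    # correct, yet not necessarily the highest-scoring
    "Melbourne": 0.7,
    "kangaroo": -3.0,
}

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    z = max(logits.values())
    exps = {tok: math.exp(v - z) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next_token(logits):
    """Sample a token in proportion to its plausibility; no truth check."""
    probs = softmax(logits)
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(candidate_logits))  # may well print "Sydney"
```

The sketch shows why the "indifference to truth" framing has bite: the only quantity the loop optimizes is how likely a continuation sounds, so a fluent falsehood and a fluent fact are treated by exactly the same mechanism.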