
Anna's AI Anthology

How to live with smart machines?

This anthology is meant to make authors and readers think and talk - a book made by humans for humans.
As a special feature, there will be a graphic novel by Anna & Moritz Strasser.

Tentative timeline:
Call for Papers: 15.08.2023 | Deadline for abstracts: 15.09.2023 | Acceptance: 15.10.2023 | Deadline for full papers: 24.12.2023 | Intended publication date: January/February 2024

Editor: Anna Strasser, DenkWerkstatt Berlin
Publisher: xenomoi Verlag, Berlin

CALL FOR PAPERS (deadline was 15 August)
This anthology is inspired by the hybrid workshop 'Human and Smart Machines as Partners in Thought?' organized by Anna Strasser & Eric Schwitzgebel in May 2023 at UC Riverside.
Everyone who wants to participate in the CFP is encouraged to watch the videos of the workshop!
Link: https://youtube.com/playlist?list=PL-ytDJty9ymIBGQ7z5iTZjNqbfXjFXI0Q&si=noZ7bPGz-uMt6jmm

    Please send your abstract (max. 1,000 words) as a PDF or Word document, together with a short biographical note, to: berlinerdenkwerkstatt@gmail.com
    Please use the following subject line when submitting: Submission for CFP Anna's AI anthology
    Large language models (LLMs) like ChatGPT or other AI systems have been the subject of widespread discussion. This book project aims to deliver a comprehensive collection of philosophical analyses of our interactions with AI systems, their capacities, and their impact on our society.
    To address these issues, the following questions might serve as inspiration:

(1) How should we describe interactions with AI systems?
• tool use | "social kinds" | conversational partners | intermediate category
(2) What capacities can we ascribe to AI systems?
• agency (linguistic actions in conversations | producing text | making decisions)
• rationality (instrumental/reflective | mental representations | thinking | comprehension)
• consciousness – desires – sentience
(3) What impact will AI systems have on our society?
• challenging traditional distinctions between thinking partners and tools that merely produce strings of words
• emotional and intuitive pull to treat AI systems as sentient
• regulations – responsibility – control

INVITED & CONFIRMED AUTHORS:

Daniel Dennett
We are all Cherry-Pickers

Ophelia Deroy
Ghosts in the machine

Keith Frankish
What are large language models doing?

Eric Schwitzgebel with Anna Strasser
Quasi-Sociality: Toward Asymmetric Joint Actions

Paula Droege
Full of sound and fury, signifying nothing

Joshua Rust
Minimal agency in living and artificial systems

Sven Nyholm

Henry Shevlin

Michael Wilby


PRELIMINARY TITLES & ABSTRACTS
Daniel Dennett (Tufts University): We are all Cherry-Pickers
Large Language Models are strangely competent without comprehending. This means they provide indirect support for the idea that comprehension (“REAL” comprehension) can be achieved by exploiting the uncomprehending competence of more mindless entities. After all, the various elements and structures of our brains don’t understand what they are doing or why, and yet their aggregate achievement is our genuine but imperfect comprehension. The key to comprehension is finding the self-monitoring tricks that cherry-pick amongst the ever-more-refined candidates for comprehension generated in our brains.

Eric Schwitzgebel (UC Riverside) & Anna Strasser: Quasi-Sociality: Toward Asymmetric Joint Actions in AI Systems
What are we doing when we interact with LLMs? Are we playing with an interesting tool? Do we enjoy a strange way of talking to ourselves? Or do we, in any sense, act jointly when chatting with machines? Exploring conceptual frameworks that can characterize in-between phenomena which neither constitute clear cases of mere tool use nor fulfill all the conditions we tend to require for proper social interactions, we will engage in the controversy about the classification of interactions with LLMs. We will discuss the pros and cons of ascribing some form of agency to LLMs so that they can at least participate in asymmetric joint actions.

Henry Shevlin (University of Cambridge): LLMs, Social AI, and folk attributions of consciousness
In the last five years, Large Language Models have transformed the capacities of conversational AI agents: it is now entirely possible to have lengthy, complex, and meaningful conversations with LLMs without being confronted with obvious non-sequiturs or failures of understanding. As these models are fine-tuned for social purposes and tweaked to maximise user engagement, it is very likely that many users will follow Blake Lemoine in attributing consciousness to these systems. How should the scientific and philosophical consciousness community respond to this development? I suggest that there is still too much uncertainty, talking at cross-purposes, and confusion in debates around consciousness to settle these questions definitively. At best, experts may be able to offer heuristics and advice on which AI systems are better or worse consciousness candidates. However, it is questionable whether even this limited advice will shift public sentiment on AI consciousness, given what I expect to be the overwhelming emotional and intuitive pull in favour of treating AI systems as sentient. In light of this, I suggest that there is value in reflecting on the broader division of labour between consciousness experts and the broader public: given the moral implications of consciousness, what kind of ownership can scholars exert over the concept if their opinions increasingly diverge from folk perspectives?

Keith Frankish (University of Sheffield): What are large language models doing?
Do large language models perform intentional actions? Do they have reasons for producing the replies they do? If we adopt an ‘interpretivist’ perspective, which identifies reason possession with predictability from the intentional stance, then there is a prima facie case for saying that they do. Ascribing beliefs to an LLM gives us considerable predictive power. Yet at the same time, it is implausible to think that LLMs possess communicative intentions and perform speech acts. This presents a problem for interpretivism. The solution, I argue, is to think of LLMs as making moves in a narrowly defined language game (the 'chat game') and to interpret their replies as motivated solely by a desire to play this game. I set out this view, make comparisons with aspects of human cognition, and consider some of the risks involved in creating machines that play games like this.

Paula Droege (Pennsylvania State University): Full of sound and fury, signifying nothing
Meaning, language, and consciousness are often taken to be inextricably linked. On this Fregean view, meaning appears before the conscious mind and, when grasped, forms the content of linguistic expression. The consumer semantics proposed by Millikan breaks every link in this chain of ideas. Meaning results from a co-variation relation between a representation and what it represents, because that relation has been sufficiently successful. Consciousness is not required for meaning. More surprisingly, meaning is not required for language. Linguistic devices, such as words, are tools for thought, and like any tool, they can be used in ways other than originally designed. Extrapolating from this foundation, I will argue that Large Language Models produce speech in conversation with humans because the resulting expression is meaningful to human interpreters. LLMs themselves have no mental representations, linguistic or otherwise, nor are they conscious. They are nonetheless joint actors in the production of language, in Latour’s sense of technological mediation between goals and actions.

Joshua Rust (Stetson University): Minimal agency in living and artificial systems
Two innovations characterize the proposed account of minimal agency. First, the so-called “precedential account” articulates a conception of minimal agency that is not grounded in a capacity for instrumental rationality. Instead, I describe a variety of what Rosalind Hursthouse calls “arational action” wherein an agent acts in a certain way because it had previously so behaved under similar circumstances. Second, I consider the extent to which the precedential account applies, not just to a broad swath of living systems, including single-celled organisms, but to two categories of artificial system – social institutions and Large Language Models (LLMs).

Ophelia Deroy (LMU Munich): Ghosts in the machine - why we are and will continue to be ambivalent about AI
Large language models are one of several AI systems that challenge traditional distinctions: here, between thinking and just producing strings of words; there, between actual partners and mere tools. The ambivalence, I argue, is here to stay, and what is more, it is not entirely irrational: we treat AI like ghosts in the machine because this is simple, useful, and because we are told to. The real question is: how do we regulate this inherent ambivalence?