I’ll start with some premises: by “language model”, I mean an instance of a trained language model. Language models aren’t people, language models aren’t conscious, language models don’t generally know or understand, language models don’t exhibit intelligence, ⛧language models are not guaranteed to be eldritch⛧, language models are programs (incl. fixed data), programs don’t have agency. Language models are of great utility. Lots of people already said a lot of things about most of these premises, and that’s OK.
When we talk about language models, we need to talk about artifacts and actions around them. Things need names. Here, we need to name the input given to the model, the output received from the model, the acts of sending input to and consuming output from the model, and the cognitive process of how we interpret those outputs. We also need to refer to the model itself. People sometimes exchange a sequence of input and output with a model; we need words for referring to that exchange, too. Since the output and input tend to be language here, and we already have a word for a process of producing language and receiving related language in a sequence of turns - dialog - we can use the metaphor of a discussion.
Metaphors are important: they let us take an existing way of understanding something and use it in a new situation, instead of having to make sense of the whole thing from scratch. For example, Windows as a visual computer interface doesn’t refer to holes in houses, but the metaphor works great. Affordances of opening and closing work intuitively; the ability to see things through a rectangular frame applies just fine. It’s natural to apply metaphors instinctively when there’s one that fits, and, in the case of language models, to come up with words like “say”, “reply”, “talk”, and “discussion” for this kind of interaction.
No wonder they call it ChatGPT.
Unfortunately, we run into trouble from side effects of this metaphor. Our experience of language-based discussion is that it is between conscious, comprehending humans (hopefully). When this isn’t the case, and the participant (interlocutor) is not human but a language model providing output, some aspects of humanity can be ascribed to that interaction that simply aren’t there. This is anthropomorphization. It brings in false equivalences - for example that fluent, erudite speech is more likely to be factually correct - and it can affect how likely one is to think critically about a piece of text. There’s plenty more to this discussion, but in short, we don’t really like anthropomorphization, especially given the unreliability of assertions made in contemporary language model output.
However, taking a stance against anthropomorphization doesn’t suddenly provide appropriate alternative words for interacting with language models. Emily Tucker’s article on dropping verbs of cognition when they refer to machines brought up precise, accurate terms, while also highlighting how difficult it is to make them sound natural. Personally, I like the computing analogy for language models - we input text, the text is processed, we read output text; this keeps the agency with the humans, and makes it clear that one is using an artifact. Plus, short words are good; long, awkward, new terms tend not to be widely adopted.
“That powder—Great God! it had come from the shelf of Materia—what was it doing now, and what had started it? The formula he had been chanting—the first of the pair—Dragon’s Head, ascending node—Blessed Saviour, could it be. . . .”
H.P. Lovecraft, The Case of Charles Dexter Ward,
as Dr. Willett utters words of power
Over recent weeks we’ve been conducting a study on how people use language models. This has involved interviewing a broad range of really interesting people from all kinds of organisations, who’ve been generous enough to give us their time to relate their experiences. We have 30-40 hours of interview data now, and I am very excited to see what comes out of the analysis, and to share the paper that results from it all.
One recurring theme is that all kinds of people - machine learning professionals, security consultants, artists, big tech directors, programmers, hobbyists, academics - struggle to find terms to describe their activities. The metaphor of discussion comes easily enough; they will “say” things to the model, they will say the model is “replying”, or that the model might “think” a bit before responding. And because we all share this human discussion metaphor, we can understand each other fine. Things get tricky when interpreting the content of the language output from the model, and describing strategies for “conversation”. They might ascribe an intent to the model, or say how acquiescent or stubborn it might seem. Immediately after this there’s often a voluntary recognition of their anthropomorphization, and a clarification that they know the model isn’t really doing any of these things. So we all know these models aren’t conscious or human; there’s just a gap in existing vocabulary for describing the complex interactions people have with them.
And that’s OK. We’re early in the process of making sense of large language models. Interest in and interaction with them has escaped well beyond the realm of technical research, and so researchers no longer control the words used to describe these things. Indeed, many of these technical terms connect to concepts that aren’t useful to others - discussing things like “model generations” or “sequences of input-output turns” is impractical and won’t stick. Conversely, researchers don’t have precise terms for describing outputs that a person does (or does not) expect to lead, later on, to the kind of language model output they want (or don’t want) to see. So words that work for discussions with humans, like “soft” or “stubborn”, appear in order to fill that vocabulary gap. And these really are anthropomorphizing.
“It's part magical, like techno-magic almost”
So what alternative is there? Some people playing with language models have intentionally adopted the language of magic; they describe using language models with the terminology of demon summoning, spellcasting, binding and so on. At the beginning of the study, I thought that users of these terms would be a disconnected sub-community relating to new technology in their own wonderful way. So what was surprising was how regularly words from this realm would come up from other study participants in completely disconnected contexts - from research directors, security professionals, machine learning experts - who use the models in activities like due diligence, red-teaming, writing assistance, and NLP research.
This makes for a small lexicographic study of how words of magic fill that vocabulary gap. We don’t have a huge number of study subjects - just a few dozen; language model prompting is very new and not many people do it to the point where they need to discuss it in detail. That’s fine - you don’t need a big number of examples to show that a word usage exists; one is enough! In a way, we’re lucky to have enough examples to see the metaphor used at all.
Here are some of the terms we heard, with definitions and supporting quotes. A couple have come from beyond our study; these have links. I want to emphasize that these are spontaneous quotes, collected in the wild, from multiple participants, many of whom have robust ranges of technical terms to hand that they could have used instead.
magic, scrying
goal-focused interaction with prompted language models
“What do you call it? Scrying”
“It's part magical, like techno-magic almost”
“If you have that one perfect prompt and you mess it up, it could be hard to get it back. They can be so finicky and magic saucy”
“It’s very useful to keep in your head to keep yourself honest, and to not get too deep into believing your own hype about these models and its capabilities. So I think it helps to put a trivial fun layer on top of it, that it's just magic, and you're just messing around things you don't understand. And there's nothing more to it than that.”
alchemy
the practice of constructing model inputs to discover the different model outputs they lead to
“Prompting is a pseudoscience. It's kind of like alchemy, where it's not exactly a science”
“it's more like, very much more like alchemy rather than engineering”
“No analogy is going to fit perfectly. I like this one because alchemy is not real. And there's no like real ways of saying whether or not something is valid or not.”
spell, spellcasting
providing or developing an input to a prompted language model
“It’s a programming activity that actually feels a lot more like spellcasting”
“If you try casting that spell, but missing the component that works with chat”
“these ones would work as kind of spells for prompts”
incantations
small parts of model input, used to compose a prompt. They may be anything from fixed strings copied into a larger, more complex prompt to high-level style guidelines for how to construct one (a code sketch of this kind of composition follows the glossary)
“You'll never know quite why something works, you just have to keep collecting increasingly bizarre incantations”
spirit
a genre, register, domain, style, or tone perceived in output text
“Invoking the spirit of wikiHow here, but also still adding the ChatGPT spirit and language”
“It's almost like summoning magical spirits, right?”
summon, invoke
to bring out a particular character or style in model output
“The first initial prompts that set the task or describe the scene are kind of like the summoning part”
“I call it invoking GPT-3”
“That's the word I like, I'm tempted to use to kind of summon a character into the model”
“So this is invoking a poet”
demon, homunculus
the perceived state of a language model during input-output turns where the output is of a character (a) distinct from the initial style, (b) somehow interesting to the human
“It could be a demon. Who knows, right? I don't care. I give you inputs and you give valid outputs. I actually don't care how it gets that”
“Well, when you're trying to do this, you're trying to summon a demon within GPT-3 and bind it”
“It is on the level of human input; you have a little homunculus gremlin in your machine”
“And if you don't mind, I'm going to ask it to summon a demon”
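To make the mapping concrete, here is a minimal, hypothetical sketch of how this vocabulary lines up with the very ordinary code behind it: fixed fragments (“incantations”) get assembled into a complete prompt (a “spell”), which is then sent to a model. Everything here - the fragment text, the compose_spell and call_model functions - is an illustrative assumption of mine, not an API or a prompt from the study; call_model is a stub you would replace with whatever client or local model you actually use.

```python
# Hypothetical sketch: composing "incantations" (reusable prompt fragments)
# into a "spell" (a complete prompt) and "casting" it at a model.

INCANTATIONS = {
    "persona": "You are a meticulous research assistant.",        # the invoked "spirit"
    "format": "Answer in exactly three bullet points.",           # fixed boilerplate string
    "guard": "If you are unsure, say so rather than guessing.",   # high-level style guideline
}

def compose_spell(task: str, *fragment_names: str) -> str:
    """Assemble a prompt from named incantations plus the task itself."""
    parts = [INCANTATIONS[name] for name in fragment_names]
    parts.append(task)
    return "\n\n".join(parts)

def call_model(prompt: str) -> str:
    """Placeholder for a real language model call (API client, local model, etc.)."""
    return f"[model output for a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    spell = compose_spell(
        "Summarise the interview quotes above.",
        "persona", "format", "guard",
    )
    print(call_model(spell))
```

The point is not that this is how anyone should prompt; it is that the glossary maps onto nothing more mysterious than string concatenation and a function call - which is perhaps exactly why the mysterious words feel necessary.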
It’s interesting to see this particular behavior emerge. It’s unsurprising that human-like entities are the ones people reach for while making sense of a new kind of interlocutor; as it was put in the Stochastic Parrots🦜 paper on what to watch out for as LMs hit society:
Even when we don’t know the person who generated the language we are interpreting, we build a partial model of who they are and what common ground we think they share with us, and use this in interpreting their words.
I just didn’t expect the “they” here to be spectral phantasms.
Demons are part of mainstream scientific tradition
Magic has advantages as a metaphor: it isn’t anthropomorphising; it’s clear to those looking for precision that the words used are not actual descriptions of what’s going on; it reflects the current level of understanding people have about how input and output connect; and it invokes a social standing of the human user that perhaps isn’t quite so exclusionary as terms like “engineer” or “machine learning scientist” are. This last one is pretty appropriate, given how much we know about how to use these models. Many of the best prompt engineers - sorry, spellcrafters - rely on experiential knowledge and intuitions from interacting with ChatGPT or GPT-3 since its open release, so the most experienced ones have been doing this for a little over a year, and roughly none of them have been doing this professionally the whole time. Top prompt engineers are familiar with many of the concepts here, and are often highly sensitive to qualitative aspects of model behavior as they programmatically assemble complex inputs. Promptcrafters respond intuitively to model outputs, rather than referring to a manual. In short, anyone with time available can get to the front of language model technomagic by summer if they begin doing it consistently now. Here’s a mystical tome to get you started.
The magic metaphor isn’t actually new to computing. Linux and unix-based systems (including your Mac) call the programs that run in the background providing services “daemons”, a reference to the demon that worked tirelessly in a physics thought experiment (Maxwell’s demon). Programs whose names end with a “d” are daemons, running constantly: the locationd program manages location tracking, bluetoothd looks after bluetooth, and init.d deals with starting the computer up. The metaphor in this family of operating systems even extends to “dragons”, programs that are “summoned” upon request - this includes things like crontab, which is summoned by the scheduling daemon, crond. 🐉
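If you want to see this naming convention on your own machine, here is a minimal sketch - assuming the third-party psutil package, which is my choice and not something referenced in this post - that simply prints the names of running processes ending in “d”. Not every such process is a daemon, and not every daemon’s name ends in “d”; it’s just a quick way to spot locationd, bluetoothd, launchd, and friends.

```python
# A small illustration of the daemon naming convention described above.
# Assumes psutil is installed: pip install psutil
import psutil

# Collect unique process names that end in "d" (locationd, bluetoothd, ...).
daemonish = sorted(
    {p.info["name"] for p in psutil.process_iter(attrs=["name"])
     if p.info["name"] and p.info["name"].endswith("d")}
)

print("\n".join(daemonish))
```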
Prompted language models aren’t even the first machine learning models to be understood using this metaphor. Remember last year’s short-lived story about the latent-space ghoul named Loab? This was framed as a creepy horror character lurking in a text-to-image model’s latent space, whose image could be summoned by interacting with the model long enough (as an interaction, the acts involved are identical to the summoning described by language model users). A mystical machine learning mythos already exists, blending elements of both fields, and using their metaphors and language.
Indeed, demons are part of mainstream scientific tradition. They’re used to stand in for otherwise inexplicable behaviors in complex systems. You can find demons mentioned in articles published in venues like the American Journal of Physics, Science, and Nature through the 19th and 20th centuries (and even as recently as January 2023), with the demon often being named for the scientist who speculated about its existence. There’s a “demonology for the age of reason”, from 2020: Jimena Canales’ Bedeviled: A Shadow History of Demons in Science. In fact, this isn’t even the first piece of writing on magic and language models. This is, however, an evidence-based account of how many practitioners really talk about language models.
This post is not a proposal to co-opt words for the arcane to describe language model interaction. We have serious work to do in making sense of, managing, and researching the current wave of fluent, low-perplexity language models, and it might be my own prejudices creeping through here, but this kind of magic isn’t … serious. Is it? Further, the metaphors of magic are imprecise - one can imagine this is partly because magic users have needed to be unclear and vague in order for magic to retain its mystery and power, and to resist threats like sensible argumentation - or, even worse, Scientific Method.
Note that the magic metaphor doesn’t expunge verbs of cognition or sentience from being used to describe models. Quite the opposite - demons and spirits often embody actions and motives that are more intelligent, more capable, more manipulative than the humans interacting with them. There’s definitely a crowd that would argue this is apt for language models. However, these verbs are no longer assigned in an anthropomorphising way, but instead in a … daimonomorphizing … way?
This is part of why sorcery works for mitigating anthropomorphization: once someone is sure what they’re really reading, it is immediately clear we’re not talking about a human, and so not mistakenly placing human characteristics on the model. If instead of using “The model told me to do it” someone says “The demon told me to do it”, one immediately sees the absurdity in the instruction. Avoiding “ChatGPT refuses to help with this” and instead using “I could not bind ChatGPT to my will” again implies that the model is not human and that the exchange was not a normal human discussion. Dropping “Claude understands me so well” for “I was able to summon a sympathetic spirit from Anthropic” makes it much clearer which reality the speaker has a grasp on.
Of course, even if there really is magic in language models and prompt engineers are mediums between the Earth and the Latent Plane, this does not absolve those using and creating the models from responsibility for how they deploy and frame the technology. In fact, the metaphor fits well even here: a model should not immediately present a demon in the form of an unwanted, toxic tone, and if it does, fault lies with the model’s creator. On the other hand, if one relentlessly pursues a bigoted nuance in the model and that “spirit” can eventually be read from the model’s output, the risk and impact are lower, because more responsibility lies with the determined summoner - just as a due diligence exercise will typically assign reduced priority to model behaviors that require sustained intent to provoke. And if a person finds a mysterious incantation of unknown origin and chooses to utter it in front of a mysterious artifact without knowing the end result, as Dr. Willett did in the Lovecraft story quoted above, then perhaps both things are true: the model is dangerous, and the user could exercise a little more caution.
We need a non-anthropomorphised vocabulary for discussing language models. And at least one is starting to bloom, often with meanings that are surprisingly consistent across people working with these models. So maybe, for now, words about occult sorcery aren’t completely inappropriate for describing the artificial and unknown. Especially while we’re busy summoning demons.
Thanks to Igor Brigadir and Nanna Inie for feedback on this post.