Ways of seeing Generative AI

Thoughtform of the Music of Gounod (1905)

Kandinsky insisted that in his time, the viewer had become inured to any kind of inner meaning, what he called Stimmung, in art. This deeper apprehension, he contended, was impeded by the fact that art was preoccupied with representing forms in the natural world. The art of the future, he said, would break through this barrier through the use of pure color and shape — in a word, abstraction. Richard Smoley, The Future of Thought Forms

Generative AI (GAI) is a general-purpose computing tool that eats language. It blends oral culture with software programming. We can now program in natural language: we can say what we want in English, and the computer will do it.

On the first anniversary of the public release of ChatGPT, it is quite difficult to see where GAI may take us. By definition, its level of abstraction means it operates at the root of language itself. It holds no niche, yet may seep across every niche. To help me think about this, I’m listing some favourite frames, metaphors, dimensions and questions from the past year of reading and playing with GAI, in the hope they may serve as an “imagination stack” to guide a continuous “sideways glancing” at GAI.

Most conversations about AI are a hunt for the right metaphor. Ben Evans

If GAI eats language, how then do I think about it? What is their intention to know? Is it us versus them? Is it an animal that reads minds? Do I think solely in words? If I do not, then I can hide myself from their consumption. But what if I wish to expose the “embodied entities of the imaginal realm, the visible, vibrating states” of myself to this natural language API?

If we were to adopt the pessimist’s pose, these silly, fumbling questions might seem like all we have left. But taken seriously—all ideas necessitate the optimist’s stance when they are labeled serious—this wordplay can transform itself into a sort of cleromancy to guide future actions. The deeper the abstraction, the more its apprehension needs wordplay, reversals, sleights of hand and metaphor to break down rote forms of thought. And if you can see it that way, as a placebo made of words, then any outcome can provide the next unfolding, letting you take advantage of all that you don’t know yet.

As such, here’s a list of lenses for glancing at GAI:

Fantasia

Machine learning doesn’t automate experts — it gives you infinite interns. That probably applies to generative models as well. Ben Evans

No, they’re not taking your job. Yet. But you can already use them to replace parts of your current job so you can do better things. But only if you have either the time and wherewithal to tinker, or the prior know-how to delegate.

The GAI we have now does not automate the work of people. Instead it works best as an augmentation and accelerant for a person who already has deep expertise in some domain. It’s both deeply flawed and magnificent, and when used en masse it provides flashes of momentum, allowing people to “get past the hard part” of writer’s block, code rubber-ducking, defining the first steps in a foreign knowledge-work process, and so on.

This means that if you have taste within some domain, you can start to think about how you might augment what you do. If everyone can use LLMs to speed up some workflow, the difference in the way you perform that workflow is going to be a matter of your taste (or what I would call “the libraries of intuition at one’s fingertips”).

Hari and Kris, Solaris

We don’t need other worlds. We need a mirror. Kris, Solaris

Unlike Kim Jong Un, I have no need of a double. Nor do I think I’m worth cloning. There are, perhaps, aspects of myself worth cloning: just as I want good advice, I can also probably give good advice to others. These parts might be worth cloning for certain people. And certainly, if we follow this vein, ChatGPT already consists of millions of clones of parts of people.

GAI is in many ways a replication of our thoughts and words. The notions of cloning (genetically identical copies of people) and mirrors (reflections of light) bring into focus deep-seated assumptions about human identity and the perception of the Other.

I recommend watching Solaris as your entry point to thinking about cloning intelligence and its implications (far more so than Her, which is really a sequel to Lost In Translation by the ex-husband of that film’s director). If you can be doubled, what is you? Does the clone retain that 21 grams of consciousness?

There is no Hari. She’s dead. You’re just a reproduction, a mechanical reproduction. A copy. A matrix. Kris, Solaris

The image in a mirror seems like a clone of light: a two-dimensional projection of three-dimensional objects. Not quite the real thing, mirrors symbolize the thin line between reality and illusion, questioning what is true and what is merely a reflection. Just as film is a dream that mirrors us, GAI also captures the light of our past selves:

I saw all the mirrors on earth and none of them reflected me… Jorge Luis Borges, The Aleph

The observer effect involves a change in the state of what is being observed, such as a particle’s position or momentum. In contrast, a mirror simply reflects light without altering the physical state of the object being reflected. But our relationship with GAI will more than likely be akin to the observed particle: what we see of ourselves in the GAI mirror will change us, just as any phenomenon attended to can change us. Except that the quality of this mirror is not yet known and perhaps of an altogether different resolution.

No phenomenon is a real phenomenon until it is an observed phenomenon. John Wheeler

What here is substance, not just form? Arthur C. Clarke’s story Technical Error comes to mind, in which a physical accident transforms a person into his mirror image. The protagonist does not simply witness the technological unveiling; through the discovery of his own complement, he becomes it. This is the story that Clarke’s Third Law comes from:

Any sufficiently advanced technology is indistinguishable from magic.

The mirror image becomes the revelation that our normal state was always incomplete. The substance is that advanced technology is autopoietic, a trajectory that GAI seems to allude to.

And speaking of Protagonists, another film-as-allegory is Tenet, where the Protagonist realises he has created a “turnstile” only after he duplicates/inverts himself continuously across time. How might AI be a turnstile that inverts entropy, destroying our past? That’s an interesting question, especially when we seem to live in an eternal present reconstituted by social media culture every day.

As of my knowledge cut-off in September 2021, I am not aware of any recent landslides in Batang Kali, Malaysia ChatGPT, February 2023

Consider as well Twin Peaks 3, the inscrutable masterpiece which hinges on the refracting doppelgänger as malicious spirit. The uncanny, incarnate:

You did good. You follow human nature perfectly. Dale Cooper’s doppelgänger

Consider too the idea of tulpas, which I learned about from Twin Peaks 3 and which sure sounds programmatic:

Tulpas were conjured duplicates of individuals. The tulpas were manufactured from a seed and organic material from the template — such as hair — and they could retain memories from their templates.

GAI is functionally tulpaic—it’s sustained human intentionality (“training”) creating quasi-autonomous cognitive entities from “seeds” (data) and “organic material” (human linguistic patterns). If I so fantastically wish, I can say GAI is an applied contemplative technology.

Mirrors, doppelgangers, clones. Symmetries of metaphor. Perhaps we can say that GAI might become the left hand to our right, some better angels of our nature? Might GAI be chiral in its non-superposable reflection of us? Can we use word puns to arrive at an understanding of GAI as an irreducibly dual system for human handedness?

On Earth, the amino acids characteristic of life are all “left-handed” in shape, and cannot be exchanged for their right-handed doppelgänger. Meanwhile, all sugars characteristic of life on Earth are “right-handed.” The opposite hands for both amino acids and sugars exist in the universe, but they just aren’t utilized by any known biological life form Must the Molecules of Life Always be Left-Handed or Right-Handed?

This is apt because, as of the time of this post, ChatGPT has trouble rendering hands. But it probably won’t for long.

The Oracle

AI is nothing without people Lapsus Lima

We assume something that speaks to us is sapient because only humans speak. Thus we implicitly assume language is sapience. This is no longer true. Though LLMs are perfectly capable of talking to us, they are not sapient.

My guess is that people will naturally project the status of oracle onto GAI, because they will unconsciously conflate language with sentience. This will be a deeply rooted mistake, yet one obscured from us in the same way we forget that we breathe unconsciously.

Mirror, mirror on the wall, who’s the fairest of them all? The Evil Queen, Snow White

Mention of oracles, mirrors or otherwise, presumes a projection of imagination. The projection of imagination is literally fantastic: the Greek phantasia means an appearance from imagination. We want to be fantastic with our projections of handedness! What matters is the awareness around such thinking, especially if we ourselves do not understand how LLM “black boxes” really work. What device are you really playing with? From daemon to demon, a single letter changes everything. Thus, one must take care to placate one’s own biases before asking questions of an LLM (what is currently called “prompting”).

ChatGPT looks intelligent because we are intelligent. We are filling in a lot more blanks than we realize—grounding everything in our bodied experience, giving it meaning. Brother Phar

It’s agreed by all that “hallucination” is a problem: don’t simply believe what the LLM says; make sure you double-check what it cites, and so on. But GAI doesn’t hallucinate. It has a theory of language, from which it may deduce a theory of mind (non-biologically, without a body), but it isn’t sentient. It’s we, the sentient, who hallucinate, and fantastically so. What we think GAI is says more about us than it does about GAI. It reflects us.

Any supposed oracle is a not-oracle, unless you’re willing to look sideways at it as a talisman that reflects one’s own embodiment of the “imaginal realm, visible and vibrating states of what remains generally invisible”. The connection need only be conceptual, speculative even, to be useful. From simple attendance to some form of one’s own thought, through to systematic divination processes, you may take any sign you wish, and it can tell you something about your next intention, if you wish. Nobody else can answer the questions you have of yourself except for yourself. But you may use devices to prompt yourself. GAI, used wisely, can be that device.

There is a process that involves radically increasing sensitization to signs. […] And that’s something you can confirm for yourself as something that’s incredibly strange and outside the logos and the known that we think of things, yet it somehow has these effects. Scott Mannion in conversation with Nick Land

Oracles only exist when we are actively engaged in processes of contemplation beyond rote thought. And even when we can access them, they don’t tell us the answers. Instead they prompt us to attend to ourselves.

I was just thinking of Solaris, which I always thought about as this story about contacting a truly alien alien. Now it’s like, well, this is a little bit of what we’re doing with virtual reality and AI. It’s like, what would happen if you could actually talk to your dreams, if you could revive people? You could have the mimicry of consciousness, the appearance of consciousness, without a consciousness. Jacob Mikanowski, Conversations with Tyler

Ana Stelline, Blade Runner 2049

Like John Lennon said, give me a tuba and I’ll get something out of it Frank Costello, The Departed

New user behaviours tend to underlie market shifts because they often start as “fringe secular movements the incumbents don’t understand, or don’t care about”. So, what’s happening at the margins? What scenius is doing weird things with this?

In this sense, it is helpful to think like an artist. How might a filmmaker, writer or poet make sense of this medium? How would they write about it in their stories? How are you going to get something out of this new tuba?

Attempted answers to any such questions are doomed without some formal process of cleromancy to instigate new perceptions. Don’t answer them directly, let them simmer, participate in disassembling rote thought by watching films or reading fiction, and try to glance sideways. What would Cy Twombly say? Can you hire Agnes Martin as your prompter-in-residence in your mind?

It is time for prompt engineers to read Impro, watch films, go see live theatre and dial up the humour setting. Films are “possible worlds” stories. Theatre improvisation is “possible worlds” alive.

Thinking like a scriptwriter, a film director or a novelist seems just as important to creating AI-first products, because “right now, LLM tools, and auto agents specifically, are more a people problem than a math or AI problem”.

Wallace needs my imagination to maintain a stable product Dr. Ana Stelline, Blade Runner 2049

Cat’s Cradle

If LLMs use natural language, then why do we have to learn “prompt engineering”?

Before posting any prompt, it should be required that you test out just asking the question without all the “imagine you are a divine being with special powers” stuff. It is text based but hardly needs all these extra words. Steven Sinofsky

Sinofsky is right, if you know what you’re actually asking for. Have you ever tried to explain a film you love to someone who doesn’t know why you care so much? Now try explaining what you want to a computer. What we say is often “picked up” by other people, and gaps in understanding are filled in by all our micro-behaviours, confirmatory conversation and, above all, the shared context of common memories. This mistake-laden transmission process is essential to creativity: memes mutate with every communication, and what one person mishears can be used as the seed of an exciting new idea.
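Sinofsky’s test can be made concrete without touching any model at all. A minimal sketch, where the preamble text and function names are purely illustrative (not from any real prompting library), just measures how much of an elaborate prompt is scaffolding rather than the actual ask:

```python
# A hypothetical persona preamble of the "imagine you are a divine being" sort.
PERSONA_PREAMBLE = (
    "Imagine you are a divine being with special powers, "
    "a world-renowned expert in every field. "
)

def bare(question: str) -> str:
    """The question alone, with no role-play scaffolding."""
    return question.strip()

def dressed(question: str) -> str:
    """The same question wrapped in the persona preamble."""
    return PERSONA_PREAMBLE + question.strip()

def scaffolding_overhead(question: str) -> float:
    """Fraction of the dressed prompt spent on scaffolding, not the ask."""
    return 1 - len(bare(question)) / len(dressed(question))

q = "Summarise the plot of Solaris in two sentences."
print("bare:   ", bare(q))
print("dressed:", dressed(q))
print(f"scaffolding overhead: {scaffolding_overhead(q):.0%}")
```

Sinofsky’s point, in this framing: before shipping the `dressed` version, send the `bare` version first and see whether the extra words earned their keep.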

In most cases, the meaning of a word is its use. Wittgenstein

Prompting is a good lesson in understanding how little our words contain the true meaning we intend. How we say something is just as important as what is said. A good way of extracting oneself from the hard loops of explanation is to take oneself out of the loop entirely, and pretend to be someone else. As such, theatre improvisation is a good way of thinking about the potential of prompting:

The improviser has to understand that his first skill lies in releasing his partner’s imagination. Keith Johnstone, Impro

To extract the “handedness” one wants from the interplay with GAI devices, indeed to determine what one wants at all, is to set the stage for self-authoring games. Suggestion, stories and play unfold GAI interactions.

Suppose I think of a story and you guess what it is. Keith Johnstone, Impro

Josie, Annihilation (2018)

Corruptions of form, duplicates of form… Echoes… It refracts everything. Josie, Annihilation

GAI is a new computing abstraction where you can say what you want in English, and the computer will do it. But do what? Well, possibly anything. And that’s the wicked problem. Learning to use AI starts with looking at ourselves in the mirror, to see anew how we use language to enhance ourselves.

GPT-3 is as much evidence of machine intelligence as a mirror, or a radio, is evidence of machine intelligence. What GPT-3 is revealing, or rather reflecting, instead is the vastness, depth, and diversity of human intelligence. […] The real work, the work of seeing and understanding, is being done by the human looking in the mirror image. John Manoochehri, GPT-3 and the Digital Turk

The most generative way of seeing LLMs is not as a matrix or a copy or even as a seed but as a materialisation of your conceptions, taking care to choose what you wish to project onto the mirror image you see. What you think-imagine-perceive-project is what you see-hallucinate-distort.

AI is a mirror. And a mirror can be a vertical body of water. Jump in.