Babel

Could a machine have an unconscious?

by Meghan O'Gieblyn

0 terms, 2 notes

O'Gieblyn, M. (2021). Babel. n+1, 40, pp. 37–60.

GPT-3’s most consistent limitation is “world-modeling errors.” Because it has no sensory access to the world and no programmed understanding of spatial relationships or the laws of physics, it sometimes makes mistakes no human would, like failing to correctly guess that a toaster is heavier than a pencil, or asserting that a foot has “two eyes.” Critics seize on these errors as evidence that it lacks true understanding, that its latent connections are something like shadows to a complex three-dimensional world. The models are like the prisoners in Plato’s cave, trying to approximate real-world concepts from the elusive shadow play of language.

But it’s precisely this shadow aspect (Jung’s term for the unconscious) that makes its creative output so beautifully surreal. The model exists in an ether of pure signifiers, unhampered by the logical inhibitions that lead to so much deadweight prose. In the dreamworld of its imagination, fires explode underwater, aspens turn silver, and moths are flame colored. “Let the facts be submitted to a candid world, Science has no color; it has no motherland; It is citizens of the world; It has a passion for truth; it is without country and without home.” To read GPT-3’s texts is to enter into a dreamworld where the semiotics of waking life are slightly askew and haunting precisely because they maintain some degree of reality. It writes Christmas carols in which Santa Claus and Parson Brown are riding together in a sleigh, defying the laws of time and space, or an article in which Joaquin Phoenix shows up to the Golden Globes in a paper bag (in real life, it was Shia LaBeouf, at the Berlin Film Festival, and the bag said “I’m not famous anymore”). Freud believed dreams were “of a composite character,” mixing different pieces of life, like a collage. Dreamwork required presenting the dream to the patient “cut up in pieces” and asking her to decode each symbol.

—p.46 by Meghan O'Gieblyn

A lesser-known outcome of the study is that it was seized on by critics of psychoanalysis as evidence that most (human) therapists are similarly offering unthinking, mechanical responses that are mistaken for something meaningful — a complaint that lives on in the term psychobabble, coined to describe a set of repetitive verbal formalities and standardized observations that don’t require any actual thought. The charge is in many ways typical of the drift of technological criticism: any attempt to demonstrate the meaninglessness of machine intelligence inevitably ricochets into affirming the mechanical nature of human discourse and human thought.

—p.53 by Meghan O'Gieblyn
