Y’know how at the top I talked about monopolies, and then about AI? I’m going to tie them together right here.
To an LLM, the only thought possible is the prediction of the next word (the completion) in response to a stimulus (the prompt). That is, you can prompt ChatGPT with a question or sentence, and ChatGPT responds with the word that is most likely to show up next. And then the next word. And then the next.
It makes these predictions based on all the language it has learned from so far, using the static relationships among the tokens that make up that language. That’s it.
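To make that loop concrete, here’s a minimal sketch in Python. It uses a tiny hard-coded table of next-word probabilities in place of a real trained model; the table, the words, and the numbers are purely illustrative:

```python
# Toy sketch: greedy next-word prediction from a fixed probability table.
# A real LLM learns these relationships over tokens from its training data;
# here they are hard-coded just to show the shape of the loop.

NEXT_WORD_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def complete(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break  # nothing learned about this word; stop
        # Pick the single most likely next word, append it, repeat.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(complete("the"))  # -> "the cat sat down"
```

A real model works over tokens rather than whole words and learns billions of these relationships instead of three, but the loop is the same: look at what’s there, append the most likely next thing, repeat.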
So the LLM’s world exists entirely within itself, and it is fully described by that world: a genericization of all the information that went into it.
That’s fundamentally different from the world that exists for a human. My world exists not only within my own understanding, but also in my interactions with the world. I don’t have a training_on=FALSE mode like the AI; I don’t get to choose whether I learn from this world. And even better, I’m sharing this world with billions of other entities. Anything I do, whether it’s writing a new book or grocery shopping or making software, happens alongside other people and their experiences.
Setting aside the jokes about the difficulties of group projects, we all benefit from doing these projects with other entities, each of which has its own imperfect-but-usually-grounded-in-truth dataset and learning. Having diverse viewpoints means that we come up with ideas we wouldn’t have come up with individually.
This makes us more resilient as a species, because we don’t create single points of failure. The more we rely on monopolies for services, the more brittle the systems that rely on those services become (see the CrowdStrike outage, for example). With AI, we’re relying on monopolistic thought: a very few “foundational models” that have homogenized an unknown number of sources, with undocumented biases, into individual entities that spit out genericized content.
AIs can only emulate thought in a monopolistic way, as a result of recursive homogenization of inputs and training. I prefer to work with people rather than with AI, because then we get the diversity of their experiences, ungenericized and ready for novel creations.
Or, as the great Octavia Butler wrote in Parable of the Sower (the novel is set, fictionally speaking, starting on July 20, 2024):
Embrace diversity.
Unite—
Or be divided,
robbed,
ruled,
killed
By those who see you as prey.
Embrace diversity
Or be destroyed.