The corpus bride
I got my beta invitation to DALL-E 2, which creates art based on text prompts. You’ve probably seen the results floating around the internet by now: surrealist, AI-drawn illustrations in a variety of styles.
I can’t claim to fully understand its algorithm, but DALL-E is ultimately based on a huge corpus of information: OpenAI created a variation of GPT-3 that follows human-language instructions well enough to sift through collected data and create new works based on what it’s learned. OpenAI claims to have guarded against hateful or infringing use cases, but it can never be perfect at this, and will only ever be as sensitive to these issues as the team that builds it.
These images are attention-grabbing, but the technology has many other applications. Some are benign: the team found that AI-generated critiques helped human writers find flaws in their work, for example. GitHub’s Copilot feature, built on OpenAI’s models, helps engineers write code. There’s a Figma plugin that will mock up a website based on a text description. But it’s obvious that there are military and intelligence applications for this technology, too.
If I were a science fiction writer — and at night, I am! — I would ask myself what I could create if the corpus was everything. If an AI algorithm was fed every decision made by every person in the world — our movements via our cellphones, our intentions via our searches, our actions via our purchases and interactions — what might it be able to say about us? Could it predict what we would do next? Could it determine how to influence us to take certain actions?
Yes — but “yes” wouldn’t make for a particularly compelling story in itself. Instead, I’d want to drill a level deeper and remind myself that any technology is a reflection of the people who built it. So even if all those datapoints were loaded into the…