What happens when there is no training data at all?


How does generative AI actually work? Broad probes the inner workings of AI models, and explains why stripping away their mystique matters.

Broad’s explorations of inputless output shed some light on the internal processes of AI, even if his efforts sometimes sound more like an early lobotomist rooting around in the brain with an ice pick than the subtler explorations of, say, psychoanalysis. Broad says it’s critical to reveal how these models work in order to demystify them, at a time when doomers and techno-optimists alike are laboring under visions of an all-powerful, quasi-mystical artificial intelligence. “People think these models are doing more than they are,” Broad says. “It’s only a bunch of matrix multiplications. It’s very easy to get in there and start changing things.”

Talking to him about his process, and reading through his PhD thesis, one of the takeaways is that, even at the highest academic level, people don’t really understand exactly how generative AI works. Compare generative AI tools like Midjourney, with their exclusive emphasis on “prompt engineering,” to something like Photoshop, which allows users to adjust a nearly endless number of settings and elements. We know that if we feed generative AI data, a composite of those inputs will come out the other side, but no one really knows, on a granular level, what’s happening inside the black box. (Some of this is intentional; Broad notes the irony of a company called OpenAI being highly secretive about its models and inputs.)

Broad has deep reservations about the ethics of training generative AI on other people’s work, but his main inspiration for (un)stable equilibrium wasn’t philosophical; it was a crappy job. In 2016, after searching for machine learning work that didn’t involve surveillance, Broad found employment at a firm that ran a network of traffic cameras in the city of Milton Keynes, with an emphasis on data privacy. The job had him training models and managing enormous quantities of images of the most boring city in the UK. “I just got fed up with managing data,” he says. “When I started my art practice, I said: I’m not doing that.”

In 2018, Broad started a PhD in computer science at Goldsmiths, University of London. It was there, he says, that he started grappling with the full implications of his vow of data abstinence. “How could you train an artificial intelligence model without copying data? It took me a long time to realize that it was an oxymoron. A generative model is just a statistical model of data that imitates the data it’s been trained on. So I kind of had to find other ways of framing the question.” Broad soon turned his attention to the generative adversarial network, or GAN, an AI model that was then much in vogue. In a conventional GAN, two neural networks — the discriminator and the generator — train each other. The discriminator learns to tell real training data from fakes, while the generator tries to deceive it with generated data; each time one of them fails, it adjusts its parameters. At the end of this training process, the tug-of-war between discriminator and generator will, theoretically, settle into an equilibrium that lets the GAN produce data on par with the original training set.
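That adversarial tug-of-war can be sketched in a few lines of code. The toy example below is a generic illustration of the setup, not Broad’s actual models: real data is a 1-D Gaussian, the “generator” is a single linear function, and the “discriminator” is a logistic regression, trained with the standard non-saturating generator objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data ~ N(3, 1). Generator: x = w_g * z + b_g with noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w_d * x + b_d), the probability x is "real".
w_g, b_g = 1.0, 0.0   # generator parameters (starts producing samples near 0)
w_d, b_d = 0.0, 0.0   # discriminator parameters
lr, batch, steps = 0.05, 64, 2000

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for _ in range(steps):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b_d += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on the non-saturating loss log D(fake)
    d_fake = sigmoid(w_d * fake + b_d)
    w_g += lr * np.mean((1 - d_fake) * w_d * z)
    b_g += lr * np.mean((1 - d_fake) * w_d)

# At (approximate) equilibrium, generated samples should have drifted
# from their initial mean of 0 toward the real mean of 3.
samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(samples.mean())
```

Real GANs replace these two scalar functions with deep networks and the hand-derived gradients with backpropagation, but the alternating update loop is the same idea.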

When a journalist from Vox contacted Warner Bros. for comment, the studio quickly rescinded the takedown notice — only to reissue it soon after. Broad says the video has been reposted several times, and it always gets taken down again. Curators began to contact him, and the work was exhibited at the Whitney, the Barbican, and Ars Electronica. But anxiety over its murky legal status was crushing. “I remember being on a plane when I went to see the show at the Whitney,” Broad recalls, “worried that Warner Bros. was going to shut it down. I was really afraid of it. I never got sued by Warner Bros., but that still stuck with me. After that, I was like: I want to keep practicing, but I don’t want to be making work that’s just derived off other people’s work without their consent, without paying them. I have not trained a model on anyone else’s data to make my art since.”

In the making-of video, Aronofsky points out that cutting-edge technology has always played an integral role in filmmaking. You would be hard-pressed today to find a film or series produced without powerful digital tools that didn’t exist a few decades ago. In other respects, too, Ancestra plays like a demonstration that machines have become sophisticated enough to create films people would watch in a theater. But the way Aronofsky goes stony-faced and responds “not good” when one of Google’s DeepMind researchers explains that Veo can only generate eight-second clips says a lot about where generative AI is right now, and about Ancestra as a creative endeavor.

With a bit of fine-tuning, Ancestra’s production team was able to combine shots of Corsa and the fake baby to create scenes in which they almost, but not quite, appear to be interacting like two real actors. Look closely at the wider shots and you can see that the mother’s hand seems to hover just above her child, because the baby isn’t really there. The scene moves by quickly enough that the seam doesn’t immediately register, and it’s not quite as fanciful as the film’s shots meant to represent the hole in the baby’s heart.

It’s all very sentimental, but the message about the power of a mother’s love is clichéd, particularly when it’s juxtaposed with what is essentially a montage of computer-generated nature footage. Visually, Ancestra feels like a project trying to prove that the AI slop videos flooding the internet are actually something to be excited about. The film is so lacking in narrative substance, though, that it amounts to a rather weak argument for Hollywood’s rush to get to the slop trough while it’s hot.

One of the biggest lessons of Ancestra’s behind-the-scenes video is how tiny the production team was compared to that of a more traditional short film. Hiring more artists to conceptualize and then craft Ancestra’s visuals would undoubtedly have made the film more expensive and time-consuming to finish, and those are exactly the kinds of hurdles that up-and-coming creatives without unlimited resources struggle to overcome.

Generative AI filmmaking technology doesn’t yet seem capable of producing art that will draw people to cinemas or push them to sign up for yet another streaming service. And it’s important to bear in mind that, at the end of the day, Ancestra is really just an ad meant to drum up hype for Google, which is something none of us should be rushing to do.