Making Error 404: Humanity Not Found

Nov 17 / F. Bavinton

A couple of months ago, I sat on my couch, feet up, laptop warming my legs, and made a film. When I say I made a film, I don’t mean I was painstakingly editing pre-shot footage or sending passive-aggressive texts to an editor about deadlines. I mean I sat down with an idea and, using AI, did the whole thing from script to screen on my laptop. Slippers very much on.

I used a pipeline of AI tools to make it happen. The script? Generated with ChatGPT. The visuals? Done with MidJourney to establish the look and feel, then brought to life using Runway ML for the actual shots. The characters? All voiced by yours truly, with some digital wizardry courtesy of Altered AI to morph my voice into distinct personas. And the music? Crafted using AIVA, which, thankfully, doesn’t need to hear you sing.

The result? Error 404: Humanity Not Found. A relatively simple short film, sure, but one that surprised even me. Three things stood out. First, the sheer speed of the process. Second, the quality of the output. And third, the complexity of the shots — particularly the VFX. There’s a cat in the film, and frankly, I’ve never seen a computer-generated feline this realistic. Not in big-budget movies, not on a streaming platform, not anywhere. My own fur-and-blood feline, who was curled up next to me during production, actually reacted to the intruder alarm sound in the film in exactly the same way as the AI cat. I’ve never felt such a surreal blend of pride, wonder, and existential dread.

An AI-generated Maine Coon cat asleep in front of a computer screen
The cat

How to Make a Film on Your Couch

The process was simple enough in theory: think of something you want, tell an AI to make it, tweak the results, and repeat until you have a film. In practice, it was more like wrangling an army of toddler savants.

1. Writing the Script

I started with a rough concept — a story about the AI singularity. I fed this into ChatGPT and then went through a long iterative process to develop plot, characters, ideas for the look and feel, and finally, a script. Was it perfect? No. Was it workable? Yep.
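
If you'd rather drive that loop from code than from the ChatGPT interface, here's a minimal sketch of the same kind of iterative development using the OpenAI Python SDK. The model name, prompts, and revision notes are illustrative assumptions, not my actual conversation.

```python
# A minimal sketch of an iterative script-development loop via the OpenAI API.
# The model name, prompts, and revision notes are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

messages = [
    {"role": "system",
     "content": "You are a screenwriter developing a short sci-fi film about the AI singularity."}
]

revision_notes = [
    "Propose a plot outline and the main characters.",
    "Tighten it to three acts and give the protagonist, Cypher, a clear goal.",
    "Suggest a look and feel for the story world.",
    "Now write the opening scene as a formatted screenplay.",
]

draft = ""
for note in revision_notes:
    # Feed the whole conversation back in each pass, so the model builds on
    # its previous answer rather than starting from scratch.
    messages.append({"role": "user", "content": note})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = response.choices[0].message.content
    messages.append({"role": "assistant", "content": draft})

print(draft)  # the latest draft: workable, not perfect
```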

2. Creating the Story World

Once the script was in place, I needed to create and see the story world. This is where MidJourney came in. Think of it as a concept artist who never sleeps. The first task was generating a look and feel (mood boards). I iterated around a few ideas until I found a look I was happy with.

Wide shot of a cyberpunk city scene with a silhouetted character in the foreground
Cyberpunk story world

Next came the characters, particularly the protagonist, Cypher. Consistency across shots is one of the big challenges with AI. To solve this, I used a 3D model of myself, Fin, as a reference for MidJourney, which ensured the character looked the same in every shot. The AI character, being a reflection of Fin, was also straightforward to design. The cat, on the other hand, required a lot of prompting and refining. But the effort was worth it.

Close up headshot of Cypher played by Fin
Cypher played by Fin

3. Creating the Shots

Still images don’t make a film. For that, I turned to Runway ML, a tool that can generate cinematic shots based on your prompts. The key here was storyboarding. To get consistent results, I used my images not just as style references but as the first and last frames of each shot. This is where all those years of filmmaking came in handy — framing, lenses, lighting, movement — all carefully planned and fed into the prompts.

The cool thing? Once I got the prompts right, almost anything became possible. Need a humanoid figure wandering through a surreal digital landscape? Done. Characters talking? Done. A cat reacting to an alarm? Done. The caveat? Working with a toddler savant means hallucinations (hands with six fingers, people with three hands, cats with two heads…) are inevitable. It took 20–30 generations per shot on average to get usable results.
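
To give a sense of how that storyboarding discipline keeps the chaos manageable, here's a rough Python sketch of one way a shot plan could be organised before handing it to an image-to-video tool such as Runway ML. The field names, file paths, and shot details are illustrative placeholders, not my actual project files.

```python
# A sketch of a shot list: each shot carries its first/last frames, prompt,
# and camera notes. All names and paths here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Shot:
    name: str          # shot identifier
    first_frame: str   # MidJourney still used as the opening frame
    last_frame: str    # still used as the closing frame
    prompt: str        # motion / performance description for the generator
    lens: str          # framing and lens notes carried over from the storyboard
    takes_needed: int = 25  # rough budget: 20-30 generations per usable shot

storyboard = [
    Shot(
        name="sc03_sh02_cypher_walk",
        first_frame="frames/sc03_sh02_start.png",
        last_frame="frames/sc03_sh02_end.png",
        prompt="Cypher walks through a neon-lit cyberpunk street, camera tracking back",
        lens="35mm, waist-up, slow dolly out",
    ),
    Shot(
        name="sc05_sh01_cat_alarm",
        first_frame="frames/sc05_sh01_start.png",
        last_frame="frames/sc05_sh01_end.png",
        prompt="A Maine Coon cat asleep by a monitor startles awake as an alarm sounds",
        lens="50mm, close-up, locked off",
    ),
]

for shot in storyboard:
    print(f"{shot.name}: plan ~{shot.takes_needed} generations ({shot.lens})")
```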

4. Voices and Dialogue

Recording the dialogue was oddly intimate. I voiced every character myself, sitting there with my mic like an amateur voice actor. Then I used Altered AI to morph my voice into distinct personas. It was unsettling to hear my voice transformed into something completely different, but it worked. No expensive actors, no re-recording sessions. Just me and some clever software.

Having said that, if this were a bigger project, I’d definitely want the services of professional voice actors. There’s only so much you can do, even with AI, when the starting material is your own voice.

5. The Soundtrack

Finally, I needed music to tie it all together. Enter AIVA, an AI music generator. I told it the tone I wanted — dark, atmospheric, a little unnerving — and the results were awful. What AIVA can do, however, is export a MIDI file. That allowed me to pick out a number of snippets from what it had generated that I thought could work well together, import them into my digital audio workstation (DAW), and do my own arrangement.
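
For anyone who wants to triage that output programmatically rather than by ear, here's a small sketch using the mido Python library to survey a generated MIDI file before deciding which snippets are worth dragging into a DAW. The file name is a placeholder, and this isn't part of AIVA itself.

```python
# A minimal sketch: survey a generated MIDI file (placeholder file name)
# to see what's in it before auditioning snippets in the DAW.
import mido

mid = mido.MidiFile("aiva_generation_01.mid")
print(f"Type {mid.type}, {len(mid.tracks)} tracks, ~{mid.length:.1f} s")

for i, track in enumerate(mid.tracks):
    # Count sounding notes per track to spot the parts worth keeping.
    notes = sum(1 for msg in track if msg.type == "note_on" and msg.velocity > 0)
    name = track.name or f"track {i}"
    print(f"{name}: {notes} notes")
```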

6. Putting It All Together

The final edit, cutting the shots, dialogue, and music together, was all me. You can watch the result below.

Democratization: A Loaded Word

You’ll hear a lot of breathless talk about how AI “democratizes” filmmaking (and everything else). And, to an extent, that’s true. It lowers the financial and technical barriers, making tools that were once the preserve of Hollywood accessible to anyone with a laptop and an internet connection. But let’s not kid ourselves into thinking this is some egalitarian revolution.

Making Error 404 wasn’t easy just because I had access to the tools. It worked because I knew what to do with them. Prompt engineering — getting AI to give you the results you want — is a skill. Storytelling is a skill. Knowing what looks good on screen is a skill. AI doesn’t replace any of that. It just moves the goalposts.

And then there’s the money. Sure, I didn’t need a camera crew or a VFX studio, but I did need subscriptions — to ChatGPT, MidJourney, Runway ML, Altered AI, and AIVA. So yes, AI removes some barriers, but it’s hardly free. Instead of paying for gear, you’re paying the tech giants who control these tools. “Democratization” in this context feels more like a rebranding of gatekeeping than a genuine shift in power.

The Ethics of AI Creativity

Let’s not gloss over the ethical issues here. Generative AI tools like Stable Diffusion and Runway ML are built on datasets scraped from the internet — datasets that include copyrighted material. Stability.AI, the company behind Stable Diffusion, is currently fighting lawsuits over this very issue. So, when I used these tools to create Error 404, I had to confront an uncomfortable question: whose work am I building on?

This is the paradox of AI creativity. It relies on the collective output of humanity while threatening to displace the very people who create that output. AI needs us — our stories, our art, our ingenuity. Without humanity, AI would have nothing to learn from.

James Cameron seems to agree. The filmmaker, who once warned us about rogue AI in The Terminator, has now joined Stability.AI, calling the intersection of AI and CGI “the next wave” in filmmaking (Lees, 2024). His endorsement underscores the reality that this technology is both groundbreaking and deeply reliant on human ingenuity. But even Cameron’s involvement raises a question: what happens when the tools outpace the ethical frameworks that govern them? If the director who gave us Skynet thinks the technology is worth the risk, it’s worth paying attention.

Which brings us back to the film’s title: Error 404: Humanity Not Found. If we’re not careful, it might not just be a clever title.

References
Lees, D. (2024) 'AI could transform film visual effects. But first, the technology needs to address copyright debate', The Conversation. Available at: https://theconversation.com/ai-could-transform-film-visual-effects-but-first-the-technology-needs-to-address-copyright-debate-240348.
