From Blade Runner to Reverse AI: Are we already in the future?
- petemitchellauthor
- Apr 18
- 4 min read

One of my favourite movies from the 1980s is Blade Runner, Ridley Scott’s haunting sci-fi classic starring Harrison Ford and Sean Young. Loosely based on Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, the film introduced us to a bleak future where artificial humans—called replicants—walk among us.

Ford’s character, Rick Deckard, is a police bounty hunter employed to hunt down ‘replicants’: bioengineered humanoids designed primarily to do dangerous jobs on space colonies. The replicants became so advanced that they were almost indistinguishable from humans, and the AI driving them had reached a stage where they, dangerously, thought for themselves. This leads several replicants to escape the colonies and return to Earth, where it is Deckard’s job to hunt them down.
A major problem that Deckard faced was determining whether a suspect was indeed a replicant or a human. Hence the Voight-Kampff test was employed. In this test a series of questions is put to the subject while they are strapped to a polygraph-like device. The test measures the suspect’s responses to emotionally charged questions, responses indicative of empathy and emotional processing, which replicants are supposed to lack. But what happens when the AI advances far enough to give the replicants empathy and emotions?
Of course, Blade Runner was fiction, but as we know, science fiction frequently shows us a picture of the future.

The Voight-Kampff test was fictitious, but the idea of using psychological criteria to detect machine intelligence isn’t. In the real world, we have the Turing Test. The test was devised by the brilliant mathematician Alan Turing, whose tragic life was portrayed in the 2014 movie The Imitation Game. Turing is considered by many to be the father of computer science and was instrumental to the team of codebreaking scientists at Bletchley Park during the Second World War.

The Turing test, first proposed in 1950, is a test to determine whether a machine (computer) exhibits behaviour equivalent to that of a human. In the test, a human evaluator judges a ‘conversation’ between a human and a computer. The computer ‘passes’ if the evaluator cannot tell them apart; it is then judged to be ‘thinking’ for itself. In the original paper (Turing, A. M. (1950). "Computing Machinery and Intelligence," Mind, 59, 433–460), Turing suggested that rather than building a program to simulate the adult mind, it would be better to produce a simpler one to simulate a child’s mind and then subject it to a course of education. This is, broadly, what large language models (LLMs) like OpenAI’s ChatGPT do. A reversed form of the Turing test is also widely used on the Internet: the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), which is intended to determine whether the user is a human or a computer.
A news article earlier this week (The Conversation, 13/4/25) announced that ChatGPT had passed the Turing Test. In the test, OpenAI’s GPT-4.5 “was deemed indistinguishable from a human more than 70% of the time.” Various headlines saw this as a pivotal moment. What seems to have been missed by many journalists was to question whether a test devised 75 years ago is still valid. I think the jury is still out on that question, but the fact that OpenAI, the developer of ChatGPT, is now valued at over US $300 billion is a sobering endorsement.
It is difficult to ignore the seemingly rapid advance of AI. With any new technology there are bound to be good and bad sides. What we need to embrace is optimising the good aspects and regulating the bad. That, of course, is easier said than done.
In the meantime, AI is fun to play with. There is currently a craze of AI-generated action figures: the so-called Barbie Box Challenge is a frivolous but trending use of AI.

But my favourite AI experiment so far came from a happy accident—and my invention of “Reverse AI Poetry”. It started during a workshop where real-time speech-to-text software misheard the speakers. What came out was a surreal mashup of misunderstood phrases, which I transformed into a poem.
Misinterpreted Lines
Come down the rabbit hole where moons turn sour.
It’s a maladaptive coping strategy.
I’m not crazy for wanting a refund on the [music] system,
I find myself constantly….
We do not believe in triple timber quadrupoles.
I wouldn’t be twisting it.
Like yeah, get yourself into another state.
Everything that is, as you know -
Dealings mindful of someone else’s brain
But in a moment, it could all change again
The shifting silence of love the way it should be
It’s what it is – wherever it is
I’m not helping them...
But wait, my fun didn’t stop there. I fed the poem into Suno, the music AI that I introduced you to in my blog of 11 March, ‘The Dying Peel: The Impact of Creative AI’ (link). I asked Suno to set the poem to music, with the added instruction to give it a punk, rock, R&B, disco and pop rock feel.
Misinterpreted Lines: The song LINK

Other than a slightly mangled pronunciation of “quadrupole,” I think the result is incredible. I wouldn’t change the station if this came on the radio (or popped into my Spotify suggestions). AI still has a long road to travel, but these playful experiments hint at just how much potential lies ahead. Let me know what you think of the song.
With the current suite of AI products we’re witnessing the first baby steps of a powerful new tool—one that can confuse, amuse, and inspire. I, for one, am excited to see where it leads next.