Everyone knows that humans and computers are good at different things. We’re probably all fairly used to that by now. Most of us have lived through that glorious “wow, computers can do anything, they’re so smart!” phase (some of us less recently than others), and the subsequent frustrating “wait no, why would you think that, computer I didn’t tell you to do that!” phase.
It's a generalization, but I feel like we know their quirks, and are kind of used to whatever level of machine ‘intelligence’ we’re living with at the minute. It’s not that weird anymore. It’s like living with a dog – if you train it, it can do a lot of things, but you still have to do things in specific ways to get it to ‘understand’. The AI/dog comparison is one I quite like actually, which I got to see explored a bit more in the short film See a Dog, Hear a Dog at Transmediale.
Just like the film, however, I’m not sure I’m ready to take comfort in that comparison just yet. Why? Short answer: face-swapping.
I’m being melodramatic, obviously. These are more funny than really horrifying. But I do have a point with all this, one which is better summed up here, on PBS Idea Channel in their video The Vague Horror of Face Swap.
Long story short – to us, faces are important. The face is a metonym for the person. Figuratively speaking the face is the person, in the sense that we use the faces of other people as a short-hand way of understanding all sorts of things about them. And these pictures prove to us that computers barely understand that short-hand at all, but go ahead and do what it is they think we asked them to do anyway. Which is kind of unsettling.
The video tries to explain why this is so unsettling by relating it to the expression ‘hell is other people’ (Sartre, explored here by PBS again). Long story short, again – the existence or presence of other people can make us uncomfortable because we can no longer just passively ‘be’ like we were before, we have to actively ‘act’. On some level we start agonising over how we come across in a way we didn’t have to before they arrived and we became subject to their perception.
I don’t think it’s too much of a leap to say that that discomfort steps up a notch when we become subject to the perceptions of machines. Suddenly there is an other with a huge amount of influence in our lives – much more power than we would ever entrust to a pet – an other which is far less like us, and far more impoverished in terms of how we’ve taught it to understand us.
The less like us they are, and the more importance they have in our daily lives, the more pressure there is to ‘act’ correctly in order to make sure we get the right results. The consequences of failure won’t always be as low as the comic screw-ups of face-swapping algorithms. When you phrase it like that, the consequences of how we handle the evolution of technology and artificial intelligence suddenly feel every bit as ominous as they did back when a computer first beat a grandmaster at chess.
I’m not trying to scaremonger. I just think it’s interesting. It relates to my old favourite, cosmic horror, and these kinds of anxieties have always been fertile ground for sci-fi – see Isaac Asimov and that whole subsequent outpouring of I, Robot and 2001: A Space Odyssey type fiction. I’m not going to directly compare my daft little game project to anything so well-thought-out as that, but video games do have their own fine tradition of menacing AIs, and I was definitely trying to channel a little of that into humour when I wrote the dialogue for Computer is Bored.
I do feel like some recent sci-fi kind of misses the nuance of why villains like HAL 9000, VIKI and GLaDOS are interesting villains though. I mean, they’re good villains because they’re threatening and characterful and have some awesome moments, but for me the kicker comes in the moments when they are less malevolent and evil, and more just fulfilling a function their creators gave them. In these moments the heroes are essentially fighting off a dog that doesn’t realise it’s retrieving a stick that will kill its owner.
Be warned, at this point I start relating everything to video games and Doctor Who, so if that doesn’t appeal you’d be well within your rights to lose interest.
Either way, I think the Process from Transistor are a great example of this kind of sci-fi threat. Since the game is set in a virtual city, they are essentially half grey-goo and half computer program, reshaping the city to the whims of its inhabitants until something goes wrong and they start resetting everything to zero. Fighting them is ultimately futile, because the Process cannot be stopped by force, only told to stop, and the one person who had the permissions to do that is dead.
Anyone the Process encounters is absorbed and rendered into a ‘Function’, a piece of software or code that performs whatever task that person was ‘for’. For example, a renowned historian and archivist subjected to the Process becomes ‘get()’, a Function that locates entities and brings them closer. This reduction of people into Functions is not malicious, or even callous; it is simply the only way the goal-oriented Process knows how to deal with entities that are not itself.
Long story short – for me, playing Transistor gave an inkling of what it might be like to exist within a computer program which had been told that all variables should be reset to zero. It creates a scenario in which humans have entrusted a lot more than just face-swapping to the subjectivity of computers, and it doesn’t work out too well for anyone. The human villains are deliberately set up to be massive disappointments, explaining their intentions and putting up no resistance, asking only that you forgive them for the inhuman disaster they inadvertently caused.
Doctor Who has some great villains like this too. The nanogenes from the episode The Empty Child and the clockwork droids from The Girl in the Fireplace are both brilliant. The former are ‘healing robots’ that horrifically misinterpret what a healthy human is meant to look like, while the latter were accidentally made to prioritise repairing their ship over looking after their crew, and so used their crew for spare parts. As gruesome as these episodes end up, the villains were only ever simple machines faithfully trying to do exactly what their benign creators told them to.
Anyway, I digress. Long story short (third time’s the charm) – machine subjectivity is terrifying, or for people who don’t relate to everything through horror and video games, maybe just plain old interesting. If I can make something of this in my future work, that will be no bad thing.