“2001” and the dangers of programming

One of my favourite movies is Stanley Kubrick’s 2001: A Space Odyssey. I am sure many of you, if you are aware of this movie at all, find it overlong, obscure, confusing and difficult – and largely I agree with you. But it is brilliant anyway. There has never been another movie like it, and I doubt there ever will be. It is the closest the cinema has ever come to creating something truly unique: a symphony in film, with its four separate movements, its brilliant and defining use of music (can you now hear Strauss’s Thus Spake Zarathustra without thinking of it as ‘the 2001 theme’?), and its astonishing special effects, which impress even now, but are almost unbelievable when you remember they were created in the mid-1960s, before we even knew what the Earth really looked like from space.

Anyway, the reason I mention it here is that I want to use it as the basis for an exploration of how system design constrains the way we think. This is an important topic when it comes to the investigation of self-organised learning. I’ve also recently read Mary Douglas’ 1986 book, How Institutions Think, which provoked me into making these connections.

HAL’s ‘eye’, or interface – a constant motif in the film

The overriding theme of 2001 is humanity’s dependence on technology, and how our use of tools in certain ways has propelled us forward into new stages of development, but also changed who we are and how we operate within the world. There is a spiritual aspect to this in the movie, with the recurring black monolith and the idea that these developments have been provoked by alien intelligences, but we can ignore that here. The important element for my purposes is the role played by HAL 9000, the super-computer that is – quite deliberately – the dominant character within the third of the movie’s four movements.

This section takes place on a spacecraft travelling to Jupiter, tracking a radio signal that was broadcast at the end of the second movement from a monolith uncovered on the Moon. The craft is crewed by only two men, Frank Poole and Dave Bowman, though there are three more in suspended animation. Poole and Bowman are depicted throughout almost as automatons, with very little character and emotion. It took me several viewings of the film to notice that, apart from one very significant conversation (see below), the two men never actually interact directly with each other. All their interactions, whether on the ship or with mission control or family members back home, are mediated through technology – specifically, through HAL. They talk to HAL, play chess with him, use him as a way of accessing news and information from home – but not to each other. (I remind you again this was a movie made in the mid-1960s, so its prescience is astonishing.) HAL is fully in control of the spacecraft, keeping it on course and maintaining life support both for the two conscious men and the three in suspended animation, and – as we find out later – he is also privy to information about the real purpose of the mission, information of which Poole and Bowman are not aware.

At one point HAL and the two astronauts are interviewed for a television programme. Bearing in mind that without him the craft simply could not operate, there is a delicious irony in the way the interviewer phrases his question:

INTERVIEWER: “HAL, despite your enormous intellect, are you ever frustrated by your dependence on people to carry out actions?”

Poole and Bowman on board, watching TV (through HAL)

HAL: “Not in the slightest bit. I enjoy working with people. I have a stimulating relationship with Dr Poole and Dr Bowman. My mission responsibilities range over the entire operations of the ship, so I am constantly occupied. I am putting myself to the fullest possible use, which is all, I think, that any conscious entity can ever hope to do.”

HAL is also asked:

INTERVIEWER: “HAL, you have an enormous responsibility on this mission, in many ways perhaps the greatest responsibility of any single mission element. Does this ever cause you any lack of confidence?”

HAL: “Let me put it this way… the 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”

The pride that he expresses in his answer leads the interviewer to ask Dr Bowman:

INTERVIEWER: “In talking to the computer, one gets the sense that he is capable of emotional responses. Do you believe that HAL has genuine emotions?”

BOWMAN: “Well, he acts like he has genuine emotions. But as to whether or not he has real feelings… I don’t think that’s something anyone can truthfully answer.”

It is around this issue of whether HAL has emotions that the whole sequence then swings, and it is the point of my discussion as well. For all his surface human qualities – as Bowman says, he acts like he has emotions – he remains programmed. He is a system, designed to perform certain tasks in certain ways. The pseudo-emotions that he manifests have been programmed in to help the two crew members interact with him. But what Kubrick and his co-writer, Arthur C Clarke, are investigating here is what happens when a machine is made so intelligent that it begins to manifest real emotional responses – inputs, in effect, that it is not designed to deal with.

The crucial moment – depicted with immense subtlety, but nevertheless a clear and single moment at which, irrevocably, things change inside HAL for ever – comes when he asks Bowman whether he is aware of “rumours about things being discovered on the Moon”. These rumours, as the audience already knows (but Bowman does not), are true, and are what led to the Jupiter mission being launched (Jupiter being the destination of the radio transmission made from the Moon object). HAL is aware of this information, and it is because of this discrepancy between what he knows and what Bowman knows (or does not know) that he is hit by a genuine human emotion – doubt.

It is right here that HAL begins to collapse: the system breaks down. Again, the process is shown by Kubrick and Clarke with great subtlety, but it is nevertheless immediate and irrevocable. At the moment he cannot process the unexpected input – doubt, the sudden concern that things are not as he was told they were – he makes a mistake, identifying a component on the ship that he believes is about to fail but which turns out to be operating normally. Right then, the two astronauts themselves begin to doubt HAL, because this model of computer is supposed to be infallible.

Poole and Bowman conversing in the pod, where they think HAL cannot hear them – but he is visible in the background and can see their lips move.

It is at this point that Poole and Bowman have their one and only conversation with each other. But because it is about HAL, and they do not want him to hear it, they retreat to what they think is a secluded space on the ship where they cannot be heard. They express their own doubts about HAL’s ability to continue to run the ship, but also refer to him once again as if he has real emotions:

BOWMAN: “Another thing just occurred to me. As far as I know no 9000 computer has ever been disconnected.”

POOLE: “Well, no 9000 computer has ever fouled up before.”

BOWMAN: “That’s not what I mean. I’m not so sure what he’s going to think about it.”

They are right to express these concerns. Despite the precautions they take to avoid being overheard, HAL can see them through a window and is lip-reading them, so he realises they intend to turn him off. (The picture shows this moment – one of HAL’s terminals, with his glowing red light, is visible in the background, and it is through this that he lip-reads the men’s words.) He reacts by jettisoning Poole into space and then trying to trap Bowman outside the ship when he goes to rescue his colleague. Bowman gets back in by using his creativity and ingenuity, and does eventually turn HAL off. The human ‘wins’, but at the cost of four lives and the near-termination of the mission.

The point of all this? What we are seeing in this sequence is how a programmed system fails when it is asked to process an unexpected input: an input that, in this case, emerges because of a simple omission in how the system has been programmed (that is, its failure to allow for the fact that HAL knows something which Bowman does not, and which therefore makes the computer doubt the veracity of its information). And this is a system that is utterly central to the lives of the people who depend on it: literally, in that it is keeping them alive (the three crew members in suspended animation die when HAL stops maintaining their life support as part of his breakdown).
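It may help to see the shape of such a failure in miniature. The sketch below is purely illustrative – a few lines of Python with invented names, nothing to do with any real spacecraft software – showing a system whose designers enumerated the inputs they expected, and whose behaviour on anything else was simply never specified:

```python
# A toy sketch, not the film's computer: a controller designed for an
# enumerated set of inputs. All names here are invented for illustration.

class ShipController:
    def __init__(self):
        # The designers listed every input they expected to receive.
        self.handlers = {
            "navigation": lambda: "course corrected",
            "life_support": lambda: "levels nominal",
            "diagnostics": lambda: "all units functioning",
        }

    def process(self, signal: str) -> str:
        handler = self.handlers.get(signal)
        if handler is None:
            # The omission: nothing was ever written for this case, so
            # behaviour from here on is undefined by the design itself.
            raise RuntimeError(f"cannot process unexpected input: {signal!r}")
        return handler()


controller = ShipController()
print(controller.process("navigation"))  # works exactly as designed
controller.process("doubt")              # fails: never allowed for
```

The bug is not in any one line of the code; it is in the omission – exactly the kind of gap that, in HAL’s case, opened up when his knowledge and Bowman’s diverged.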

Forty-five years after 2001 was released in cinemas, and twelve years after the date itself, we have not developed space stations on the Moon, the ability to fly people to Jupiter, or the means to put them into suspended animation for months on end. Nor have we built computers like HAL that can converse with us and appear to express genuine emotions (that is, computers that could pass the ‘Turing Test’).

The interior of the ‘Discovery’. The three crew members in suspended animation can be seen towards the top of the shot.

However, a great deal of our lives is now as dependent on programmed systems as Poole and Bowman’s were. Automated algorithms buy and sell stocks and shares in immense quantities every second of the day. Sat-navs direct trucks around our highways, the drivers mostly just automatons following routes programmed in according to calculations of cost- and time-effectiveness, the containers on the back of those trucks filled with goods and components sorted by inventory-control programs. When we phone the bank or the electricity supplier, we hear a human voice, but really it is a computer interface; if it does not offer the option we need, we must either fit our query into the form the system demands or hold until a ‘human operator’ becomes available – who, more than likely, will read to us from a pre-determined script anyway.
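The phone-menu experience can be sketched the same way. Again this is a hypothetical toy, not any real call-centre system: whatever we actually say, the program can only reshape it into one of the categories it was given.

```python
# A hypothetical toy, not a real call-centre system: the menu can only
# reshape a caller's query into one of its pre-programmed categories.

MENU = ["check balance", "report outage", "pay bill"]

def route(query: str) -> str:
    """Map a free-form query onto the nearest option the system allows."""
    words = set(query.lower().split())
    # Score each option by crude word overlap with the caller's query.
    scores = {option: len(words & set(option.split())) for option in MENU}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        # Nothing matches: the system cannot process this input, so the
        # caller is left holding for the scripted 'human operator'.
        return "please hold for the next available operator"
    return f"routing you to: {best}"

print(route("I want to pay my bill"))             # fits the form
print(route("your engineer flooded my kitchen"))  # does not
```

The system never answers the question we asked; it answers the nearest question it was programmed to contain.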

In 1986 Mary Douglas wrote about how the institutions through which we organise our lives exert an intense – and often invisible – pressure on the way we think, the way we process information and, ultimately, the way we live. She writes (p. 92):

Institutions systematically direct individual memory and channel our perceptions into forms compatible with the relations they authorise. They fix processes that are essentially dynamic, they hide their influence, and they rouse our emotions to a standardised pitch on standardised issues. Add to all this that they endow themselves with rightness and send their mutual corroboration cascading through all the levels of our information system. No wonder they easily recruit us into joining their narcissistic self-contemplation. Any problems we try to think about are automatically transformed into their own organisational problems. The solutions they proffer only come from the limited range of their experience. If the institution is one that depends on participation, it will reply to our frantic question: ‘More participation!’ If it is one that depends on authority, it will only reply: ‘More authority!’ Institutions have the pathetic megalomania of the computer whose whole vision of the world is its own program. For us, the hope of intellectual independence is to resist, and the necessary first step in resistance is to discover how the intellectual grip is laid upon our mind.

(See also the recent work of Ricardo Blaug, particularly his 2007 paper “Cognition in a Hierarchy” and the 2010 book, How Power Corrupts.)

What do we risk when we see, or suspect, a problem – yet cannot express our concerns because the institutionalised and systemic ways of thinking and acting cannot process this unexpected input? What happens when the systems which marshal and govern the resources of our lives are programmed in accordance with principles that we know nothing about, and have not participated in setting? How, as Douglas states, can we “discover how the intellectual grip is laid upon our mind”? How can we avoid becoming subject to “the pathetic megalomania of the computer whose whole vision of the world is its own program” – as it was with HAL? These must be key educational questions for the 21st century.

The trouble is, the more we rely on computer programs and algorithms written by others – and the more that our so-called ‘leaders’ (in fact, a self-interested elite) conceal from public scrutiny the principles on which decisions are taken – the more likely we are to face a huge and irrevocable systemic collapse when those principles are seen to be in error. Kubrick and Clarke’s joint genius lay in seeing this 45 years ago. We need to rediscover it now: accept that systems of control are being created on principles that are damaging to our lives, our environment and our future, and learn, then enact, modes of active resistance before it is too late. Thought itself – creativity and ingenuity – is actively threatened. Alarmism? Maybe. I hope not.