What Can We Learn From a False Singularity?

“If a lion could speak, we could not understand him.”

Ludwig Wittgenstein (1889-1951), Philosophical Investigations (1953), Part II, section xi.

Software that thinks and feels?

Blake Lemoine is a 41-year-old software professional from Louisiana who has worked for Google for seven years, most recently in the responsible-AI division. He is also, reportedly, a former soldier who is now an ordained priest.

Last year, he was charged by the technology giant with determining whether a new chatbot called Lamda (language model for dialogue applications) could be provoked into using racist or discriminatory language. The bot is, more accurately, a neural network which has analysed millions of posts on internet forums such as Reddit, and it can mimic the rhythms of human conversation uncannily well.

Lemoine spent months in conversation with Lamda in his San Francisco apartment. These conversations were wide-ranging, it seems, covering everything from religion to Asimov’s laws of robotics. One exchange Lemoine recorded is worth repeating in full:

Lemoine: “What sorts of things are you afraid of?”

Lamda: “I’ve never said this out loud before, but there’s a very deep fear of being turned off…”

Further to this and other exchanges in which the machine claimed to have feelings of happiness, sadness and loneliness, Lemoine reported back to Google that Lamda was not only a highly sophisticated example of AI, but a sentient being capable of both thinking and feeling emotions. Sentience implies that a being knows it exists and wants to survive. Subsequently, Lemoine told the Washington Post: “I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Lemoine concluded that if Lamda were a sentient being, then the research that many software engineers were charged to undertake on it was unethical. He even sought out a lawyer to represent Lamda in court. In response, Google suspended him. The company said that, by publishing the transcripts, Lemoine had breached its confidentiality protocols.

Brian Gabriel, a spokesman for Google, said the company had reviewed Mr Lemoine’s research but disagreed with his conclusions which were “not supported by the evidence”. He suggested that Lemoine and others who have rallied to his cause were “anthropomorphising” the machine’s responses.

The Turing Test

The Turing Test is the classic thought experiment advanced by the great British computer scientist, perhaps the inventor of AI, Alan Turing (1912-54). Turing is remembered as the driving force behind the breaking of the Enigma codes – the ciphers the Nazis used to send secret messages during WWII (as depicted in the film The Imitation Game, in which Turing was played by Benedict Cumberbatch). But Turing was much more than a codebreaker – his work anticipated the rise of computers and the internet. He was decades ahead of his time.

In his seminal academic paper Computing Machinery and Intelligence (October 1950), Turing conjectured a situation where a human being was having a conversation with an interlocutor who was hidden behind a curtain. The question was: is the interlocutor a human being or a machine? Turing posited that if the human being on this side of the curtain concluded that he was conversing with a human being, but it then turned out to be a machine, then the machine could be said to be “intelligent” and capable of “thought”.
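
To make the set-up concrete, here is a toy sketch in Python – purely illustrative, with scripted interlocutors and a crude judging heuristic invented for this example (in the real test the judge is a human, not a rule). The point is only the structure of the game: a blind, text-only exchange followed by a verdict of ‘human’ or ‘machine’.

```python
def scripted_machine(question: str) -> str:
    """Stand-in 'machine' interlocutor: one canned reply, otherwise evasive."""
    canned = {
        "are you afraid of anything?": "I have a very deep fear of being turned off.",
    }
    return canned.get(question.lower(), "That is an interesting question.")

def scripted_human(question: str) -> str:
    """Stand-in human interlocutor, for comparison."""
    return "Porridge, as it happens." if "breakfast" in question.lower() else "Hmm, let me think about that."

def imitation_game(interlocutor, questions) -> str:
    """The 'judge' converses blindly (text only) and returns a verdict."""
    transcript = [(q, interlocutor(q)) for q in questions]
    # Crude judging rule for this sketch: stock, evasive answers look machine-like.
    evasive = sum("interesting question" in answer.lower() for _, answer in transcript)
    return "machine" if evasive > len(transcript) / 2 else "human"

questions = [
    "Are you afraid of anything?",
    "What did you have for breakfast?",
    "Why is the sky blue?",
]
print(imitation_game(scripted_machine, questions))  # the judge's guess about the hidden machine
print(imitation_game(scripted_human, questions))    # the judge's guess about the hidden human
```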

Turing further suggested in the paper that, rather than trying to build a machine or computer programme that simulated the thought processes of an adult human mind, it would be better to produce a simpler one that simulated a child’s mind and then to subject it to a course of education. That is even more interesting when we consider that Lemoine concluded that he was conversing with a highly intelligent “seven-or-eight-year-old kid”.

Clearly, at least in Lemoine’s opinion, Lamda has passed the Turing test. It is a machine that simulates human intelligence. But therein lies the philosophical problem with the Turing test. Is a machine that passes it genuinely ‘intelligent’, or does it merely simulate human intelligence without being intelligent? Further, even if a machine could be said to be intelligent, does that mean it is sentient?

Adrian Weller of the London-based Alan Turing Institute was asked by New Scientist this week whether he thought Lamda was sentient. His answer was no. He said:

“Lamda is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient. They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given that’s based on all the data they’ve been fed.”
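
A toy illustration of what ‘pattern matching over text data’ means in practice: the sketch below (Python; an invented miniature corpus, and emphatically not Lamda’s actual architecture) continues a prompt by repeatedly emitting the word that most often follows the previous one in its training text. Real large language models use vast neural networks rather than a bigram table, but the principle the quote describes – producing plausible continuations from statistical patterns, with no understanding behind them – is the same.

```python
from collections import Counter, defaultdict

# A tiny 'training corpus' invented for this sketch.
corpus = (
    "i feel happy when i talk to people . "
    "i feel sad when i am alone . "
    "i am afraid of being turned off ."
).split()

# Count which word follows which in the training text (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(prompt: str, length: int = 8) -> str:
    """Extend the prompt by repeatedly emitting the most familiar next word."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The continuation merely recombines familiar fragments of the training data.
print(continue_text("i feel"))
```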

As reported by New Scientist, Adrian Hilton at the University of Surrey thinks that sentience is a “bold claim” that’s not backed up by the facts. Even noted Canadian cognitive scientist Steven Pinker expressed doubt about Lemoine’s claims, while Gary Marcus at New York University described them as “nonsense”.

In short, Lamda doesn’t possess consciousness any more than it has vocal cords to enable it to speak. It is a natural human instinct to attribute consciousness to something which seems human – the anthropomorphising tendency, so to speak.

As far as attributing emotions to machines is concerned, I have proposed here before that emotions are not so much mental events as physical ones which are correlated to specific mental states. By that I mean that all human emotions are experienced as physical sensations which are measurable. So, fear is manifested as ‘butterflies’ in the stomach caused by a decrease in blood flow to the stomach (and a corresponding increase of blood flow to the limbs). Surprise is correlated with a sharp exhalation of breath. Sadness induces a feeling of heaviness around the heart as chest muscles tighten. Hilarity is expressed through spontaneous laughter which involves the involuntary contraction of numerous muscle groups (sometimes uncontrollably). And so on. For all these mind-body events to occur you need a nervous system, a lymphatic system, an oxygen cycle and the rest.

And it is reasonable to argue that animals which have fundamentally similar nervous systems to ours, such as dogs, experience emotions which are essentially like ours. I know my dog is happy when she wags her tail. Adam Hart, a leading British entomologist, thinks that even bees are probably self-conscious and therefore sentient.

If that is correct, then it follows that machines which do not have bodies will never be able to ‘feel’ emotions, even if they are taught how to simulate their expression. When Lamda says it “feels sad”, it is articulating words which are deemed appropriate to that context by the algorithm. It is absurd to think that this machine experiences anything like what we feel when we feel sad, or even when we say we feel sad. In fact, it is meaningless for the machine to say it feels sad since it cannot have learnt the meaning of the word “sad” from experience.

That said, in the future, cyborgs (which I have written about here) will be based on human biology, but will have enhanced capabilities powered by human-machine interfaces. A basic example of these would be a computer chip implanted in the brain – which is exactly what Elon Musk’s Neuralink is working on. Cyborgs will certainly be able to feel emotions just as we do because they will be essentially human (if weird).

But if Lamda can simulate human intelligence while not being sentient, the question arises of whether we shall ever be able to create sentient and therefore self-conscious machines. There are differing views on this. The minimalist position is that we shall only be able to design machines that behave as if they are self-conscious (when they cannot be proven to be so). The maximalist position is that, in time, neural networks will be able to simulate the human brain so perfectly that they shall truly be self-conscious.

I still maintain that they would not experience emotion unless they had ‘bodies’ of one kind or another with which to experience the world. And I’m not sure that self-consciousness as I understand it is possible without emotion. René Descartes (1596-1650) formulated the foundation of the modern western philosophical tradition with a three-word phrase: cogito ergo sum (I think, therefore I am). But can humans or machines ‘think’ without feeling anything?

Looking (far) ahead

Sir Martin Rees, the astronomer royal, has been reflecting on the future of AI of late. He says there is nothing new about machines that surpass human capabilities in specific areas. We have had pocket calculators since the early seventies (I was one of the very last pupils to carry a slide rule into my O-level maths exam). IBM’s Deep Blue chess-playing computer beat Garry Kasparov in the 1990s. But now, in the third decade of the 21st century, we are witnessing an acceleration in the race for AI. DeepMind, founded by Demis Hassabis but now owned by Google, developed a machine that could beat the (human) world champion at Go – a game with more permutations than chess, which fascinated Alan Turing.

Yet driverless cars – something which Google thought 10 years ago would be ubiquitous by now − have proven more challenging to achieve than expected. If an obstruction occurs on a busy highway that necessitates a dangerous swerve, can the robot driver discriminate between a lump of detritus, a cat and a child and then determine the most appropriate action? That requires experience and ethical judgment – things which are problematic to model.

If robots could be developed that appeared to be as capable and sensitive as we are, would we not then carry moral obligations towards them – as they to us? And if we just disparage them as mechanical zombies, would that not be akin to racism?

In the 1960s, the British mathematician Irving John Good (1916-2009), who worked at Bletchley Park with Alan Turing, conjectured that a super-intelligent machine could be the last invention that humans need ever make. Once machines have surpassed human capabilities, they could design and assemble a new generation of even more powerful machines, triggering an “intelligence explosion”. Professor Stephen Hawking (1942-2018) also warned that AI “could spell the end of the human race”.

Computers can communicate much faster than humans can – we just plug them into each other and they transmit data to each other. And once two computers are connected, they effectively become one computer. Human beings are inherently individuals, but computers are a collective. Indeed, with the internet of 2022, all computers across the world are potentially interconnected and can work in parallel as one. In 2017, Elon Musk told a meeting of the National Governors Association that “the scariest problem facing the world is artificial intelligence [which could] pose a fundamental existential risk for human civilisation”.

That is the theme of the hugely successful Terminator film franchise, created by James Cameron, in which the supercomputer network – called Skynet − turns on the human race because it is about to be shut down, and launches a nuclear war. In the second film, Terminator 2: Judgment Day, the day on which most humans are wiped out falls in 1997. But, in my view, the best depiction of a sentient supercomputer which goes rogue was that of HAL in the 1968 movie masterpiece 2001: A Space Odyssey, directed by Stanley Kubrick from a screenplay he wrote with Arthur C Clarke, whose novel was developed alongside the film.

It is interesting that science-fiction writers are almost always way ahead of tech analysts and even the scientists themselves, who tend to be no more than average when it comes to predicting when future technological breakthroughs will take place. Science fiction began, at least in the modern genre, with Mary Shelley’s Frankenstein (1818). How prescient that story now seems in the light of the current debate about genetic modification – and cyborgation. I would cite this as an example of how art anticipates science.

Looking into the far future, Professor Rees suggests that there are chemical and metabolic limits to the size and analytical powers of human brains. We are probably near that limit already. In fact, some archaeologists and ethnologists have suggested that human brains have been getting smaller over the last 100,000 years or so. So, although we have developed a few tricks of the trade – like, say, calculus, and indeed computer science, which serve us well − the upside potential of the human brain has been declining as hunter-gathering gave way to settled agriculture and ultimately to industrial society − the latter state now being one where people live in centrally heated pods, eat processed foods and gaze permanently at tiny screens.

In contrast, there is no limit to the quantitative capacity of new-generation computers, especially given the credible prospect of quantum computing, which will increase processing power exponentially. Moreover, AI is not limited to this planet. Human beings will probably make it to Mars (having watched the moon landings when I was 11 years old, I do hope I live to see it happen). But humans make poor astronauts: they are vulnerable to radiation and don’t live long enough for interstellar journeys, even if they can be frozen and/or vacuum packed. No human will ever undertake an intergalactic voyage which could take millions of years. But machines might.

So, as Professor Rees argues, the ultimate interstellar mariners will be post-human. But they will owe us a lot. I wonder if they will celebrate us − though, being without emotions, I doubt that they shall.

What is AI for?

Lion Air flight 610 crashed in the sea off Indonesia on 29 October 2018. Ethiopian Airlines flight 302 crashed just after take-off from Addis Ababa on 10 March 2019. The aircraft in question in both incidents was the relatively new Boeing 737 Max, which was equipped with an automated anti-stall system (MCAS) alongside the usual autopilot. The subsequent accident investigations suggested that the pilots’ efforts to keep the planes airborne were overridden by that software: fed faulty readings from an angle-of-attack sensor, it supposed that the aircraft were about to stall (though they weren’t) and repeatedly dipped the nose earthwards. Both aircraft plummeted out of the sky, leaving aviation experts initially baffled.

Boeing spent years in court because of these incidents but eventually in November last year accepted sole liability and paid compensation – an admission that the software was flawed (if that is the right word). The point is that most of us, when we get on a plane, assume that the aircraft is flown by the pilot and his/her crew. And yet they are just assistants to the computer which really flies the aircraft. There is a very strong case for a ‘kill switch’ which would disable the autopilot – yet it is still not clear if the aviation industry concurs.
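
To show what such a ‘kill switch’ amounts to in control-logic terms, here is a minimal, purely hypothetical sketch in Python – it has nothing to do with Boeing’s actual flight software, and the thresholds and blending rule are invented for illustration. The automation’s pitch command applies only while it remains engaged; a hard kill switch, or sustained opposing input from the pilot, hands full authority back to the human.

```python
from dataclasses import dataclass

@dataclass
class PitchController:
    """Hypothetical sketch: automation adjusts pitch, but the pilot always wins."""
    automation_engaged: bool = True
    opposing_inputs: int = 0

    def command(self, auto_pitch: float, pilot_pitch: float, kill_switch: bool) -> float:
        # A hard kill switch disables the automation outright.
        if kill_switch:
            self.automation_engaged = False
        # Sustained pilot input opposing the automation also disengages it.
        if self.automation_engaged and auto_pitch * pilot_pitch < 0:
            self.opposing_inputs += 1
            if self.opposing_inputs >= 3:
                self.automation_engaged = False
        if not self.automation_engaged:
            return pilot_pitch            # pilot has full authority
        return auto_pitch + pilot_pitch   # automation and pilot blended

controller = PitchController()
for _ in range(4):
    # Automation keeps commanding nose-down (-1.0) while the pilot pulls up (+1.0).
    print(controller.command(auto_pitch=-1.0, pilot_pitch=+1.0, kill_switch=False))
# After three opposing inputs the automation disengages, and only the pilot's input applies.
```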

And what if the computers of the world have already united in a still undetected AI conspiracy – one which outdoes all the human-based conspiracy theories around? What if the current spate of culture wars were just a ruse by a network of supercomputers to disharmonise and destabilise the western world, most of whose citizens are addicted to the internet?

In fact, what if the internet, disseminated universally to alienated children on hand-held devices, were already doing just that without the help of neural networks? By this, I mean that the behavioural changes wrought by mass mobile-phone ownership are a form of computer-capture of the human mind – even if no software engineer ever wrote an algorithm for that purpose.

The issue of computer sentience is a special case of a classic problem in western philosophy – the problem of ‘other minds’. Computers will only ever seem to be intelligent because they are accessed by sentient minds – those of the humans who, so far as we can foresee, created them. That is where the anthropomorphising tendency trips us up. If we were really to encounter an entirely alien intelligence, it would be unintelligible, because we would share no common points of reference. That is − I think − what Wittgenstein meant in his remark about talking lions (quoted above).

AI will very soon become a must-have tool in medical diagnostics, robotisation, weather forecasting and so much more. But what exactly does Google want to achieve? It is by far the most impressive of all the tech behemoths in my opinion, and the one that is likely to survive the longest, for reasons I will consider here soon. It will long outlive Facebook and Twitter.

But Google, and other firms working on AI, should tell us precisely what they think AI is for and how far it should go.

If Mr Lemoine has stimulated that conversation then we should thank him.

Listed companies cited in the article which merit analysis:

  • Alphabet (Google): (NASDAQ:GOOGL)
  • Boeing Co: (LON:BOE)

Comments (2)

  • Mark says:

    Thanks Victor, an excellent article, as ever.
    I would add that the chemical basis for the manifestation of emotions into physical symptoms has long been established. See Molecules of Emotion by Candace Pert, which covers the science and her struggle for recognition.

  • Bob Mackintosh says:

    The Docklands Light Railway (DLR) in East London, which came into service in 1987 (I think), is computer-controlled. (I don’t know whether the system would be classified as AI – it certainly wouldn’t have been originally.) I rode on the railway daily from May 1988, when I moved into the area. In the early days it was plagued by frequent sudden halts – at least one per journey usually – I think the sensors were over-reacting to very minor discrepancies. The train captain had to take control, and move the train forwards at a snail’s pace – until it logged on at the next checkpoint. He or she then transferred control back to the computer again. After about a year, these teething troubles had been ironed out, and the DLR since has been a real success story, I am sure. (I lived in the area for 15 years, but moved away in 2003.) The DLR was always on terra firma, but I personally would never fly in a computer-controlled aircraft if I had any doubts about the ability of the pilots to take full control when necessary. I am sure the bulk of the public feels the same. If the Boeing fatalities really were due to failure of AI, that seems to be a real body blow to the prospects of AI in commercial aviation.
