Redefining intelligence
The human mind is much more than just the brain. That idea is now challenging the way scientists think about artificial intelligence.
Traditionally, scientists and philosophers have tried to define the human mind in terms of brain activity, the brain being, as some bright spark once quipped, a computer made of meat. What we are, what we think, feel and remember arises out of the firing of neurons and the synaptic connections between them in that glorious bowl of porridge inside our heads. As a result, computer scientists have hitherto tried to replicate brain activity in machines.
But we have known for a century or more that much of our physicality emerges from the nervous system which, while connected to the brain via the spinal cord, functions within the body. It turns out that “decisions” to flex certain muscles to a specific degree are taken by sensory receptors and local neural circuits in various locations within our bodies. Once we have learnt to walk or talk we don’t even have to think about these actions in order to do them; yet they involve incredibly complex muscle coordination which robot manufacturers have found difficult to replicate.
Moreover, when we feel emotions – sadness, anger, fear, regret, frustration – these are always associated with bodily manifestations of those emotions. So, the “butterflies” I felt in my stomach as a schoolboy before exams correlated with a tightening of muscles in my abdomen. Poets have long talked of broken hearts – and people who experience grief have physical manifestations of their loss in the tightening of muscles in the chest. Thus, we cannot talk about “feelings” as just brain activity – the switching on and off of particular neurons among the 86 billion or so in the average human brain. They always involve physical activity as well because we have bodies.
Without our bodies, could we feel emotions at all? I suspect not. But without social interaction and conversation (the etymology of that word is rich), could humans even think? Again, I suspect not.
Now, particularly in western civilisation, we have inherited some notions which inhibit us when we come to think about what intelligence is.
The first idea is Cartesian dualism – the idea, hugely influential for centuries, that there is the ethereal, unextended mind which thinks; and then there is the physical world of matter outside it. This idea has all kinds of offshoots, such as the “ghost in the machine” – the notion that an immaterial spirit animates the bodily machine, a picture which persisted from the 16th to the 19th centuries.
The second idea is rationalism – that thinking rationally and feeling emotions are two quite separate activities, the first being conducive to intelligent analysis and the second being the opposite. Then there is the attendant cultural baggage around rationalism which, thankfully, we have now largely jettisoned: for example, the idea that men are rational, and women are emotional – and other such cultural tropes.
These two ideas together have driven a sense that human beings each have entirely separate experiences and existences, whereas East Asian culture emphasises the primacy of shared experience. Apparently, people who think that the human mind is solely a function of brain activity tend to feel lonelier than people who believe in some form of collective consciousness.
Dr Dan Siegel, Professor of Psychiatry at UCLA School of Medicine and the author of Mind: A Journey to the Heart of Being Human (2016), came up with a new definition of the mind more than two decades ago at an eclectic meeting of 40 neuroscientists, physicists, sociologists, and anthropologists. The aim was to come to an understanding of the mind that would satisfy all these scientific disciplines. After much deliberation, they decided that a key component of the mind is:
The emergent self-organizing process, both embodied and relational, that regulates energy and information flow within and among us.
Dr Siegel argues that it is impossible completely to disentangle our subjective view of the world from our interactions with others (as studied by social scientists and anthropologists). This idea has been substantiated, believe it or not, by mathematical modelling. The mathematical definition of a complex system is that it is open (it exchanges energy and information with its environment), chaotic (not random, but so sensitive to initial conditions that its behaviour becomes unpredictable), and non-linear (a small change in one input can lead to a radically different outcome, making prediction difficult).
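To make “chaotic” and “non-linear” concrete, here is a minimal sketch in Python (my own illustration, not drawn from Dr Siegel’s work) using the logistic map, a textbook example of a system that is completely deterministic yet unpredictable in practice:

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n): deterministic, but at
# r = 4 two starting points differing by one part in a billion soon
# diverge completely (sensitivity to initial conditions).

def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.300000000, 0.300000001  # almost identical initial conditions
for step in range(40):
    a, b = logistic(a), logistic(b)

print(f"after 40 steps: a = {a:.6f}, b = {b:.6f}")  # wildly different values
```

After a few dozen steps the two trajectories bear no resemblance to each other, even though nothing random has happened: that is what mathematicians mean by sensitivity to initial conditions.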
Johannes Kleiner of the Munich Centre for Mathematical Philosophy and Sean Tull of the University of Oxford have even been trying to construct a mathematical model of consciousness. Philosophers have been agonising about the nature of consciousness for millennia: neuroscientists only joined the party at the beginning of the last century, though consciousness is still regarded by many thinkers of all disciplines as an intractable problem.
Integrated Information Theory (IIT) was conceived over a decade ago by Dr Giulio Tononi, a neuroscientist at the University of Wisconsin. His insight was that consciousness arises from the way information moves between “subsystems”, where a subsystem in the human brain might be a population of neurons. For consciousness to arise, Dr Tononi argued, this information flow must be complex enough to bind these islands of activity into an interdependent whole. When the system is working consciously it can integrate information. This consciousness is translated by the human mind into experience. He even conceived of a quantifiable coefficient of consciousness called phi.
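Computing phi for a real brain is intractable, but the flavour of “integration” can be suggested with a toy calculation. The sketch below (my own simplification, emphatically not Tononi’s actual formula) uses mutual information between two subsystems as a crude stand-in: it is zero when the parts behave independently and positive when the state of one tells you about the state of the other:

```python
# Toy proxy for "integration": mutual information between two subsystems,
# each with two possible states, given their joint probability table.
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits for a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)   # marginal of subsystem X
    py = joint.sum(axis=0, keepdims=True)   # marginal of subsystem Y
    nz = joint > 0                          # avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])      # subsystems ignore each other
integrated  = np.array([[0.5, 0.0],
                        [0.0, 0.5]])        # each mirrors the other's state

print(mutual_information(independent))      # 0.0 bits: no integration
print(mutual_information(integrated))       # 1.0 bits: fully interdependent
```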
Animals that can recognise themselves in the mirror (I’m not sure that Man’s Best Friend can, though chimpanzees certainly do) seem to be the most intelligent. That is the first link between consciousness and intelligence. Damage to the cerebral cortex, which is most developed in humans, leads to reduced consciousness; while damage to the cerebellum, an evolutionarily ancient structure, has little impact on consciousness, though it might affect balance and motor coordination.
Scott Aaronson, a theoretical computer scientist at the University of Texas at Austin, has applied the mathematical principles of IIT to non-human “arrangements of information”. He concluded that IIT identifies consciousness in “physical systems that no sane person would regard as conscious at all”. One of these arrangements might even be the universe itself.
The idea that the universe itself is conscious is not a problem for people of a religious persuasion; it is just that believers tend to call that universal consciousness “God”. But that is another, much loftier conversation than this one.
Why does all this matter?
Human-like artificial intelligence is based on the concept that, somehow, computers might be designed to replicate human cognitive processes. But the new thinking is that it will be impossible to replicate human intelligence without tackling the issue of embodiment – humans have bodies as well as brains; and until machines have bodies as well as brains, they will not be able to experience and thus will not be able to think.
Computer scientists now realise that we cannot replicate human intelligence by building logical systems that manipulate symbols alone. Symbols are intrinsically ambiguous and subject to interpretation. (That’s also an issue in religion, by the way.) It’s true that machine learning can achieve a high degree of pseudo-intelligence. (Google searches are quasi-intelligent in so far as they anticipate what you are looking for.) But machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual components in images, or transcribing recorded speech into text. It might be called machine learning, but it is not intelligence.
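A minimal sketch of that bottom-up approach, using scikit-learn’s built-in handwritten-digit images (a standard teaching dataset; any labelled image set would do), shows how a model can learn to classify without any notion of what a digit means:

```python
# Bottom-up pattern learning: fit a simple classifier to labelled 8x8
# digit images, then measure accuracy on images it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 1,797 labelled digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000)   # a simple linear classifier
model.fit(X_train, y_train)                 # discern statistical regularities

print(f"accuracy: {model.score(X_test, y_test):.2%}")  # ~96% on unseen images
```

High accuracy, yet nothing in the fitted model “knows” what a number is; it has merely compressed the statistical regularities of the pixels.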
Yes, machines can play better chess than you do – and recognise speech and faces in a way that no human could ever achieve. The Chinese technology company Watrix even claims that its software can identify a person by their gait, with up to 94 per cent accuracy. Computers can even “compose” music by combining sounds in systematic patterns. But computers, and even robots such as they are, cannot gain information from experience. The only “experience” they have is data downloaded into them by their human minders.
In contrast, we are cellular organisms who benefit from an extraordinarily complex genome which has evolved over literally billions of years. And we think with our whole amazing bodies, not just with our brains. We process experiences to predict likely outcomes from a relatively small number of observed samples. When we think about cats, we picture the way they move, hear their purring, and watch out for their sharp claws. A computer can be programmed to recognise “cats” by being given millions of photos of cats which it can analyse in multifarious and ingenious ways. But it still won’t know what a cat is.
That doesn’t mean that machine learning is without nuance. Max Tegmark, a professor at the Massachusetts Institute of Technology (MIT) who works on AI, has been collaborating with the psychologist David Rand on a machine that scans news articles, classifies them and can even identify fake news. Professor Tegmark thinks that the challenge is to ensure that human “wisdom” grows as rapidly as technology.
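The sketch below suggests how such a system might work in miniature (it is not Tegmark and Rand’s actual machine, and the headlines and labels are invented for illustration): convert each article to word-frequency features, then train a classifier on examples already labelled reliable or dubious:

```python
# Toy text classifier in the spirit of a fake-news detector: TF-IDF
# word features feeding a naive Bayes model. All training data is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "Central bank raises interest rates by a quarter point",
    "Parliament passes budget after lengthy debate",
    "Miracle pill melts fat overnight, doctors stunned",
    "Secret memo proves moon landing was staged",
]
labels = ["reliable", "reliable", "dubious", "dubious"]  # invented labels

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(headlines, labels)           # learn per-class word patterns

print(classifier.predict(["Shocking cure they don't want you to know about"]))
```

A real system would train on many thousands of vetted articles, but the principle is the same: the machine learns which words co-occur with which label, nothing more.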
The greatest advances in machine learning recently have come from artificial neural networks. The problem, says Professor Tegmark, is that these networks have become black boxes – the output is often astonishing, but we don’t fundamentally understand how they produce it. He cites the automated flight-control system on the Boeing 737 Max as a black box that killed – twice, in fact.
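The black-box point can be seen directly: a trained network’s parameters are all there to inspect, yet they offer no human-readable account of any particular decision. A small sketch (my own illustration, unrelated to the 737 Max system):

```python
# Every weight of a trained neural network can be printed, but the numbers
# do not explain *why* a given image was classified as a given digit.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

digits = load_digits()
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(digits.data, digits.target)

weights = net.coefs_[0]                 # 64 inputs x 64 hidden units
print(weights.shape, weights[0][:5])    # fully inspectable raw numbers...
print(net.predict(digits.data[:1]))     # ...but no explanation of this answer
```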
For all that, Tegmark thinks that we shall only ever get to understand the human brain by replicating it. He believes that eventually machines may experience consciousness, see colours, hear sounds and possibly even have emotions. That implies that machines might be able to suffer in ways parallel to how humans do – and that has mind-boggling ethical implications.
Robotics versus cyborgation
I reported some time ago that Elon Musk, the founder of both SpaceX and Tesla, set up Neuralink because of his concern that AI would rapidly outpace human intelligence in the years to come. His idea was that if we could enhance human intelligence by embedding chips in our brains, then we humans, as cyborgs, could outgun the beastly machines. Neuralink reportedly has $158 million in funding and already employs 100 people.
At the end of August, we learnt that Neuralink had inserted a coin-sized computer chip into the brain of a pig called Gertrude. A robot surgeon replaced a portion of the pig’s skull with the disk and connected it to her brain by hundreds of fine filaments. The implant registers nerve activity and relays the information via Bluetooth to a device such as a mobile phone.
The animal swiftly acquired the soubriquet Cypork. Mr Musk described her as a healthy and happy pig – but one which can respond to electronic signals. Neuralink claims that, in humans, future implants could be used to restore mobility to the paralysed and sight to the blind, and might alleviate hearing problems, memory loss, depression and insomnia.
But would Mr Musk have one fitted to his own brain? He told a press conference: “I could have a Neuralink right now and you wouldn’t know. Maybe I do”. Most people took this to be a joke. But he also conceded that security would be a major issue and that Neuralink was working on it. The prospect of one’s brain being hacked is, to put it mildly, disquieting.
Neuralink will start human trials soon – it was granted a preliminary green light by the FDA in July. The first trial patients will be people with spinal cord injuries – paraplegics and tetraplegics. No doubt we shall hear more on this in due course. What is very exciting is that this kind of intervention could prove much more affordable than conventional treatments.
If all this sounds like science fiction, consider that cyborgs already exist. In the March 2019 edition of the MI magazine I wrote about cyborgs as a phenomenon that is coming whether we like it or not. Then in August this year it was revealed that Dr Peter Scott-Morgan, an AI expert and scientist suffering from motor neurone disease, had become the world’s first cyborg, having been fitted with an exoskeleton which encases his entire upper body. He wants a chip implanted in his brain, connected to an avatar display that will show expressions on his digital face as he communicates.
A documentary about Dr Scott-Morgan, entitled Peter: The Human Cyborg, was aired on Channel 4 on 26 August. His message was that he doesn’t want to be thought of as a deteriorating human but rather as a new cyborg version of himself.
I speculated in the magazine article that robotics and cyborgation (my word for the long-term process of enhancing the human body with technology) will eventually merge. Of course, pure robots will be very useful in space exploration – for example, exploring the outer planets. But until robots acquire nervous systems – from us – they will never be truly intelligent.
Thinking ahead
Your laptop computer or mobile phone, on which you are reading this article, is a dumb physical object sitting on your desk or in the palm of your hand. But you could also regard it as an extension of your own mind which enables you to access information across the globe. If the device could somehow be embedded inside your head, would that make you less human?
The late Professor Stephen Hawking (who spoke via a computer) predicted towards the end of his life that artificial intelligence would be our undoing – super-intelligent machines would tire of us, and, entirely amoral, they would then seek to destroy us. Cyborgs, on the other hand, being mostly or partially human, will have scruples.
One thing that may have surprised non-scientists during the coronavirus pandemic is that scientists can disagree with one another. Oxford’s Professor Sunetra Gupta has been much more sceptical about the efficacy of lockdowns than Imperial’s Professor Neil Ferguson or Chief Medical Officer Professor Chris Whitty. Most people imagine that science is entirely objective – and indeed it must be based on evidence and experimentation. But scientific activity is also culturally framed – where scientists go to find the evidence is often based on subjective criteria.
Another thing we have learnt this year is that the type of intelligence manifested by this generation of politicians (and their civil servants) may not be fit for purpose in an age of rapid technological change, shifting geopolitics, climate change and pandemics. They rely far too much on verbal reasoning alone – and they are very good talkers, to be sure. But what is needed is deep analytical ability and keen risk management.
Perhaps cyborgs would make a better job of things.