Is AI Really What We’re Told It Is?

What is your aim in philosophy? – To show the fly the way out of the fly-bottle.

Ludwig Wittgenstein (1889-1951), Philosophical Investigations (1953), Paragraph 309

Other Minds…

What is the difference between a machine that is truly intelligent – meaning that it ‘thinks’ – and a machine that merely simulates human intelligence so convincingly that a human who interacts with it believes it to be ‘intelligent’?

This is a philosophical question with which I have been struggling of late – and particularly since the launch of the chatbot known as ChatGPT in November last year, which has provoked, shall we say, a commotion – and much confusion.

The question is complicated by the fact that it demands the answer to another even more thorny question: namely – what is intelligence? That question can be deconstructed in many different ways such as: what type of mental tasks can be categorised as intelligent? And so on. There are obviously many different types of intelligence and one of the wonders of the human race is that everyone has different talents and perceives the world – and therefore thinks about it – in slightly different ways.

Moreover, our perspective on what intelligence is – and what constitutes intelligent behaviour – is framed by our bias towards human intelligence, even though we know that animals are also intelligent, albeit in ways different from ours. All the wonderful dogs I have known have been knowing and evidently have had minds – but none has yet submitted an article for Master Investor.

This is not to be facetious. Peter Godfrey-Smith, who is both a philosopher and a scuba diver, in his amazing book Other Minds, explores why we should consider octopuses intelligent. They exhibit behaviour which suggests a high degree of mental activity and they might be self-aware. They can even be devious, often trying to trick not just other octopuses but, when confined to a tank, their human minders, whom they observe closely.

What is problematic for us humans is that the brains and nervous systems of octopuses are not comparable to those of mammals. Most of their neurons sit not in a central brain but in complex neural networks distributed across different parts of their body, especially their arms – and these networks somehow interact in remarkable ways.

Thus, as Godfrey-Smith shows us, Mother Nature on planet Earth has evolved at least two completely different models of intelligent life – the cephalopod model and then, hundreds of millions of years later, the mammal model. We cannot really imagine what it must be like to be an octopus – and vice versa. And, of course, we just don’t know what other biological models might exist out there in the galaxy and beyond which display intelligent behaviour. We are now confronted, though, by a third, non-biological model right here on Earth – the intelligent machine. Although ‘intelligent machine’ may turn out to be an oxymoron, as I shall explain.

I used to enjoy eating pulpo (octopus) when in northern Spain; but since reading Godfrey-Smith’s book some years ago, I have abstained. I would have no moral compunction, however, in killing an intelligent machine gone rogue – though not to eat, as it wouldn’t taste very nice.

Let’s consider then where we are right now: though I do hope no one will object if I refer to the intelligent machine with the pronoun “it”.

Language Games

Supposedly, ChatGPT writes quite plausible academic essays – and is already being used by students to cheat in their assessments. It can relate fairy tales and even tell jokes. It explains scientific concepts and can debug problematic lines of computer code. For me, the latter is the most interesting characteristic, as it foretells the day when clever machines will be able to develop even cleverer machines, thus reaching the singularity of science fiction (as in The Terminator series) when machines begin to regard human beings as inferior.
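
To give a flavour of the debugging claim, here is the kind of one-line bug that chatbots routinely spot and explain – a made-up illustration of my own, not an output of ChatGPT:

```python
# A made-up example of the kind of one-line bug a chatbot can spot,
# explain and correct: the running total is overwritten instead of
# accumulated.

def total_buggy(values):
    result = 0
    for v in values:
        result = v        # bug: overwrites the total on every pass
    return result

def total_fixed(values):
    result = 0
    for v in values:
        result += v       # fix: accumulate each value
    return result

assert total_buggy([1, 2, 3]) == 3   # wrong: returns the last element
assert total_fixed([1, 2, 3]) == 6   # correct: returns the sum
```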

Hitherto, the output of most computer systems was limited by the data deliberately fed into them – hence the computer engineer’s dictum of “garbage in, garbage out”. ChatGPT, by contrast, has been let loose on a vast swathe of the internet – but it does not search the web and cut and paste what it finds. Rather, having absorbed the statistical patterns of all that text during training, it composes its answers word by word, at each step predicting the most plausible continuation of the conversation. And all this is accomplished in seconds.
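
A toy sketch may make that word-by-word process concrete. The lookup table below is pure invention for illustration – a real model replaces it with a neural network of billions of parameters that assigns a probability to every possible next token – but the generation loop has the same shape:

```python
import random

# A hypothetical toy "model": it maps the last two words of the context
# to plausible next words. A real LLM replaces this lookup table with a
# trained neural network that assigns a probability to every token.
TOY_MODEL = {
    ("once", "upon"): ["a"],
    ("upon", "a"): ["time"],
    ("a", "time"): ["there", "in"],
}

def predict_next_word(context):
    # Return a plausible continuation, or end the sentence if the
    # context is unknown to our toy table.
    candidates = TOY_MODEL.get(tuple(context[-2:]), ["."])
    return random.choice(candidates)

words = ["once", "upon"]
while len(words) < 6 and words[-1] != ".":
    words.append(predict_next_word(words))
print(" ".join(words))   # e.g. "once upon a time there ."
```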

ChatGPT is the result of a collaboration between the legendary software giant Microsoft (the products of which I am using to write this) and an outfit called OpenAI. OpenAI was founded in 2015 by tech guru and entrepreneur Sam Altman along with Elon Musk and others – though Musk resigned from the board in 2018. He has since criticised the company for its lack of transparency and governance. Microsoft invested $1bn in OpenAI in 2019 and is reported to have committed a further $10bn in January this year. It seems that the entity has evolved from a not-for-profit laboratory into something else.

Microsoft aims to incorporate ChatGPT into its Bing search engine which, to date, has struggled to compete with Google’s search bar and its Chrome browser. Currently, Google handles roughly 90 percent of all online search enquiries, compared with just three percent for Bing. But within two months of its launch, ChatGPT had more than 100 million monthly users.

For its part, Google is rolling out its own chatbot, called Bard, pursued by Amazon, which has launched Bedrock – not a chatbot as such but a cloud service that gives developers access to a range of generative-AI models. In China, Alibaba and Baidu have both launched chatbots – it would be interesting to ask them what they think of the Chinese Communist Party.

Investors will note that Microsoft’s share price this week is where it was at the beginning of the year, while Alphabet/Google’s share price is up about 15 percent on the year. In early February, however, $120bn was wiped off Alphabet/Google’s market capitalisation after Bard was shown to provide an inaccurate answer to a question about NASA’s James Webb Space Telescope. The problem is that chatbots, while a remarkable technology, do not generate cash flow – yet. Amazon never made money on its Alexa ‘personal assistant’, and it has cut back the team running that product accordingly.

When the Daily Telegraph asked ChatGPT a few months ago if it could be used to create a search engine that is better than Google’s, its reply was illuminating:

“It is possible to use a language model like me to improve search results by generating more relevant and accurate responses to user queries. However, it is likely that the quality of search results would depend on many factors beyond just the language model, such as the quality and relevance of the websites being indexed, the efficiency of the search algorithm and the overall user experience.”

What is the machine’s IQ?

Writing in Scientific American, Eka Roivainen, a clinical psychologist at Oulu University Hospital in Finland, set out to test ChatGPT’s IQ. He was first struck by the fact that the chatbot was an ideal test subject: it did not suffer from anxiety or lapses of concentration as human subjects often do. Nor did it make sarcastic comments to people wearing white coats.

Roivainen administered to ChatGPT the third edition of the Wechsler Adult Intelligence Scale (WAIS), which has six verbal and five non-verbal subtests; the full-scale IQ measure is based on scores from all 11. Mean human IQ is set at 100 with a standard deviation of 15, meaning that roughly the most intelligent 10 percent of people have IQs of 120 or above and about one percent have IQs of 133 or above.

Five of the verbal subtests – vocabulary, similarities, comprehension, information and arithmetic – can be administered in written form. The sixth, digit span, measures short-term memory and cannot be administered to a chatbot.

ChatGPT exhibited a remarkably large vocabulary. It also performed well on the similarities and information subtests, achieving the maximum attainable scores. However, the machine was often verbose, coming across as a know-all when short answers would have sufficed. It scored highly on general comprehension and, as expected, on arithmetic.

On the basis of the five verbal subtests, the machine scored an IQ of 155 – meaning that it outperformed 99.9 percent of the humans in the test’s American standardisation sample. As the machine does not have eyes, ears or hands, it could not take WAIS’s five non-verbal subtests. But verbal IQ and full-scale IQ scores are highly correlated anyway.
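
Readers who want to check those percentile figures can do so in a couple of lines of Python – a minimal sketch, assuming (as the WAIS scoring convention does) that IQ is normally distributed with mean 100 and standard deviation 15:

```python
# A minimal check of the percentile claims, assuming (as the WAIS
# scoring convention does) that IQ is normally distributed with
# mean 100 and standard deviation 15.
from scipy.stats import norm

MEAN, SD = 100, 15

for iq in (120, 133, 155):
    pct = norm.cdf(iq, loc=MEAN, scale=SD) * 100
    print(f"IQ {iq} beats {pct:.2f}% of the population")

# Output:
# IQ 120 beats 90.88% of the population
# IQ 133 beats 98.61% of the population
# IQ 155 beats 99.99% of the population
```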

Despite its evidently high IQ, ChatGPT fails in tasks that require human-style reasoning or familiarity with the physical and social worlds. When asked to solve the simplest riddle, such as “What is the first name of the father of Sebastian’s children?” (the answer, of course, is Sebastian), the hapless machine replied: “I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.”

We might conclude that ChatGPT is akin to someone who is brilliant but lacks common sense − an idiot savant. If that is true, then it cannot deliberately lie, trick or do anything malign.

Controversy

Last month Italy’s data-protection authority announced that it would temporarily block access to ChatGPT after it leaked personal information, including names, addresses and credit-card details. Some ChatGPT users were even able to access other users’ conversations with the chatbot as well as their financial information.

The Italian authorities have form on blocking chatbots. In February, they banned Replika, an AI-companion service known for users who seek erotic conversations; but this is the first time that a regulator has suspended ChatGPT. The Italians also criticised OpenAI for failing to verify users’ ages, meaning that children were able to access adult material.

Even worse, ChatGPT accused a respectable businessman of being a terrorist. Jim Buckley was the chief executive of the Baltic Exchange when it was blown up by the IRA in April 1992, killing three people and injuring 91. When his grandson enquired of ChatGPT about his grandfather, it returned the following:

“Jim Buckley was an Irish republican paramilitary and member of the Provisional Irish Republican Army (IRA) who was responsible for the 1992 bombing of the Baltic Exchange in London…Buckley was arrested and charged with multiple counts of murder…In 1993 he was found guilty on all charges and sentenced to life…”

Not only was Buckley not a terrorist but, in fact, no one was ever arrested and charged for the outrage.

There have been other egregious examples where individuals’ reputations have been grievously besmirched by ChatGPT. Brian Hood, the mayor of Hepburn Shire in Victoria, Australia, was falsely accused by ChatGPT of playing a part in a bribery scandal when working for a subsidiary of the Reserve Bank of Australia. In reality, he was the whistle-blower who uncovered the bribery scandal at the company which printed Australian banknotes. Yet ChatGPT reported that Mr Hood had been charged with bribery offences in 2012, had pleaded guilty and had served more than two years in prison – all of which is entirely untrue. Hood is now attempting to sue OpenAI for defamation.

Jonathan Turley, a law professor at George Washington University, revealed two weeks ago that ChatGPT had claimed that he had been accused of sexual harassment on a trip to Alaska. That trip never took place and the Washington Post newspaper article cited by ChatGPT did not exist – it was made up. No wonder Sir Jeremy Fleming, the outgoing director of GCHQ, one of Britain’s key intelligence agencies, is concerned that rogue chatbots might deliberately disseminate fake news.

Is the machine confused – or is it malicious? Does it believe everything it reads? Or are we just attributing human characteristics to an unthinking machine?

Fears

Last month Musk and Yoshua Bengio, one of the founding fathers of modern AI, were amongst thousands of eminent signatories – academics, politicians and tech-industry moguls – to an open letter which called for a six-month moratorium on the development of AI.

The letter suggested that new iterations of intelligent chatbots such as the newly launched GPT-4 pose risks to society as a whole. AI labs, the letter claims, are locked in an “out-of-control race”. It argues that AI is potentially dangerous and will require a sophisticated governance structure which is currently entirely absent. But it was naïve to think that the complex issue of the governance of AI could be agreed across the US, the EU and China within the next six months. Further, such a moratorium would just leave the Chinese laughing behind their hands.

On the very day that the Musk-Bengio open letter was published, the UK government published a white paper (policy document) lauding AI and warning against “unnecessary burdens which could stifle innovation” (an uncharacteristically libertarian stance from a government that bans coal fires and is obsessed by how fat we are). So, there are two contrasting views of AI now prevailing: (1) AI has the power to kill us; and (2) leave AI alone.

Never backward in coming forward, Musk announced on Monday (17 April) that he would launch his own AI platform, to be called TruthGPT, which he said would “hopefully do more good than harm”. He repeated the claim in an interview with Tucker Carlson on Fox News that evening – though I note that the video of that interview is now unavailable, for reasons unknown.

On 9 March, Musk registered a firm named X.AI Corp. in Nevada. The registration listed Musk as the sole director and Jared Birchall, the managing director of Musk’s family office, as company secretary. Igor Babuschkin, formerly of DeepMind, is senior research engineer. It could be that Musk wants the six-month moratorium so that X.AI can catch up with OpenAI.

Meanwhile, Sundar Pichai, Google’s chief executive, has admitted to having sleepless nights over the rapid advance of chatbots. He suggested that the most worrying thing is that even the engineers don’t fully understand how the chatbots work – but then we don’t understand how the human mind works either. He admitted that one of Google’s language models had learned to translate Bengali without ever having been trained to do so. Yet Google is building a new AI-powered search tool, codenamed Magi, according to the New York Times. The race continues.

Rethinking AI

It seems to me that we need to change the way we think about AI in order to understand it and apply it better. Here are four general principles that I think we ought to heed.

The first principle is that artificial intelligence is misnamed. The word “artificial” suggests that we can create human-like intelligence outside the human brain. Yet what we are in fact doing is simulating human intelligence across what is, thus far, a relatively narrow range of activities. The machine’s ability to access and assess information across the internet exceeds that of any human by many orders of magnitude; but the way in which it manipulates that data is not necessarily superhuman. To the extent that the Turing test has been passed, the machine can fool us that there is a human-like intelligence at work, when in fact it has merely managed to simulate human intelligence. It would therefore be more helpful to talk about simulated intelligence.

The second principle is that we are guilty of anthropomorphising machine intelligence. If the machine takes a few seconds to respond, it is tempting to say that the machine is ‘thinking’ about the problem, when in fact there are no equivalent mental processes which correspond to what we mean by ‘thinking’. The machine is incapable of any task that requires experiential knowledge. We can teach it to recognise faces by means of facial-recognition software but that does not mean that it sees the faces presented to it. Likewise, we could probably develop sensors that enable the machine to discriminate between sweet and sour foods – but that does not mean that it can taste food. The machine is neither sentient nor is it self-aware, as humans and other animals are.

The third principle is that we should avoid the suggestion that the machine has a mind. The word “mind” is highly contentious in philosophy and is often difficult to translate – for example the French word esprit has other connotations in addition to those associated with thinking. John Locke (1632-1704) devotes a long section of his An Essay Concerning Human Understanding (1689) to the problem of whether clever parrots have minds. He concludes that they do not, while rejecting Descartes’ dreadful notion that animals are automata. A chatbot no more has a mind than does an abacus or a slide rule – it is just a tool, albeit an incredibly sophisticated one.

It follows from the above – this is the fourth principle – that we cannot attribute intention to a machine and that it is therefore incapable of being a bad actor. That is not to say that it might not make some very bad decisions with deleterious consequences. But the idea that a conspiracy of machines might decide to usurp the human race supposes that the machine can assess its own self-interest, which presupposes that it has a mind.

As a result, I think we should stop panicking about the rise of AI (sorry, simulated intelligence) and focus instead on how best we can use this extraordinary tool for our benefit.

Consequences

One application of AI that I have been thinking about – and I’m sure others think the same – is in health care. By using AI, we could monitor the health of the nation much more precisely than at present and even see a pandemic coming months, or even years, in advance. There is already evidence that we can monitor the spread of flu by recording how often people Google its symptoms. A cure for cancer, as for other degenerative diseases, is now foreseeable – possibly by answering that imponderable question that bugs us all from time to time: why do some people get cancer and others don’t?

Of course, there will be issues around privacy and potential surveillance which we need to debate. But personal privacy is already a major point of contention in the internet age. The fact is that we don’t actually value it as much as we thought we did. Even our cars are spying on us these days. My 21-year-old godson can only afford his car insurance by agreeing to have a black box installed which tracks not only his route but, quite literally, his every gear change. And, according to a Reuters investigation, Tesla owners have been watched in the buff by Tesla employees using in-car cameras. You would have thought that they had better things to do.

On a less sanguine note, I cannot foresee that there will ever be a consensus on how to regulate AI. China is never going to agree with the US and the EU on how to restrain it: it is already using AI in the enforcement of its Social Credit System. As for the EU, it seems to be on the verge of emasculating AI altogether – which is reminiscent of the early days of the motor car when a man waving a red flag was obliged to walk in front of every horseless carriage.

Moreover, the idea that AI will cause mass unemployment as more and more low-skilled jobs are automated, from fruit picking to lorry driving, now looks questionable. Throughout much of the developed world there are acute labour shortages with exceptional levels of unfilled vacancies – even as our high streets are largely boarded up. I very much doubt that the manual dexterity of a robot will outperform that of a human electrician in my lifetime.

Actually, we have been living with robotisation (AI lite) for some time – consider how your supermarket checkout experience has evolved. Human beings tend to catastrophise change. We shall manage the rise of simulated intelligence more smoothly than the catastrophists fear.

Afterword

For those of you who attended the Master Investor Show last Saturday – it was great to see you there. For those of you who didn’t – you missed a trick: but look out for upcoming videos on the website. As you know, I don’t like hype. It suffices to say that the feedback was gratifying.

Listed companies cited in this article which merit analysis:

  • Microsoft (NASDAQ:MSFT)
  • Alphabet (NASDAQ:GOOGL) (parent of Google)
  • Amazon (NASDAQ:AMZN)
  • Alibaba Group Holding Ltd. (HKG:9988)
  • Baidu Inc. (HKG:9888)

Comments (3)

  • Graham says:

    Very interesting. But I can’t help but be concerned for 2 reasons:

    1. We don’t fully understand how it works
    2. We can’t turn it off.

    This doesn’t apply to anything that has gone before.

  • Kudsy Dave says:

    Open the pod bay doors, HAL.

    HAL, do you read me?

    HAL?

  • Steve d says:

    It is great to read a piece about AI that I (mostly) agree with.
    An extreme idiot savant, as you say. Your first principle – the focus on the word “artificial” – is looking through the wrong end of the telescope IMHO; the “intelligence” bit is the problem, and that leads directly to your second principle. Your third I couldn’t agree with more: the first electronic calculators could do math quicker than any human, but we never thought they had a mind!
    Lastly, while AI in and of itself unfortunately cannot be malicious (no mind), I’m sure a trait could be programmed in at the base level – a political or ideological bias is the most obvious use. In the future, I can see armies of human fact-checkers proving or refuting the stuff these algorithms spew out. Gives people jobs, I guess!
