The AI Drama that Intrigued the World

“What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?”

Concluding sentence of Homo Deus (2016) by Yuval Noah Harari (b.1976)

Open Confusion

It could have been a drama written by a chatbot – strange, unexpected and with a curious, even clunky conclusion. And yet it was devised by humans, and it really happened.

On Friday, 17 November, the board of OpenAI – the stable of boffins who developed the formidable AI chatbot ChatGPT, launched in November last year and with which we have all become familiar this year – fired the company’s co-founder and CEO, Sam Altman.

In case you have managed to get through life without having heard of him, Samuel Harris Altman, 38 years old, a man who dropped out of Stanford University to launch a social network when he was just 19, is considered the Wolfgang Amadeus Mozart of machine learning. He has been described as “the face of AI”. Time magazine earlier this year named him one of the 100 most influential people on the planet.

Altman co-founded OpenAI in 2015, backed by investment from, amongst other luminaries, Elon Musk and Peter Thiel, as well as from corporate backers such as Microsoft, Amazon Web Services and Infosys. Since then, Altman has built OpenAI into the leading powerhouse of AI. Only Google comes close to giving ChatGPT a run for its money with its Bard AI tool, which it describes as “an experiment”.

The sole reason OpenAI’s board gave for sacking Altman was that he had not been “consistently candid” in his representations to them – obfuscatory language that sent millions of geeks worldwide into a lather of speculation as to what was really going on.

Senior management at OpenAI was known to be divided into two opposing tribes: the “boomers” and the “doomers”.

The boomers believe that generative AI has the capacity to solve intractable problems and will start to reap benefits for humanity in curing diseases and arresting climate change relatively soon. They have their feet hard on the accelerator.

The doomers believe that generative AI is inherently dangerous for the future of humanity and should not be developed without rigorous safeguards (or, to use the lingo, “guardrails”). These take the form not only of governmental regulation (which was the subject of the recent Bletchley Park AI Safety Summit hosted by the UK government, at which Altman was present) but also of internal constraints through effective governance. Their feet are poised over the brake.

Altman categorises himself as a boomer. The rumour that circulated was that Altman and the boomers had developed some radical breakthrough in AI technology which they wanted to commercialise straight away – and the board took fright.

Over the ensuing weekend, according to the Financial Times, 747 of OpenAI’s 770 staff – virtually all of its software engineers – signed a letter effectively threatening to walk unless Altman were immediately reinstated. On the Sunday, Microsoft CEO Satya Nadella let it be known that Altman would get a senior position at Microsoft. For a moment it looked as though Microsoft was about to acquire all of OpenAI’s know-how and resources – which, based on a planned sale of employee stock, could be worth $86 billion – for nothing. ChatGPT has built up about 100 million weekly users in its first year, and OpenAI’s technology is already embedded in Microsoft’s core products: the Windows operating system, the global must-have Office suite of applications and the Edge internet browser.

In the event, in a dramatic volte-face on the following Wednesday, the board agreed to take Altman back as CEO (though without a seat on the board) and to reform itself into something resembling the board of a for-profit corporation. Two board members departed, and new ones were appointed: ex-Treasury Secretary and eminent economist Larry Summers and former Twitter chair Bret Taylor. It might be of interest to some that the two departing women, Helen Toner and Tasha McCauley (both of whom have links to the Effective Altruism movement, made famous by Sam Bankman-Fried, the now disgraced founder of the FTX cryptocurrency exchange), were replaced by two men.

It has been widely inferred that this turnaround was brokered under pressure from Microsoft, though its shareholders might have preferred it if the company had simply taken on the entire OpenAI staff. On the other hand, Microsoft and OpenAI have very different corporate cultures, and such a move might have attracted regulatory scrutiny. Microsoft can profit from OpenAI’s technology without owning it outright. Moreover, it would be strategically inept for Microsoft to become entirely dependent on OpenAI’s technology – there are other outfits out there which might, in time, offer superior products. On a visit to London on Thursday (30 November) to announce £2.5bn of new investment in data centres in the UK, Microsoft President Brad Smith said he did not foresee a future in which Microsoft takes control of OpenAI.

In order to understand the role of the board at OpenAI, a little background is required. Back in 2015, OpenAI’s founders established it as a not-for-profit entity, the purpose of which was “to advance digital intelligence in a way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. Originally, the entity was funded by $100 million of seed capital provided by Elon Musk. When Musk tried to take control of OpenAI in 2018 and was rebuffed, he decided to pull out of the venture; he has since set up his own AI powerhouse, xAI, which unveiled its initial product, Grok, last month.

Thereafter, OpenAI needed to raise new capital. In order to attract new investors, Altman set up a subsidiary company called OpenAI Global LLC on a for-profit basis. This is the entity in which Microsoft has, since 2019, reportedly invested or committed to invest up to $13bn. But the new entity continued to be supervised by the board of the not-for-profit OpenAI.

This structure has turned out to be inherently unstable. Moreover, it has been suggested that the board members lacked the experience and technical skills needed to discharge their roles. Investors will only invest in businesses which, at some future point, are likely to generate profits. That means commercialising products that people are prepared to pay for. Independent oversight should be the job of government; and indeed, the US, the EU and the UK are working on this. Just as the Bletchley Park summit was unfolding, the White House issued an executive order on AI. Ultimately, government regulation will involve the creation of new institutions – but whether they will be effective is something I would like to consider another time.

OpenAI was established specifically to develop machine learning which could, in time, simulate or even surpass human intelligence (so-called AGI, or artificial general intelligence). The software was supposed to be “open source”, hence the name – that is, available to all – but since Microsoft started to invest in OpenAI Global that is no longer the case.

This enterprise was always going to raise ethical, social and even political challenges. Even its most optimistic proponents, like Altman, understand that there is a risk that an intelligent machine might not act in the best interest of humans. But developers need to generate revenue in order to secure future investment and are therefore under pressure to commercialise the technology before it has been adequately tested. Therein lies the dilemma.

Onwards and Upwards

Despite the boardroom drama, it was confirmed on Tuesday (28 November) that OpenAI will proceed imminently with its planned sale of employee shares. Existing investors such as venture capital firms Thrive Capital, Sequoia Capital and Khosla Ventures are sticking with the $86 billion valuation.

That valuation would place OpenAI amongst the most valuable private tech companies in the world, ahead of payments platform Stripe ($50 billion) and Fortnite creator Epic Games ($32 billion – part-owned by Tencent); though it would fall well short of the TikTok parent ByteDance ($223.5 billion) and SpaceX, which is expected to command a $150 billion valuation in the near future. (By the way, that would make Elon Musk far and away the world’s richest man.) But a note of caution here: Bloomberg columnist Matt Levine thinks that as long as OpenAI Global is supervised by the parent company’s not-for-profit board it will remain a risky investment.

There is a lot going on in this space. On the very day that Sam Altman was fired, the French telecoms and cloud computing billionaire Xavier Niel launched a French version of OpenAI called Kyutai. This is also a not-for-profit research laboratory which will develop large language models. The venture is backed by French billionaire Rodolphe Saadé, chairman of the CMA CGM shipping line, and the former Google CEO Eric Schmidt. Kyutai joins another French startup in the field, Mistral AI, which is also backed by Niel and Saadé. The idea is that Europe needs its own AI hub and that it should be based in Paris. This, its backers think, is the only way to challenge the crushing dominance of US Big Tech.

Kyutai will be led by six specialists who have worked at DeepMind, Meta and Microsoft. It will be overseen by Yann LeCun, the French computer scientist who is chief AI scientist at Meta, Professor Bernhard Schölkopf of the Max Planck Institute for Intelligent Systems and Professor Yejin Choi of the University of Washington.

Then there is Anthropic. Its founders quit OpenAI in 2020 over concerns about the firm’s commitment to AI safety. Anthropic garnered investment from Google and Amazon, and from Sam Bankman-Fried, but its board members are appointed by an independent trust.

Microsoft had to team up with OpenAI because it was losing the race with Google in the development of large language models. But it is nevertheless well placed to win the race for small language models. Last month, the company announced the release of Orca 2. This is billed as a language model that demonstrates strong reasoning abilities by imitating the step-by-step reasoning methodology of large language models.

Conundrums

Perhaps the biggest problem with AI/AGI is that we don’t really know what it is. What exactly are we (or rather, they) trying to achieve? Are we trying to replicate the human mind or to enhance our understanding? Does anyone really want a “superintelligence”? Does anybody really want to live in a society where, as Elon Musk told Rishi Sunak at the Bletchley Park summit, all jobs would be performed by intelligent machines – even if that were possible? I doubt it.

One of the board members who resigned at OpenAI has said that the enterprise should be closed down entirely as the risks are unquantifiable – better to be safe than sorry. The counter-argument to that, even if you are a doomer, is that if we shy away from the challenge, the Chinese and others will continue to develop chatbots that might be entirely unregulated.

The venture capitalist Marc Andreessen, who is one of Silicon Valley’s most vocal opponents of governmental regulation of AI, has described the people who warn of the existential risks of AI as “a cult”. Yann LeCun recently said that it was “preposterous” to believe that AI could threaten humanity.

And it is true that despite the hype and hysteria, AI in the form of ChatGPT and Bard is currently just an essay-writing machine – basically a browser with a language model attached which can formulate an argument. It is impressive to the extent that it can produce text within seconds that seemingly could have been written by an intelligent, highly educated human. But it does not “think” and does not have “opinions”. The output it provides, in my experience, varies according to how one frames the question. There is no way that it is self-conscious, and it is still debated whether machine consciousness is even theoretically possible. I have discussed here before why there are strong reasons to believe that consciousness can only arise within sentient organisms with nervous systems and high-level brains.
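
For the technically curious, the framing effect is easy to demonstrate. Below is a minimal sketch (an illustration only: it assumes the openai Python package, v1.x, and an API key in the OPENAI_API_KEY environment variable; the two prompts are invented for the example) which puts the same underlying question to the model in two opposite frames:

    # Illustrative sketch: how prompt framing shapes chatbot output.
    # Assumes the openai Python package (v1.x) is installed and that
    # the OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        """Send a single-turn prompt to the chat model and return the reply."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # a widely available model, chosen for the example
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # The same underlying question, framed in two opposite ways:
    print(ask("Make the case for strict government regulation of AI."))
    print(ask("Make the case against strict government regulation of AI."))

Neither answer reflects a “belief”: each is simply the most statistically plausible continuation of the frame it was given.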

That is not to underestimate the transformational and disruptive impact of this technology. The NHS is already using AI to analyse patient records to determine which patients are most at risk of hospital admission this winter. And even in their first year, ChatGPT and Bard have already posed problems for educators. Clearly, it is not acceptable for a student to type in a question and then submit the output from a chatbot as an original essay for their coursework. But should schools and universities ban the use of chatbots – indeed, could they? And how are educators to know whether a coursework submission has been largely written by a machine or by a diligent student?

Come to think of it, how do you know this article wasn’t written by a chatbot? If I made you smile at any point, perhaps you can be confident it was written by me, since chatbots don’t have a sense of humour (yet). But how can you be sure?

Afterword – Losing Our Marbles?

Rishi Sunak’s snub to Greek prime minister Kyriakos Mitsotakis on Tuesday – the cancellation of a scheduled meeting during the Greek premier’s visit to London – was lamentable.

Greece is a player in European politics at a critical moment when we need to improve our relationship with the EU after the bruising experience of Brexit. It is a strategically vital member of NATO and is on the front line of the migrant crisis. It is also a country that many of us love and which we regard as the cradle of European civilisation. The snub was cack-handed.

It is not unreasonable for a leader to want to discuss the return of artefacts which he considers priceless to his national heritage. Athens now has a dedicated museum at the Acropolis where the Parthenon Sculptures (aka the Elgin Marbles) could be displayed in their historic context. In contrast, Rooms 18a and 18b in the British Museum, where the Sculptures are currently displayed, are soulless and could easily be re-purposed. The Sculptures would be better appreciated by visitors in their original home. The idea that they could be “loaned” to the Greeks is impractical. And in an age of digital technology, the physical location of historical artefacts is less important than their being thoroughly catalogued and photographed online.

We should amend the 1983 National Heritage Act so that decisions about collections are made by the museums that curate them, not by governments. So, assuming it were in the gift of the British Museum, here is the deal I would propose to the Greeks: they can have the Parthenon Sculptures back on three conditions.

First, they should avow that the marbles were acquired legally and legitimately by Lord Elgin between 1801 and 1806. He paid good money for them (£30,000 was a fortune in those days) to the Turkish authorities who were then in charge. By removing them, he saved them from possible destruction – the Turks had been using the Acropolis as an armoury. The accusation that the marbles were stolen is a calumny. The British Museum has cared for them well since 1816. Second, all British passport holders should be given free entry to the Acropolis Museum in Athens in perpetuity. Third, the Greeks should pay for the restitution.

The argument that by giving the Parthenon Sculptures back we would open the floodgates to parallel future claims is a potent one. It would set a precedent. Most of the treasures of the British Museum, by their very nature, come from other parts of the world and, in some cases, their acquisition was of questionable legitimacy.

But nothing lasts forever. The British Museum – of which I am a Member – like all museums, is the custodian of objects of historic and cultural interest. Its scholars study those objects to explain the past – and that is good. That said, if those objects might be displayed elsewhere with what an economist would call greater utility – then so be it.

Listed companies cited in this article which merit analysis:

  • Microsoft (NASDAQ:MSFT)
  • Amazon (NASDAQ:AMZN)
  • Infosys (NYSE:INFY)
  • Alphabet Inc. (Google) (NASDAQ:GOOGL)
  • Meta Platforms Inc. (Facebook) (NASDAQ:META)
  • Tencent Holdings Ltd. (HKG:0700)
Victor Hill: Victor is a financial economist, consultant, trainer and writer, with extensive experience in commercial and investment banking and fund management. His career includes stints at JP Morgan, Argyll Investment Management and World Bank IFC.