Schrödinger’s Cat is out of the bag
On 04 August I authored an article on these pages entitled Schrödinger’s Cat is SO Clever – about Quantum Computing (QC). Little did I know that on that very day a secretive group of Google’s scientists would release an academic paper for peer review suggesting that quantum computing, contrary to what I had supposed, is just around the corner. If that is not extraordinary enough, consider that in the same month Japanese scientists started to improve health outcomes for cancer patients by using the healthcare data mining techniques pioneered by Google in the UK. I discussed this in an article in June. Love it or loathe it, Google is on course to be remembered (no doubt by far-distant robots) as the most innovative corporation in the history of mankind. The world is about to be transformed by a unique bunch of geeks with limitless ambition. Scary? Yes, but…
I thought you might like to know…
An academic paper was published on 04 August by a posse of secretive computer über-boffins who work for Alphabet/Google (NASDAQ:GOOGL) entitled Characterizing Quantum Supremacy in Near-Term Devices[i]. And it’s not exactly bedtime reading. In fact, it is a mathematical argument of such mind-boggling technical sophistication that I am, dear reader, utterly incapable of evaluating it. What I can tell you is that the über-boffins claim to have shown (though there may well be counter-boffins who step forward to oppose them) that Quantum Supremacy is within practical reach.
Quantum Supremacy? Don’t panic: I had never heard the term before last week, either. I understand it to mean that, for a certain theoretical size of quantum computer, there are mathematical tasks – including constrained optimisation problems (which have to be solved subject to specific conditions) that could never realistically be solved by classical (that is, non-quantum) computers – which could be solved by QC in a matter of… hours.
As opposed to 900 million years…
Some mathematical problems are highly problematic
An optimisation problem might be: What is the quickest route to drive from MI’s offices in Notting Hill, London to Morningside, Edinburgh? And we might impose some constraints like (a) minimising fuel consumption/carbon emissions; (b) avoiding traffic jams; (c) avoiding toll roads; and/or (d) stopping for a spot of lunch on the way at Le Manoir aux Quat’Saisons, Oxfordshire (and catching up with dear old Raymond)…
In fact, a good satnav (GPS, if you are American or French) could crack this little nut in a few seconds by going through multiple iterations of all possible routes that fit the bill. It would take a human being with quill and paper several years; but a tiny classical microchip can take this in its stride. That is because the number of iterations is – well, just a few million. No problem. But the number of possible combinations grows explosively as the problem gets bigger, and even the cleverest classical microchips soon run into capacity problems.
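To see what the satnav is actually doing, here is a toy sketch of the brute-force approach in Python. The driving times are entirely made up for illustration; the point is that with k intermediate stops there are k! orderings to try, which is why this method collapses as problems grow:

```python
from itertools import permutations

# Hypothetical pairwise driving times in hours -- illustrative figures only.
times = {
    ("London", "Oxford"): 1.5, ("Oxford", "London"): 1.5,
    ("London", "Leeds"): 4.0, ("Leeds", "London"): 4.0,
    ("Oxford", "Leeds"): 3.5, ("Leeds", "Oxford"): 3.5,
    ("Oxford", "Edinburgh"): 7.0, ("Edinburgh", "Oxford"): 7.0,
    ("Leeds", "Edinburgh"): 4.5, ("Edinburgh", "Leeds"): 4.5,
    ("London", "Edinburgh"): 8.0, ("Edinburgh", "London"): 8.0,
}

def best_route(start, end, stops):
    """Brute force: try every possible ordering of the intermediate stops."""
    best = None
    for order in permutations(stops):
        route = (start, *order, end)
        cost = sum(times[(a, b)] for a, b in zip(route, route[1:]))
        if best is None or cost < best[0]:
            best = (cost, route)
    return best

cost, route = best_route("London", "Edinburgh", ["Oxford", "Leeds"])
# With 2 stops there are only 2 orderings; with 20 stops there would be
# over 2.4 quintillion -- the combinatorial explosion described above.
```

Two stops are trivial; the trouble is purely in the factorial growth.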
Constrained optimisation problems which might entail a near-infinity of iterations would include a number of theoretical mathematical problems which are called conjectures. That means that mathematicians suppose or conjecture that a particular theorem will hold universally; but they are unable to prove that the outcome is always correct. An example of this is the Collatz conjecture or “3n+1 problem”, first advanced in 1937.
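The 3n+1 rule itself is simple enough to sketch in a few lines of Python: if a number is even, halve it; if odd, triple it and add one; the conjecture says you always end up at 1. Checking any individual starting value is trivial – the unproved part is that the loop terminates for every positive integer:

```python
def collatz_steps(n):
    """Count the steps for n to reach 1 under the 3n+1 rule
    (assuming, as the conjecture asserts, that it always does)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Verifying individual cases is easy; proving the loop halts for EVERY
# positive integer is the open conjecture -- no one has managed it.
checked = all(collatz_steps(n) >= 0 for n in range(1, 10_000))
```

The famously stubborn starting value 27, for instance, bounces around for 111 steps before reaching 1.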
Fermat’s Last Theorem – advanced back in 1637 – was one such conjecture which is relatively well known. (Basically, that xⁿ + yⁿ = zⁿ has no whole-number solutions for any power n greater than 2 – so, while x² + y² can equal z², x³ + y³ can never equal z³)[ii]. (Never mind). Yet this conjecture was conclusively proved by a human brain: namely that of the British mathematician Sir Andrew Wiles in 1995. However, there are many mathematical conjectures out there which probably can never realistically be verified by humans – even by geniuses like Wiles backed up by conventional computers.
I’m not going to re-hash here the technical intricacies of last month’s article. Suffice to say that at the algorithmic level today’s computers still operate on classical Boolean logic using binary bits that are EITHER ON OR OFF. (Named after the 19th century mathematician George Boole, Boolean logic is a form of algebra in which all values are reduced to either TRUE or FALSE). Quantum computing, on the other hand, is the design of hardware and software that replaces Boolean logic by quantum mechanics at the algorithmic level, using bits that may be on or off – or both or neither…
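The difference can be made concrete. A classical bit is definitely 0 or 1; a qubit carries two amplitudes (a, b) with a² + b² = 1 (complex-valued in general – real numbers suffice for this sketch), and only collapses to 0 or 1 when measured. A minimal illustration, using the standard Hadamard gate to put a qubit into an equal superposition:

```python
import math

def hadamard(state):
    """Apply the Hadamard gate: maps |0> to an equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)                       # a qubit that looks like a classical 0
plus = hadamard(zero)                   # both amplitudes now 1/sqrt(2)
probs = (plus[0] ** 2, plus[1] ** 2)    # measurement: 50/50 chance of 0 or 1
```

Of course, simulating one qubit on a laptop is easy; the state of n qubits needs 2ⁿ amplitudes, which is precisely why classical machines cannot keep up.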
Even in QC, size matters
So exactly how powerful would a quantum computer have to be to outclass a classical computer (even one like Marvin the glum robot in Douglas Adams’ Hitchhiker’s Guide to the Galaxy, with a brain the size of a planet)? Well, the über-boffins have now (apparently) mathematically argued that the QC machine needs to be about 50 qubits’ worth of kit. A qubit – you will no doubt remember from my earlier article – is a quantum computer bit.
(But multiplied by 10 to the power of n, where n is the number of follicles in an über-boffin’s beard. I’m sorry, but the available literature is extremely confusing on this: the manufacturer of Google’s existing computer claims to have produced a 1,000-qubit machine[iii] – as you will see if you visit their website. The likely explanation is that D-Wave’s machines are quantum annealers, whose qubits are not directly comparable with the universal, gate-model qubits the Google paper describes.)
Anyway, I reported last month that in 2013 Google teamed up with NASA to acquire the latest machine from the mysterious D-Wave Systems Inc. (Canada) which is the only private manufacturer (so far as we know) of quantum computers (though whether these really are quantum computers has been questioned). Clearly, these machines are not off-the-shelf; rather they are made-to-order. This machine has been tested out over the last year in a giant freezer (it can only work at a temperature just above absolute zero) in Mountain View, California, just down the road from Google’s HQ.
Now it appears that things have moved on. With this academic paper Google has signalled its intention to build a quantum computer capable of performing a task that no classical computer could match. They want to prove Quantum Supremacy beyond doubt. According to New Scientist[iv] they have decided to build a 50-qubit quantum computer which could be operational as early as the end of 2017. This new machine will then be pitched into gladiatorial computer combat with Edison, one of the most advanced supercomputers in the world, based at the US National Energy Research Scientific Computing Center (NERSC)[v] in Berkeley, California.
In 2014 Google hired Dr John Martinis from the University of California at Santa Barbara to design its own superconducting qubits. His is one of the names on the academic paper mentioned. This suggests that Google is planning to develop its own machine rather than relying on D-Wave Systems’ products. It seems that QC has gone from the theoretical to the engineering stage, with Google at the forefront.
What is Google really up to?
On their fascinating Research at Google webpage they state: “We are particularly interested in applying quantum computing to artificial intelligence and machine learning.”[vi] So this is not just about proving mathematical conjectures.
Healthcare could be the first beneficiary of QC
One application of machine learning is certainly data mining in healthcare. I wrote a piece on this back in June. In the spring of this year Google’s AI outfit DeepMind Technologies signed a contract with the UK National Health Service (NHS). The agreement gave Google/DeepMind access to the anonymised records of 1.6 million NHS patients who have used three famous London hospitals run by the Royal Free London NHS Foundation Trust – Barnet, Chase Farm and the Royal Free itself. DeepMind has also agreed a separate deal with Moorfields Eye Hospital covering around a million anonymised retinal scans – an under-researched field in medicine. The aim is to spot incipient eye disease more quickly than an ophthalmologist could. I speculated that Google is trying to generate algorithms to relate medical histories to medical outcomes. This would have potentially huge implications for diagnostics and treatment, not least in cancer care.
It seems that Japanese medics are already using AI to diagnose rare forms of cancer at the University of Tokyo Hospital. This is a milestone. Medical error is thought to be the third leading cause of death in the USA – and a significant contributor to that unfortunate statistic is misdiagnosis. And, to be fair to doctors, so much new medical data is being generated across the world right now that no individual medic can realistically hope to keep abreast of it all. Machines will do a better job – but only if they can develop the right algorithms.
A team at Stanford University, California, recently developed an algorithm to scrutinise slides of cancerous lung tissue[vii]. They fed this into a computer which, in short order, was able to distinguish between tissue samples from people who succumbed quickly to the cancer and those who survived much longer.
Last month the Machine Learning in Health Care (MLHC)[viii] annual research meeting took place in Los Angeles. MLHC aims to bring together two usually insular disciplines: computer scientists with artificial intelligence, machine learning and big data expertise on the one hand, and clinicians and medical researchers on the other. DeepMind Health was one of the conference’s sponsors.
Riding for a fall?
Google’s global ascendance and extraordinary ability to harvest and deploy user-generated information has already made it enemies. There are voices saying that it should be emasculated or broken up. Or just effectively regulated – as utilities are. Others say that Google – like Facebook – is a natural monopoly. If you broke it up, the remaining pieces, in time, would just gravitate back together again because a search engine or social network only works if everyone uses it. And how do you regulate a business which charges its customers nothing?
At least Google is aware of the potential PR problem. DeepMind has appointed an independent review board to oversee the activities of its healthcare division. A lot of people think that they got access to precious health data too cheaply. (In which case why doesn’t the NHS set up a commercial arm to sell data at market rates, just as the BBC has a commercial arm to sell its programmes? The poor old NHS is behind the curve.)
Google could have sat on its laurels as the dominant internet search engine, the gateway to the internet. But vaulting technical ambition is in its corporate DNA. Its huge income stream from user-targeted advertising is being used to develop technologies which have the potential to change our lives – from QC to healthcare data mining to driverless cars.
I doubt if any government, even that of the mighty United States, could ever put Google back in its box. The more likely scenario is that Google’s influence over our lives will grow and that we shall continue to love what they do for us, even if we resent how much they know about us. Remember, if you don’t pay for it, you are the product.
My biggest concern is that the mathematics that Google’s new quantum machines might generate will probably be incomprehensible even to the greatest mathematical minds, like that of Andrew Wiles. But then that’s really above my pay grade.
Google already knows your birthday. Very soon they will know your expected death-day. Google Life Assurance, anyone?
[i] The names on the paper are, in order: Sergio Boixo (Google California), Sergei V. Isakov (Google Switzerland), Vadim N. Smelyanskiy (Google California), Ryan Babbush (Google California), Nan Ding (Google California), Zhang Jiang (NASA), John M. Martinis (Google & University of California, Santa Barbara), and Hartmut Neven (Google California).
[ii] There are several popular books about Fermat’s Last Theorem. One of the best-selling was Simon Singh’s book of that name published by Fourth Estate in 1997.
[iv] Google plans quantum supremacy, by Jacob Aron, New Scientist, 03 September 2016, page 8.
[vii] Medicine by machine, by Aviva Rutkin, New Scientist, 03 September 2016, page 20.