May 30, 2018

By Allan Waddell

TOO MUCH STUPIDITY FOR RULE BY AI

Elon Musk can’t be wrong about artificial intelligence, can he? Well, actually, yes he is wrong.

The idea behind the existential threat of AI is variously called the “singularity” or the “intelligence explosion”, and it’s gaining some serious momentum. There are more transistors today than there are leaves on all the trees in the world, more than four billion in the smartphone you carry alone. Is the digital world eclipsing the “real” one?

A little history about this idea of our impending extinction at the hands of our silicon creations. Princeton’s flamboyant and brilliant John von Neumann was applying the term “singularity” to runaway technological progress back in the 1950s. The term “intelligence explosion” might have come from mathematician I.J. Good in 1965. In Advances in Computers, vol. 6, Good wrote of an ultra-intelligent machine that could design even better machines, creating an “intelligence explosion” that would leave the intelligence of man far behind. He warned that “the first ultra-intelligent machine is the last invention that man need ever make”.

Ray Kurzweil, Google’s director of engineering and a futurist and inventor who pops an endless regimen of vitamins so he can stay alive long enough to upload himself into the cloud, says human-level machine intelligence might be just over 10 years away. He predicts 2029 as the year an AI passes the Turing test and marks 2045 as the date for the singularity. Computer scientist Jürgen Schmidhuber, whom some call the “father of artificial intelligence”, thinks that in 50 years there will be a computer as smart as all humans combined. And Musk, technology ringmaster extraordinaire of PayPal, SpaceX, Tesla and, recently, OpenAI, while not quite as prone to prophesying specific dates, calls artificial intelligence a “fundamental existential risk for human civilisation”.

The argument made by Musk and other singularity sirens is pretty simple. It’s all about an exponential growth curve. This curve shows that it takes machines a long time to reach 100 per cent human intelligence, about 100 years, from 1940 to 2040. However, once human intelligence is reached, it takes a very short time to reach 400 per cent human intelligence. The argument goes, essentially, as laid out by I.J. Good: when computers improve themselves, they’ll get so smart so fast nobody will know what hit them.
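To see why that curve alarms people, it helps to make the compounding explicit. The toy sketch below, in Python, is purely illustrative: the doubling per generation and the halving design time are invented numbers standing in for Good’s argument, not anything Musk or Kurzweil has published.

```python
# Toy sketch of the "intelligence explosion" argument: each machine
# generation designs a successor twice as capable, and does so in half
# the time the previous generation needed. Both ratios are invented
# for illustration; this is not a forecast.

capability = 1.0      # 1.0 = 100 per cent of human-level intelligence
design_time = 2.0     # hypothetical years the first human-level machine needs per redesign
elapsed = 0.0

while capability < 4.0:      # stop at 400 per cent of human-level intelligence
    elapsed += design_time
    capability *= 2.0        # assumed improvement per generation
    design_time /= 2.0       # assumed speed-up of the next design cycle

print(f"From 100 per cent to {capability:.0%} of human intelligence in {elapsed} years")
```

On those made-up numbers the jump from 100 per cent to 400 per cent takes about three years, against the century it took to reach 100 per cent; that asymmetry is the whole of the singularity argument.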

Approaching 2020, it’s not that crazy to call computers half as smart as people. They can perform pattern and sound recognition tasks, and play games, and perform billions of maths problems per second. Looking back towards the 1940s, when computers were very specialised and quite slow, it’s fair to call them mechanical calculators with no intelligence.

And with the explosive rate of technological development in the past century compared to the centuries before, it does seem like we’re going nowhere but up.

Reassuringly, many people who work in artificial intelligence just don’t see this kind of AI explosion happening in the next 30-some years, if at all. As computer scientist Andrew Ng of Baidu puts it: “There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”

Or, put another way by Toby Walsh, AI professor at the University of NSW, “stop worrying about World War III and start worrying about what Tesla’s autonomous cars will do to the livelihood of taxi drivers”.

There are really good reasons for this scepticism, and they are grounded in engineering. For starters, it looks like Moore’s Law has truly and finally died. This is important. Moore’s Law says, essentially, that processor speeds, or overall processing power for computers, will double every two years. While there’s been some residual debate, and while companies are still shrinking components, the rate has not kept up. Thanks to some pernicious engineering limits on producing transistors below 10 nanometres, it has become much harder for companies to fit more transistors on chips, and the progress curve has started to level off. Whether or not the limit we’ve hit is fundamental, it demonstrates that much more work will be required to keep making improvements. Speculation about the near-infinite power of AI often relies on the assumption that our processing power will continue to increase rapidly. If, in fact, Moore’s Law has been laid to rest, that assumption might be faulty.
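For a sense of what that doubling claim implies, the arithmetic is easy to run. The sketch below compounds a two-year doubling from the roughly 2,300 transistors of Intel’s 1971 4004; it illustrates what the law promises, not any real product roadmap.

```python
# What "doubling every two years" implies, compounding from the ~2,300
# transistors of Intel's 1971 4004. Purely illustrative arithmetic;
# real chips stopped tracking this curve as features approached 10 nanometres.

transistors = 2_300
year = 1971
target = 4_000_000_000        # the "more than four billion" in a modern smartphone chip

while transistors < target:
    transistors *= 2          # one Moore's Law doubling
    year += 2                 # every two years

print(f"Uninterrupted doubling passes {target:,} transistors by about {year}")
```

That the naive projection lands in roughly the right era is why the law held such sway for decades; the engineering limits above are why it can no longer simply be extrapolated forward.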

Another assumption worth examining is that bigger, more powerful neural networks are automatically more intelligent. It turns out that isn’t really true. As Bernard Molyneux, cognitive science professor at UC Davis, puts it, “if a neural network is too large for a data set, it learns the data by rote instead of making the generalisations which allow it to make predictions”. The real trick in making an intelligent AI is striking the right relationship between computational power and data. If there is some golden ratio that leads to an intelligence explosion, there’s absolutely no guarantee we could ever find it analytically, and the sheer number of possible ratios weighs against stumbling on it by luck.
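Molyneux’s point about rote learning is easy to demonstrate with nothing more than NumPy. The sketch below is a deliberately simple stand-in, polynomials rather than a neural network and made-up data, but it shows the same phenomenon: the over-sized model nails the points it was shown and does worse on points it wasn’t.

```python
import numpy as np

# Ten noisy samples of a simple underlying curve (y = sin x) to learn from.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 3.0, 10)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, size=10)

# Fresh points from the same curve, used only to test generalisation.
x_test = np.linspace(0.15, 2.85, 10)
y_test = np.sin(x_test)

# A modest model versus one with enough capacity to memorise all ten points.
for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```

The degree-9 fit drives its training error to essentially zero by threading through the noise, and typically pays for it on the held-out points; the modest fit generalises better. Capacity alone bought nothing.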

The point is that AI requires improvements in areas that have historically confounded us, like the link between the brain and intelligence. Progress in areas with a history of growth (such as processing power and neural network size) does not guarantee ultra-intelligence at all.

Betting against a catastrophic end to the world has been right so far. On the other hand, Musk, Kurzweil et al have some impressive street cred; after all, Kurzweil rightly predicted the timeline of the Human Genome Project, and not many would bet against what Musk’s engineering teams achieve.

I work in technology transformation and our biggest daily challenge isn’t system brilliance; it’s legacy stupidity. The digital world is still full of weeds.

I can’t overlook what one member of my team said recently when I was going on about AI again: “C’mon, have you ever met a chatbot that didn’t fundamentally suck?”

It’s true, I haven’t.

Allan Waddell is founder and CEO of Australian enterprise IT specialist Kablamo.

Read the article on The Australian site here.
