Expert Comment: Professor Hawking’s Second Singularity
Monday 8 December 2014
In the light of Professor Stephen Hawking's comments on the future of Artificial Intelligence (AI), Senior Lecturer in Computer Science Dr David Reid looks at what may lie in store for humanity.
There are two types of singularity in science; one is gravitational, the other is technological. Stephen Hawking last week took time out from researching the former in order to comment on the latter. The simplest definition of the technological singularity is the point where Artificial Intelligence (AI) exceeds human intellectual capacity and control.
No-one knows what will happen at this point; many suppose, including Prof Hawking, that this event will radically change or even end civilisation.
Ever since Alan Turing wrote his seminal paper 'Computing Machinery and Intelligence' in 1950, the debate about whether computers can think as we do has raged. The battle between philosophers, neurologists, computer scientists and mathematicians has swung to and fro ever since, and AI has come in and out of fashion. In the 1960s and 1970s many outlandish claims were made about AI. When these didn’t materialise, AI was taken by many to be at a dead end, and as a result in the 1980s and 1990s many universities’ Computer Science courses quietly dropped AI from the curriculum. It was seen as an interesting academic pastime but too esoteric, too difficult or even essentially irrelevant.
During this time, those outside the AI community often expressed a deep-seated, almost visceral, repulsion at the idea that a machine could encroach on what it is to be human; that only humans can, or should, possess intellect; that thinking and humanity are one and the same.
Recent advances in AI have changed this argument: new ideas, algorithms and computer architectures have made it possible for computers to act ‘intelligently’ in relatively narrow fields. For instance:
- IBM’s Watson computer is capable of answering questions posed in natural language. In 2011 it went up against former winners of the quiz show Jeopardy! beating all contestants to win the $1 million first prize.
- Deep Blue versus Garry Kasparov was a pair of famous six-game human–computer chess matches between the IBM supercomputer Deep Blue and World Chess Champion Garry Kasparov. The first match was played in February 1996 in Philadelphia, Pennsylvania. Kasparov won the match 4–2, winning three games, drawing two and losing one. A rematch took place in 1997, with Deep Blue winning 3½–2½.
- BigDog is a rough-terrain robot that walks, runs, climbs and carries heavy loads. BigDog is powered by an engine that drives a hydraulic actuation system and has four legs that are articulated like an animal’s. The AI in BigDog's on-board computer controls locomotion, processes sensors and handles communications with the user. BigDog’s control system keeps it balanced, manages locomotion on a wide variety of terrains and navigates.
- The Google Self-Driving Car is a project by Google to develop technology for autonomous cars, mainly electric cars. Based on the winning entry of the 2005 DARPA Grand Challenge, which carried a $2 million prize from the United States Department of Defense, it uses advanced AI techniques to make predictions about its immediate environment, taking into account other cars’ driving behaviour, weather conditions, pedestrians and cyclists (including their hand signals). It now operates in four states in the USA and has clocked up 700,000 autonomous miles.
These advances have had a polarising effect on the debate. On one side, the singularity is inevitable; as Prof Hawking states, “Humans, limited by slow biological evolution, cannot compete and will be superseded”. On the other side, these achievements in narrow fields are not generalisable to other areas: in other words, it is impossible to take an AI computer constructed for one problem domain and have it autonomously apply itself to a different problem domain.
This is the essence of the debate between ‘weak’ (or ‘narrow’) and ‘strong’ AI. The weak AI hypothesis states that a machine running a program is at most only capable of simulating real human behaviour and consciousness (or understanding, if you prefer) in a relatively restricted domain. Strong AI, on the other hand, claims that the correctly written program running on a machine actually is a mind; that is, there is no essential difference between a (yet to be written) piece of software exactly emulating the actions of the brain and the actions of a human being, including their understanding and consciousness.
The ‘Holy Grail’ of (Strong) AI is an Artificial General Intelligence (AGI) machine that could successfully perform any intellectual task that a human being can.
Whereas in the past critics of AI such as Hubert Dreyfus and Roger Penrose denied the possibility of achieving AGI, there now seems to be a common consensus that AGI is not only possible but will come fairly soon. If you average all of the predictions that have been made, the year of the singularity is 2045. Professor Kurzweil agrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that progress toward the singularity isn’t just a steady increase in capability but is in fact exponentially accelerating, what he calls the “Law of Accelerating Returns”. He writes that:
“So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity … “
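As a back-of-the-envelope check on that quotation, the figure of roughly 20,000 years falls out of simple arithmetic if you grant Kurzweil’s premise that the rate of progress doubles every decade (the doubling period here is an illustrative assumption, not a measured value):

```python
# Toy arithmetic behind the "Law of Accelerating Returns":
# if the rate of progress doubles every decade, how many years of
# progress (measured at today's rate) fit into 100 calendar years?
total_progress = 0
rate = 1  # years of progress per calendar year, at today's rate
for decade in range(10):
    rate *= 2                    # assumed doubling each decade
    total_progress += rate * 10  # ten calendar years at the new rate
print(total_progress)  # 20460 -- roughly the "20,000 years" quoted above
```

The exact total depends entirely on the assumed doubling period; the point is only that modest, repeated doubling compounds into numbers of this magnitude.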
Others, such as Dr Paul Allen, disagree and say the singularity is more like 1,000 years off.
The brain is a really, really (and I mean really!) complex structure. There are on average 86 billion neurons in the brain, each connected to thousands of others, giving about 125 trillion synapses. The largest chips in a current laptop or desktop have around two billion transistors; some experimental chips, such as IBM’s second-generation SyNAPSE chip, have 5.4 billion, and the biggest FPGA devices have 10 billion. These are a long, long way off trillions of connections.
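The scale of that gap is easy to quantify from the figures above (these are the rough estimates quoted in this article, not precise measurements):

```python
# Rough comparison of brain connectivity with today's largest chips,
# using the estimates quoted in the text above.
synapses = 125e12     # ~125 trillion synapses in a human brain
desktop_chip = 2e9    # ~2 billion transistors in a desktop CPU
largest_fpga = 10e9   # ~10 billion transistors in the biggest FPGAs

print(synapses / desktop_chip)  # 62500.0 -- the brain's "wiring" vs a desktop chip
print(synapses / largest_fpga)  # 12500.0 -- even the largest FPGA is four
                                # orders of magnitude short
```

Transistors and synapses are not directly comparable components, of course; the ratio only illustrates how far raw hardware counts are from brain-scale connectivity.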
This is a really new field and we don’t know how to program consciousness yet; the difficulty of building human-level software goes deeper than computationally modelling the structural connections and biology of each of our neurons. ‘Brain duplication’ strategies like these presuppose that there is no fundamental obstacle to human cognition other than having sufficient computer power and neuron structure maps to do the simulation. This strategy has had limited success in practice because it doesn’t address everything that is actually needed to build the software: we also need to know how everything functions together.
In neuroscience, there is a parallel situation. Hundreds of attempts have been made to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism’s behaviour. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. Efforts such as the Human Brain Project in Europe and the BRAIN Initiative in the USA are trying to address this.
Whether you think the singularity will come in 20, 30 or 100 years, or never, I believe there is another fundamental problem with AI that isn’t as well publicised: there is a lot of opinion about AI and very little knowledge about what it actually involves. In our department we teach AI at Master’s level, and all of the staff have written software with some element of AI embedded in it. Several staff members have written evolutionary algorithms, others have written neural-network-based solutions to specific problems, and some of us have even experimented with evolutionary hardware. It is a different way of doing Computer Science, where the results are often not known in advance by the programmer. My colleagues and I have produced code that throws up unexpected results showing elements of that most human of things: creativity.
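To make the flavour of that work concrete, here is a minimal evolutionary algorithm of the general kind mentioned above. It is a toy sketch, not any of the department’s actual code: a population of candidate number lists is repeatedly mutated and selected until it converges on a hidden target.

```python
import random

random.seed(0)  # make the run repeatable

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]  # the "solution" evolution must find

def fitness(genome):
    # Negative total distance from the target: higher is better, 0 is perfect.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def mutate(genome):
    # Copy the genome, nudging each gene by +/-1 with 30% probability.
    return [g + random.choice([-1, 1]) if random.random() < 0.3 else g
            for g in genome]

# Start from a random population of 20 candidate solutions.
population = [[random.randint(0, 9) for _ in range(len(TARGET))]
              for _ in range(20)]
initial_best = max(fitness(g) for g in population)

for generation in range(200):
    # Selection: keep the fittest half, then refill by mutating survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
```

Because the survivors are carried over unchanged each generation, the best fitness can never get worse, yet the programmer never dictates which mutations succeed; that is the sense in which the results are “often not known by the programmer”.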
The problem I have is this: everyone agrees that AI is going to affect us more and more in the future (the Flash Crash of 2010 is one such example), yet AI is hardly taught in universities and is not taught at all in schools (it is barely mentioned in the new Computer Science curriculum). The people it will affect the most, our children, have almost no knowledge of its fundamental workings.
In 2012 Cambridge University created the Cambridge Project for Existential Risk, with the stated aim of establishing within the University a multidisciplinary research centre, the Centre for the Study of Existential Risk, dedicated to the scientific study and mitigation of existential risks of this kind. Since then, along with super-volcanoes and meteorite strikes, AI has been consistently number one or two in the doomsday chart. Yet we are not equipping the next generation with the skills, knowledge or ability to make informed decisions about the ramifications of these new technologies. If AI is an “unknowable threat” or an “existential risk” then surely we have a duty to equip the next generation with sufficient knowledge to judge what is “good” AI and what is “bad” AI.
One thing is certain: the technologies and nature of AI, if and when it comes, will be significantly different from the technologies we imagine today. Our next generation should be intellectually fortified enough to adapt to any circumstance, good or bad, just in case.
More information on studying Computing at Liverpool Hope is available from the course pages.