“There will be this big junction in human history when we transition from an area dominated by biological intelligence to an era dominated by machine intelligence. … This will be the last invention humans will need to make, because after that we will have this machine super intelligence that will be better at invention than we are.” – Nick Bostrom
It is Sunday and time for a lighter topic article again. Or… is this topic really light? The idea of artificial intelligence has occupied humanity for as long as we have recorded history. Thinking machines and artificial beings already appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion’s Galatea. Now it is becoming reality outside of fiction as well.
In Bearing we help our clients develop innovation and innovation systems, which in turn depend on inventions or discoveries of some kind. Some science fiction movies I have watched recently have made me think about the limits of discovery. What would be the ultimate invention, if not an autonomously functioning and self-learning artificial intelligence? This thought used to be science fiction, but today many scholars think it could become reality quite soon.
The Current State of AI
As explained in an article in Wired last year, three recent breakthroughs have unleashed the long-awaited arrival of artificial intelligence. The three breakthroughs are:
Cheap parallel computation with parallel computation chips and parallel processing software.
Big Data and the cloud, with the incredible avalanche of collected data about our world, which provides the schooling that AIs need.
Better algorithms, including a new approach to designing neural networks and the creation of deep-learning algorithms.
This perfect storm of parallel computation, bigger data, and deeper algorithms generated the 60-years-in-the-making overnight success of AI. As long as these technological trends continue, and there is no reason to think they will not, AI will keep improving.
In the future, how will we tell if a robot has human-level intelligence? For decades, the litmus test of choice was the Turing Test, which asks: can a computer program fool one in three judges into thinking it’s human? But the Turing Test says nothing about a program’s ability to reason, or to be creative or aware. It’s essentially an exercise in deception, so scientists have started devising other metrics to measure artificial intelligence.
"Nothing is more usual and more natural for those, who pretend to discover anything new to the world in philosophy and the sciences, than to insinuate the praises of their own systems, by decrying all those, which have been advanced before them." -David Hume
Since Olaf Stapledon’s Last and First Men, Jules Verne’s The Master of the World and H.G. Wells’s The Time Machine, authors have used science fiction to illustrate philosophy and philosophical dilemmas. From ethical quandaries to the very nature of existence, science fiction’s most famous texts are tailor-made for exploring philosophical ideas, and so are, to a rapidly increasing degree, motion pictures.
Recently, on an overnight flight from Nairobi to Zurich, I watched the smart movie Ex Machina. This new British movie explores the real meaning of intelligence and consciousness, and in my view, not since Stanley Kubrick’s masterpiece 2001: A Space Odyssey has a film about AI been this good.
The Fictional Universe
The notion of artificial intelligence, whether on computer screens or in robot form, has long fascinated the makers of science fiction. Other recent movies about AI include Transcendence, Moon and Antonio Banderas’s Automata, not to mention classics like Alien, Metropolis, Westworld, Blade Runner, The Terminator, The Matrix and The 13th Floor.
These movies are often dystopian and depict artificial intelligence as a dangerous menace that threatens mankind. The underlying question is: why would a super-smart, self-aware artificial intelligence need us humans, once we have developed it and it has become self-sufficient? We would merely be in its way; to it, we would be like ants and other insects are to us. We can set rules for the AI, but would it obey them once it learns how to enhance and modify itself?
The Dangers of Smart
Laws of robotics are a set of laws, rules, or principles intended as a fundamental framework to underpin the behaviour of artificial intelligence designed to have a degree of autonomy. The best-known set of laws are those proposed by Isaac Asimov in the 1940s, which state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The main problem with any set of laws is that an artificial intelligence may be faced with ethical dilemmas in which any outcome will harm at least some humans in order to avoid harming more humans. The classic example involves a robot that sees a runaway train about to kill ten humans trapped on the tracks; its only choice is to switch the track the train is following, so that it kills only one human.
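The priority ordering of Asimov’s three laws, and the way the runaway-train dilemma defeats a strict reading of the First Law, can be sketched in a few lines of Python. This is purely illustrative; the function and predicate names are hypothetical stand-ins, not an actual robotics implementation.

```python
# A minimal, hypothetical sketch of Asimov's three laws as an ordered
# rule check: each law can veto an action before lower-priority laws
# are even consulted.

def permitted(action, harms_human, disobeys_order, endangers_self):
    """Return True if the action is allowed under the three laws."""
    if harms_human(action):       # First Law: highest priority
        return False
    if disobeys_order(action):    # Second Law: yields to the First
        return False
    if endangers_self(action):    # Third Law: yields to both above
        return False
    return True

# The runaway-train dilemma: both available actions harm humans
# (doing nothing kills ten, switching the track kills one), so a
# strict First Law permits neither.
def harms(action):
    return action in ("do_nothing", "switch_track")

actions = ["do_nothing", "switch_track"]
allowed = [a for a in actions
           if permitted(a, harms, lambda a: False, lambda a: False)]
print(allowed)  # an absolute First Law leaves no permitted action
```

The empty result is the point: a rule framework that forbids all harm outright gives the machine no guidance when every option causes some harm, which is exactly the gap the dilemma exposes.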
Some fiction uses this dilemma as a licence for the robots to try to conquer humanity for its own protection. However, the dilemma may be bigger than this, as "humanity" is such an abstract concept that the artificial intelligence may not even know whether it is harming it or not.
How do we even know what an artificial intelligence would qualify as "harm"? Would the restriction of such laws of robotics simply prohibit physical harm, or is social harm also forbidden? In the latter case, conquering humanity in order to impose tyrannical controls that prevent physical harm between humans might nonetheless constitute a devastating social harm to humanity as a whole.
Beyond the Human Race
There are good reasons for concern in this direction, as well-known thinkers like Bill Gates, Elon Musk and Stephen Hawking have pointed out. In recent times they have warned strongly that the development of artificial intelligence could lead to the end of humanity, just as we in some dark historic past exterminated the Neanderthals.
In December 2014, Stephen Hawking told the BBC in an interview that "The development of full artificial intelligence could spell the end of the human race." In an interview after the launch of a new software system designed to help him communicate more easily, he said there were many benefits to new technology, but also risks. His comments on AI start at 4:22 into the video.
In November 2014, Elon Musk, the entrepreneur behind SpaceX and Tesla, warned that the risk of “something seriously dangerous happening” as a result of machines with artificial intelligence could materialise in as few as five years. Below is a video where Elon Musk expresses his concerns.
Maybe we can control the potential menace if we collectively act in a responsible way as we develop AI? In January this year, a group of scientists and entrepreneurs, including Erik Brynjolfsson, Elon Musk and Stephen Hawking, signed an open letter promising to ensure AI research benefits humanity.
The letter warns that without safeguards on intelligent machines, like the laws of robotics mentioned above, mankind could be heading for a dark future. It highlights speech recognition, image analysis, driverless cars, translation and robot motion as having benefited from the research. The letter says “The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.”
However, through recent automation and technology development we have not just been redefining what we mean by artificial intelligence, moving away from Isaac Asimov’s humanoid robots. We have been redefining what it means to be human.
Over the past 60 years, as mechanical processes have replicated behaviours and talents we thought were unique to humans, we have had to change our minds about what sets us apart. Just as the crisis of religion has reduced us to an animal among animals, AI development is reducing our self-image of humans as a uniquely intelligent species.
As we invent more advanced artificial intelligence, we will be forced to surrender more of what is supposedly unique about us humans. I think it is likely we will spend the coming decades in a permanent identity crisis, constantly asking ourselves what humans are for.
In the grandest irony of all, the greatest benefit of an everyday, utilitarian artificial intelligence that is smarter than us, will not be increased productivity or an economics of abundance or a new way of doing science, although all those will most likely happen.
The greatest benefit of the arrival of artificial intelligence will be that AIs will help to define humanity. Ultimately we will need AIs to tell us who we are.
But once we have created the super smart machines that can help us figure out the Universe and they start to produce answers, will we understand them? Will we with our five senses be capable of comprehending a world that may be multi-dimensional and potentially with parallel realities?
Just as 42 is the "Answer to the Ultimate Question of Life, the Universe, and Everything" in The Hitchhiker’s Guide to the Galaxy books, as calculated by the greatest computer of them all, I wonder: will we understand the answers? Or will we react like the people in Douglas Adams’s fictional universe, who did not know what to do once they knew the answer was “42”, because nobody knew what the question was?
Extinction or Utopia?
To finish this article, I would like to introduce the video below. From a Royal Society conference in London, Nick Bostrom of the Oxford Martin School at the University of Oxford tells the FT’s Ravi Mattu that a day will come when machines are more intelligent than humans, and that more must be done to address the potential risks of such a scenario. It is well worth watching, keeping in mind the reasoning on the dangers of AI which I have introduced above.
So what will happen to the human race in the long run? Maybe it will be as Rutger Hauer’s replicant says to Harrison Ford in his final words in Ridley Scott’s 1982 neo-noir dystopian science fiction film Blade Runner. The question is, though: is Hauer speaking on behalf of the robot or the human, or on behalf of a collective concern they both may share?