From *Physics of the Future* by Michio Kaku, p. 101:

> When we finally hit the fateful day when robots are smarter than us, not only will we no longer be the most intelligent being on earth, but our creations may make copies of themselves that are even smarter than they are. This army of self-replicating robots will then create endless future generations of robots, each one smarter than the previous one. Since robots can theoretically produce ever-smarter generations of robots in a very short period of time, eventually this process will explode exponentially, until they begin to devour the resources of this planet in their insatiable quest to become ever more intelligent.
This idea, carried to its extreme, is called the ‘Singularity.’ Once Earth is consumed, robots will find a better, faster way to reach the stars and gradually consume them too.
What do you think? Will computers take over the universe some day? When? Soon? Will humanity be safe in their care?
Science fiction that accepts this idea tends to assume mankind will be at a great disadvantage, and that robots will eventually try to wipe us out. The Terminator film franchise is based on the idea that a global computer network built to defend Earth becomes sentient and realises that humanity is a threat to its own existence.
The Matrix film franchise goes even further. Instead of wiping us out, the machines use us as an energy source and feed our minds dreams of ordinary life to keep us productive (note to fans: another instalment is coming in 2021!).
Or is the answer that it will never happen?
Isaac Asimov famously invented the Three Laws of Robotics:
- First Law
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
There could still be problems even with these laws built into every robot. Michio Kaku suggests that if humanity made self-destructive choices, a benevolent robot might take over the government to prevent humanity harming itself. That was the plot of the film I, Robot (based on Asimov’s stories). To prevent this, Asimov added another law to precede the others.
- Zeroth Law
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
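The four laws form a strict priority ordering: a lower law only applies when it does not conflict with a higher one. Here is a toy sketch (not from Asimov or Kaku, just an illustration) of how such an ordering could be modelled, where each proposed action is scored by which hypothetical laws it would violate and the violations are compared in priority order:

```python
# Laws in priority order: Zeroth, First, Second, Third.
# These flag names are invented for the illustration.
LAWS = ["harms_humanity", "harms_human", "disobeys_order", "endangers_robot"]

def violation_vector(action: dict) -> tuple:
    """Return a tuple of 0/1 flags, one per law, in priority order."""
    return tuple(1 if action.get(law) else 0 for law in LAWS)

def choose(actions: list) -> dict:
    """Pick the action whose violations are smallest in priority order.

    Tuples compare lexicographically, so breaking the First Law is
    always worse than breaking the Second and Third laws combined.
    """
    return min(actions, key=violation_vector)

# The robot must refuse an order rather than injure a human,
# because the First Law outranks the Second:
obey = {"name": "obey", "harms_human": True}
refuse = {"name": "refuse", "disobeys_order": True}
print(choose([obey, refuse])["name"])  # → refuse
```

Python’s lexicographic tuple comparison does the work here: the first (highest-priority) differing law decides the outcome, which mirrors how the laws are meant to resolve conflicts.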
Michio Kaku points out that the evolution of robots will not come overnight. Humanity will not be taken off guard by robots suddenly becoming sentient. There will be time to create ‘friendly AI.’ This benevolence will be part of robotic design. This has given rise to a new field called ‘social robotics.’
I can recommend a book series about a benevolent sentient machine: Chronicles of Theren by C D Tavernor. The first book is called First of Their Kind.
So, masters or servants? Friends or enemies? What do you think?
Ann Marie Thomas is the author of four medieval history books, a surprisingly cheerful poetry collection about her 2010 stroke, and the science fiction series Flight of the Kestrel. Book one, Intruders, and book two Alien Secrets, are out now. Follow her at http://eepurl.com/bbOsyz