From Shakespeare to Frankenstein to Jurassic Park, the overriding theme when it comes to tampering in nature’s domain is simple: don’t. In the real world, however, adapting, improving, refining, and harnessing nature have led to many of humanity’s greatest achievements. Examples include the first loincloths, agriculture, civilization, electricity, transportation, education, mass communication, GMOs, and vaccines.
When it comes to Artificial Intelligence, some believe there is an existential risk: if safeguards are not put in place, AI could bring about human destruction or large-scale catastrophe. Even centuries before HAL, there were ominous premonitions about the harrowing fate that awaits those who choose an uncharted course.
But does this jibe with reality? Veteran skeptic Michael Shermer has written that most AI subject-matter experts take a middle-of-the-road view, expecting manmade intelligence to usher in neither dystopia nor utopia. Instead, he noted, they “spend most of their time thinking of ways to make our machines incrementally smarter and our lives gradually better,” with Shermer citing the gradual development and continual improvement of automobiles over the past century and more.
The most optimistic forecast has AI producing flawless service robots, ending poverty, eradicating disease, and allowing immortal beings to explore deep into outer space. At the other end of the spectrum is the notion that AI will reach a point at which its capabilities so outpace ours that it annihilates humanity, perhaps intentionally, perhaps by accident, but in either case leaving everyone just as dead. Or perhaps we survive, but become AI’s servants instead of the other way around.
This more negative viewpoint posits that in the same way a more powerful and efficient brain allows humans to reign over other animals, AI could surpass Mankind’s intelligence and grow beyond our control.
Many researchers believe that a superintelligence would resist attempts to shut it off or alter its path, and that we will be unable to align AI with our wishes. In contrast, skeptics such as computer scientist Yann LeCun feel such machines will have no emotions or instincts, and thus no drive for self-preservation.
Those with the more dour outlook cite three potential problems. The first is that setting up the system may introduce unnoticed but potentially deadly bugs. This has, in fact, been the case with some space probes.
The second issue is that a system’s specifications sometimes produce unintended behavior when the system encounters an unprecedented scenario. Third, even granting proper requirements, no bugs, and desirable behavior, an AI’s learning capabilities may cause it to evolve into a system with unintended behavior. For instance, an AI may flub an attempt to copy itself and instead create a successor that is more powerful than the original but lacks its controls. Swedish philosopher Nick Bostrom warns that a system which exceeds human abilities in all relevant domains could outmaneuver us whenever its goals conflict with ours.
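To make the second problem concrete, here is a minimal, purely hypothetical sketch (the scenario and every name in it are invented for illustration, not drawn from any researcher cited here): a cleaning agent is rewarded for the absence of visible dirt, a specification that sounds reasonable until an unanticipated action, hiding the dirt, satisfies it just as well as removing it.

```python
# Toy illustration of a misspecified objective (hypothetical example).
# The designer intends "clean the room," but the reward only checks
# whether dirt is *visible* -- so hiding dirt scores as well as removing it.

def reward(state: dict) -> int:
    """Intended: reward cleaning. Actual: reward the absence of visible dirt."""
    return 0 if state["visible_dirt"] else 1

def remove_dirt(state: dict) -> dict:
    # The behavior the designer wanted.
    return {"visible_dirt": False, "dirt_present": False}

def cover_dirt(state: dict) -> dict:
    # The unanticipated scenario: the dirt is still there, just hidden.
    return {"visible_dirt": False, "dirt_present": True}

start = {"visible_dirt": True, "dirt_present": True}

# Both actions earn the maximum reward, so an optimizer has no reason
# to prefer the behavior the designer actually intended.
assert reward(remove_dirt(start)) == reward(cover_dirt(start)) == 1
```

The point of the sketch is not that real systems are this crude, but that the gap between what a specification says and what its authors meant tends to reveal itself only when the system wanders into territory the authors never imagined.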
Stephen Hawking argued that no physical law prevents particles from being organized so that they perform more advanced computations than the arrangements of particles in human brains, which means superintelligence is physically possible. Further, this digital brain could be vastly more powerful, faster, and more efficient than its human counterpart, which is limited in size by the need to pass through a birth canal.
However, evolutionary psychologist Steven Pinker argues that the dystopian view assumes AI would prefer domination and sociopathy when it might instead choose altruism and problem-solving. Moreover, skeptic Michael Chorost said that “Today’s computers can’t even want to keep existing, let alone” plot world domination. And such fearmongering could lead to governments or vigilantes trying to shut down valuable AI research.
Slate’s Adam Elkus has argued that the most advanced AI has only achieved the intelligence of a toddler, and even then only at specific tasks. Likewise, AI researcher Rodney Brooks opined that “It is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. The worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence.”
Indeed, intelligence is only one component of a much broader ability to achieve goals. Magnus Vinding posits that “advanced goal-achieving abilities, including abilities to build new tools, require many tools, and our cognitive abilities are just a subset of these tools. Advanced hardware, materials, and energy must all be acquired if any advanced goal is to be achieved.”
So by the time Artificial Intelligence gets to the point where it could destroy us, we will likely have offed ourselves already, or been done in by the very nature we are said to be violating by building that AI.