In high school, my history teacher related that in the 1930s, vehicles rolling down the road averaged 5.2 occupants apiece. Cars were still a relative novelty, families who had them likely had only one, and people knew their neighbors better than they do today. So everyone piled into one DeSoto or Packard and headed to the dance halls and general stores.
As the Depression gave way to a postwar economy and interstate highways were built, more people began driving, and the average number of occupants per vehicle fell. That decline continued until, by the time of my teacher’s 1985 presentation, the average vehicle had 2.3 occupants. Extrapolating the trend, he projected that by 2020 the average vehicle would have 0.5 occupants, meaning that every other car would have no one in it.
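It is worth seeing how the joke works arithmetically. Here is a minimal Python sketch of the straight-line extrapolation the teacher was parodying, assuming 1930 and 1985 as the two data points (the anecdote gives only “the 1930s,” so the exact baseline year is a guess):

```python
# A minimal sketch of the teacher's (deliberately fallacious) linear
# extrapolation. The two data points come from the anecdote; the choice
# of 1930 as the baseline year is an assumption.
def occupants(year: int) -> float:
    """Linearly extrapolate average vehicle occupancy from two points."""
    y1, o1 = 1930, 5.2   # anecdotal 1930s average
    y2, o2 = 1985, 2.3   # average at the time of the lecture
    slope = (o2 - o1) / (y2 - y1)   # about -0.053 occupants per year
    return o1 + slope * (year - y1)

print(round(occupants(2020), 2))  # ~0.45 -- "every other car is empty"
```

Run the same line out a little further and average occupancy goes negative around 2029, which is exactly the absurdity the lesson was built on: fit a line to two points, extend it indefinitely, and nonsense is guaranteed.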
He delivered this with his usual deadpan manner, causing some in the classroom to think he really believed it. In fact, he was demonstrating how statistics can be misinterpreted by mistake or misused on purpose.
The great irony is that we now have the technology to inadvertently validate his faux prediction. Safety concerns will probably preclude that from happening, although that’s not necessarily logical: computer-driven cars that never get distracted and come with built-in safety features are arguably safer than the lunatic who almost ran me off the road this morning.
Driverless cars are a reality, although they still keep a person ready to take over navigating and negotiating the streets if the system fails. The cars are a product of artificial intelligence, which has also given us automated financial transactions, the Kasparov-vanquishing Deep Blue, and Semantic Scholar, a search engine for academic research.
Despite these impressive gains, the media’s treatment of AI is less than kind. This has always been the case: the fears driving The Matrix and The Terminator far pre-date those films. Even Shakespeare’s brilliant work returns to the motif of the omnipotence of providence and royalty, and Macbeth and similar tragic characters meddle with those designs at their peril. From Frankenstein to The Twilight Zone, and even in real-life examples such as the first test-tube baby, many people assume ominous results when humans venture beyond what a god or nature has allotted them.
Hollywood can largely be forgiven. A movie about AI being used to seamlessly improve a car dealership’s algorithms would likely not be a blockbuster even if you spotted it Alec Baldwin and Renée Zellweger. The mainstream press, however, has no such excuse for its sensationalism. One example of the media going overboard is its handling of an open letter about AI’s future penned by the Future of Life Institute. The letter read in part, “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
According to Popular Science, this measured idea was turned into something more disconcerting. Headlines blared, “Artificial intelligence experts sign open letter to protect mankind from machines” and “Experts pledge to rein in AI research.” Contributing to the angst are Elon Musk and Stephen Hawking, two giants who stray beyond their areas of considerable expertise to warn of AI calamities. That they are speaking outside their normal fields is not a reason to dispute what they are saying – no genetic fallacies here – but their lack of substantiation and support is the issue.
The panic most often takes the form of contemplating what happens when the machines humankind has invented reach the Singularity: the hypothetical moment at which an AI becomes capable of improving itself, with each improved version better at improving itself in turn. That runaway capability, in terrifying theory, could be used to enslave, destroy, or at least inconvenience us.
But at what point would this be possible, and what precisely is AI? Per the Oxford Dictionary, AI refers to computer systems that are able to perform tasks that previously required human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
The Internet holds more information than the most knowledgeable person who ever lived, by a very comfortable margin. But there is a difference between knowledge and intelligence, so the Internet would not by itself be AI, though an Internet that could search itself might be. It can, however, be used to facilitate other AI capabilities, such as those outlined by Oxford.
But this is getting way, way ahead of ourselves and our technology. Computer scientist Oren Etzioni explained in Popular Science why the Singularity is a long way off, if it’s even plausible.
“We’ve had some real progress in areas like speech recognition, self-driving cars and AlphaGo,” he said. “But we have many other problems to solve in creating artificial intelligence, including reasoning. For instance, a machine would have to be able to understand that 2+2=4 and not just calculate it. Natural language understanding is another example. Even though we have AlphaGo, we don’t have a program that can read and fully understand a simple sentence. The true understanding of natural language, the breadth and generality of human intelligence, our ability to both play Go and cross the street and make a decent omelet are all hallmarks of human intelligence. All we’ve done today is develop narrow savants that can do one little thing super well.”
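Etzioni’s 2+2 example is worth making concrete. Below is a purely illustrative Python sketch, a hypothetical stand-in rather than any real AI system, that “calculates” arithmetic by mechanically walking a parse tree. It reliably prints 4, yet nothing in it corresponds to knowing what a quantity is or why the answer holds:

```python
import ast
import operator as op

# A toy calculator: pure symbol manipulation, in the spirit of Etzioni's
# point. It walks the parse tree of an expression and applies operators.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul}

def calculate(expression: str) -> int:
    """Mechanically evaluate an arithmetic expression -- no 'understanding'."""
    def ev(node: ast.AST) -> int:
        if isinstance(node, ast.Constant):   # a bare number, e.g. 2
            return node.value
        if isinstance(node, ast.BinOp):      # e.g. 2 + 2
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

print(calculate("2+2"))  # 4 -- but the program cannot say what '2' means
```

The gap between executing that tree walk and understanding arithmetic is the gap Etzioni is pointing at: today’s systems are narrow savants of exactly this kind, just at vastly larger scale.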
Besides the Singularity being a long way off, if it is possible at all, we cannot say with certainty that it would even result in what alarmist headlines suggest. Would it be detrimental in the form of Asimov-defying robots, would it be beneficial like the Jetsons’ maid, or would it be something neutral, like AI keeping itself entertained because we are too slow to be of interest to it?
The entire concept is predicated on well-above-average human intelligence being achieved, perhaps even the accumulation of as much intelligence as is possible. As such, AI could resolve conundrums we never considered solvable or never knew existed. This super-advanced knowledge could include realizing the benefits of altruism, causing AI to gift us with immortality, beyond-warp-speed travel, and the ability to levitate objects so we can retrieve the Doritos without getting up.
Or maybe none of this happens, good or bad, so for now, there’s no reason to arrest a developing technology.
For the risk to become real, a sequence of ‘ifs’ would have to occur (a toy sketch of how such a chain compounds follows the list):
1. Scientists would have to create a human equivalent of AI.
2. This hypothetical HAL would need to achieve a full understanding of how its inner workings function.
3. The AI would need both the desire and the means to improve itself. For instance, it might gain the knowledge of how to build a better version but lack the requisite appendages to do so.
4. If achieved, this self-improvement would need to continue until it reached a still-undefined superintelligence.
5. The AI would need to accidentally or intentionally start using this superintelligence to annihilate us.
6. In the decades or centuries leading up to this, our top scientists and computer programmers would need to have failed to account for the possibility or to put an effective safety valve in place.
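To see why that chain matters, treat it, very crudely, as a conjunction of independent conditions. The probabilities below are placeholders invented purely for illustration (no one knows the real values), but the structure of the arithmetic is the point: six links, each generously given coin-flip odds, still multiply out to under two percent.

```python
from math import prod

# Placeholder odds for the six 'ifs' above -- invented for illustration
# only, and simplistically treated as independent. The lesson is the
# multiplicative structure, not the numbers themselves.
steps = {
    "human-equivalent AI is created": 0.5,
    "it fully understands its own workings": 0.5,
    "it has the desire and the means to improve itself": 0.5,
    "self-improvement continues to superintelligence": 0.5,
    "it turns that superintelligence against us": 0.5,
    "every human safeguard fails": 0.5,
}

print(prod(steps.values()))  # 0.015625 -- about 1.6%, even at coin-flip odds per link
```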
To be fair, working on the issue outlined in number six is what the alarmists are getting at. But right now we are so far from this that we wouldn’t know how to approach the problem: we don’t know what form a malevolent AI would take or where to start working against it. Science works best when it concerns itself with what is observable, knowable, and testable, and those qualifiers currently leave no room for plotting a preemptive strike against an invading android army.