As headlines profess the coming age of robots, fear of job insecurity and technological overdependence continues to rise. However, as voices lamenting the onslaught of artificial intelligence crowd the conversation, basic elements of our understanding of this technological revolution are neglected. What exactly is AI, and how can it be used? What are the benefits of growing automation? And how warranted are the concerns some have about the swift advances in robotics?
Award-winning author Thomas Ramge is here to separate fact from fad with his most recent book, Who’s Afraid of AI?: Fear and Promise in the Age of Thinking Machines. Today, Ramge answers the AI-related questions most of us are burning to ask and gives a sneak peek at the revelatory information in Who’s Afraid of AI?.
Your book’s subtitle aligns AI with “thinking machines.” What allows AI to think, and how does that differ from a regular machine?
Thomas Ramge: AI does not think as humans do, with our ability to find creative solutions when presented with a problem. So-called intelligent machines simulate human thinking in fairly narrow contexts, using heaps of data and the tools of statistics. They learn from examples and then apply what they’ve learned within a given framework. That is a giant leap in the history of information-technology development. Artificial intelligence is the next step in automation. Heavy equipment has done our dirty work for a long time. Manufacturing robots have been getting more adept since the 1960s. Until now, however, IT systems have only assisted with the most routine knowledge work. But with artificial intelligence, machines are now making complex decisions that only humans had been able to make. If both the underlying data and the decision-making framework are sound, AI systems will make better decisions more quickly and less expensively than truck drivers, administrative staff members, sales clerks, doctors, investment bankers, and human resource managers, among others.
Why is it important to start educating ourselves on AI?
TR: Artificial intelligence is having its Kitty Hawk moment. After many years of relatively slow and underwhelming progress, the technology is finally starting to perform; now, a cascade of breakthroughs—from face recognition to personal digital assistants, from autonomous driving to health-diagnosis tools—is flooding the market, with many more in the works. Anyone who wants to explore the opportunities and risks of a new technology must first understand the basics. What is artificial intelligence, anyway? What is it capable of today, and what will it be capable of in the foreseeable future? In my book, I’m searching for comprehensive and comprehensible answers amid the extreme scenarios—whether techno-utopian or apocalyptic—getting so much attention these days.
What types of AI do you use?
TR: Obviously there’s plenty of machine learning built into my smartphone, which I use far too much, by the way. The phone unlocks with face recognition, and sometimes I ask Google Assistant for directions. On the shopping apps I use, the recommendation systems are learning from the data I generate in order to get to know me better and make me buy more stuff. Recently I went to a dermatologist who fed an AI system a picture of a rather dark mole on my arm. The system declared the spot was not malignant and, thankfully, the doctor agreed. I can’t wait for autonomous driving, though. I just hate to waste time behind the steering wheel. And, as I’m not a very good driver, I trust that an intelligent car will statistically make much safer driving decisions than I would.
Your previous book, The Global Economy as You’ve Never Seen It, gives a visual crash course on international business and finance. How do you see AI factoring into our global economy?
TR: AI will transform many industries all over the world by taking automation to the next level. It will especially affect white-collar knowledge workers. They will experience what blue-collar workers have been experiencing for quite a while: Machines can take over jobs that seemed safe for humans. Meanwhile, more capable and intelligent robots will bring some production back to countries with high labor costs, partly reversing the offshoring effects caused by globalization in the last three decades. And at the same time, the rise of AI technology will further speed up the rise of China as a global economic superpower. China is already an AI superpower thanks to massive investment and the humongous amounts of data that Chinese users generate. Many Chinese AI applications are arguably superior to their Silicon Valley equivalents. This will show in economic figures soon and might give the current trade disputes between the US and China a new spin.
What’s the most interesting AI development you’ve seen? What development do you wish would happen?
TR: I hope AI will bring steep advances in medicine and pharmacology. At present, machine learning is our best bet for better cancer treatments and for new drugs that might at least slow down diseases we have not yet found any cure for, such as dementia.
The book examines the theory that AI could one day surpass humans. Have you ever envisioned a possible future where AI rules over humanity? Say that future does occur: Do you think AI would be a benevolent or malevolent ruler?
TR: The good news first: Despite all the talk about an emerging superintelligence, artificially intelligent systems will not enslave humanity in the foreseeable future. Nobody knows what computers will be able to do in two hundred years, but for now, computer scientists know of no technological path that could lead to a superhuman, artificial dictator wiping out humanity and keeping a few of our species in a zoo for superintelligent machines’ amusement. The end of the world has been postponed once again. That said, the question of machine control will become increasingly important and must be considered in every further development of AI systems. At a certain point, humans might need to build switch-off mechanisms to keep machines in check, just as nuclear power plants need such safety features. But for now, I consider it much more likely that human stupidity, rather than artificial intelligence, will wipe us off the planet—by instigating nuclear war, toying with extremely dangerous viruses, or remaining reluctant to address climate change. No, we don’t have to be afraid of science-fiction scenarios where machines take control. But we do have to be wary of humans using AI to manipulate and oppress other humans. That is not a challenge for future generations, but one for the here and now.
What’s the best way to guard against the potential misuse of AI by corporations and governments?
TR: That’s a very interesting question, as governments can guard against commercial misuse through regulation—but, of course, that doesn’t work the other way around. Whether governments or corporations use AI applications, all of them must be fair and safe: they must not discriminate against individuals or social groups, nor should they intrude on privacy inappropriately or without the consent of their users. Governments all over the world must level up their digital IQ to find smart ways to regulate AI—in a responsible fashion, without throttling AI innovation. The European General Data Protection Regulation (GDPR) is far from perfect or comprehensive, but it is a first step toward human-centered regulation in the age of AI. The even trickier question: Who can guard against potential misuse of AI by governments? In a liberal democracy, the only possible answer is we, the people. And we will have to rely on strong constitutional institutions to safeguard individual rights. AI technology offers governments a whole new tool kit for policing and manipulating citizens. In many parts of the world, autocratic regimes already make plentiful use of AI to sustain or enlarge their power. In democracies, we must make sure that anti-democratic and populist movements don’t use technology to drag us down a slippery slope toward autocracy, undermining our values and our freedom.
Do you see an ethical dilemma with AI’s growing presence in our lives?
TR: No. By definition, a dilemma is a situation you cannot resolve. We encounter many dilemmas over the course of our lives, and that will not change with AI—not for better and not for worse. But by making intelligent use of intelligent machines, we will be able to solve problems we haven’t been able to cope with before. AI will help humans make better decisions in situations where, in the past, we performed poorly. And in many contexts, using intelligent machines will be an ethical imperative. Autonomous driving is a prime example. Hundreds of thousands of people worldwide die each year in car accidents. When we delegate driving to machines, self-driving cars will roam the streets more safely than flawed human drivers do. The same goes for AI systems that surpass human oncologists’ ability to diagnose cancer. That said, amid the current AI hype, we should remind ourselves: AI systems that are skillfully programmed and fed the proper data are useful experts within narrow specialties. But they remain idiot savants. They lack the ability to see the big picture. The important decisions—including the decision about how much machine assistance is appropriate—remain human ones. Artificial intelligence cannot relieve us of the burden of thinking.