A friend asked me yesterday when artificial intelligence is going to take over the world and kill all humans. It’s an oddly casual question for a summer barbecue party, and one I predict will come up more and more frequently in the coming years.
AI was uninteresting for a long time. Yes, it was hot for a while in the 90s, but then it under-delivered for so long that it acquired a bad reputation. Even when it did work, it was only doing computer stuff like playing silly games or flying airplanes: none of that feels really human, or seems all that hard in the first place.
Now, however, artificial intelligence is capable of mind-blowing things. It understands what you say to it. It responds. It translates what you see or hear, on the fly.
Have you ever seen kids say “Siri, do you love me?” It’s not uncommon at all. (And, apparently, you don’t know what sexy means until you’ve heard a guy with a slight Indian accent slowly enunciate “I want to have sex with you” to his texting app.)
Next-generation AI is all around us. It drives our cars, watches over our homes and families, and translates text on our phones.
The problem with humans and our inventions is that we have a tendency to mess things up on the first try. The more power we give to semi-perfect artificial intelligence, the more damage it can cause: Tesla’s Autopilot decided to drive under a trailer last week, killing the driver. And Google’s Nest thermostat seems to have an appetite for freezing people’s houses every now and then.
In the big picture, AI does make our lives safer. The real problem is that when it goes wrong, we don’t even understand what just happened. In normal accidents we tend to know what was going on: the thermostat broke. The engine stalled. The driver fell asleep.
When it comes to AI, most of the time we have no idea what was going on in the computer’s “brain”.
This isn’t a human move
Whenever your phone’s camera uses face detection to pick the area to focus on, it runs simple algorithms that look for a face’s core features, like skin tones or the position of the eyes.
Modern AI goes many steps further: it can also recognise whether you have a hat on, whether you’re smiling, or whether it isn’t you at all but a dog. Or an orange. Modern AI can actually tell what it sees in a picture.
Until it can’t. In an experiment at the University of Wyoming, researchers were able to fool cutting-edge deep neural networks using simple, randomly generated images. For example, artificial intelligence looked at this first picture and said, with over 99 percent certainty: it’s a centipede.
What’s interesting here is not that researchers can take a state-of-the-art image recognition algorithm and trick it into being wrong. What’s interesting is that most of the time no one can tell where exactly it went off track.
When you show a picture to a kid and they say something funny, we can understand how their brain worked: it’s not a cat, it’s a lion. With deep neural networks, even when they are right, we don’t know exactly why they are right. We don’t share their context.
Deep neural networks learn in a similar way to children. You show a picture to the computer and say what’s in it. Once the algorithm has seen enough photos of the same thing, it will be able to recognise a similar object in a new picture too. However, the exact model it builds while learning is a black box for us: we can access it, we can inspect it, but we won’t understand what it all means, because it’s not the way we think.
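To make the “show it labeled examples until it learns” idea concrete, here is a toy sketch in Python: a single perceptron (the simplest possible neural unit, far simpler than a real deep network) learns the OR function from labeled examples. The examples, learning rate, and training loop are all illustrative choices of mine, not anything from a real image-recognition system.

```python
# Toy perceptron: learn OR from labeled examples, the way a network
# "learns" from labeled pictures. All numbers here are illustrative.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # weights: the "knowledge" the machine accumulates
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                        # repeated exposure to the examples
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                 # nudge the weights only when wrong
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, it classifies every example correctly...
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in examples])
```

Even in this tiny case, the end result is just a few learned numbers in `w` and `b`; they produce the right answers, but they don’t “explain” OR to a human. Scale that up to millions of weights and you have the black box.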
Go, the ancient Chinese game, has long been viewed as the most challenging classic game for artificial intelligence: the number of possible board positions and moves is so enormous that it can’t be brute-forced with standard algorithms.
People can navigate Go using intuition, which is something computers don’t have. Beating a top human player was therefore an unsolved problem for over twenty years, until 2016, when the computer program AlphaGo beat grandmaster Lee Sedol four games to one.
Google’s AI didn’t just play better than a human; it played in a way no human ever would. On move 37 of the second game, AlphaGo dropped a black stone into a seemingly empty area of the board. “That’s a very surprising move,” commentator Michael Redmond explained. “I thought it was a mistake. It’s something that I don’t think I’ve seen in a top player’s game.”
It’s very natural to be afraid of something we don’t understand. Especially when it develops extremely fast.
Humans are quite good at forecasting things that are linear in nature. If you know that your friend covered a kilometre in twelve minutes, you can estimate pretty well where they’ll be in half an hour.
It’s very difficult for us, however, to comprehend the results of non-linear processes.
In a well-known fable, a mathematician helps the emperor, who offers him one wish in return. The mathematician takes out a chessboard and asks only for one grain of rice on the first square, two grains on the second, and so on, doubling the number of grains with each square.
The emperor is happy to fulfil the mathematician’s wish, only to learn that it’s a lot more rice than he would think. A lot more rice than most of you reading this would think, in fact: enough to cover the entire Earth.
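You don’t have to take the fable’s word for it; the doubling is easy to check. A quick Python sketch of the sum over the 64 squares:

```python
# Square n holds 2**(n-1) grains; sum over all 64 squares of the board.
total = sum(2 ** (n - 1) for n in range(1, 65))
print(total)  # 18,446,744,073,709,551,615 grains (i.e. 2**64 - 1)
```

Roughly 18.4 quintillion grains: many centuries’ worth of the world’s entire rice harvest, from one innocent-looking doubling rule.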
Since 1965, the average computer has doubled in speed roughly every two years. The smartphone that fits in your pocket is faster than the desktop computer you had ten years ago.
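That two-year doubling is the same chessboard arithmetic in disguise. A back-of-the-envelope sketch (the clean fifty-year window and the exact doubling period are simplifying assumptions, not precise figures):

```python
# Assume one doubling every two years from 1965 to 2015 (an idealisation).
doublings = (2015 - 1965) // 2   # 25 doublings in 50 years
speedup = 2 ** doublings
print(doublings, speedup)        # 25 doublings -> over 33 million times faster
```

Twenty-five doublings sounds modest; a thirty-three-million-fold speedup does not. That’s what non-linear growth does to our intuition.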
Scientists already simulated half a virtual mouse brain on a supercomputer back in 2007, so by today you can pretty much do the same on your laptop.
Your computer works as fast as the brain of a mouse.
While a mouse brain is about three orders of magnitude less complex than a human brain, at this point you shouldn’t be surprised to learn that computers will soon be as fast as your brain. And a couple of years after that, a single computer will be faster than every human brain combined.
Prominent voices have recently sounded the alarm: Stephen Hawking, Bill Gates and Elon Musk are openly talking about regulating potentially superintelligent AI.
When do the infinite monkeys come?
However fast computers evolve, the best feature of the human brain isn’t that it computes things extremely reliably or quickly. That’s why we use calculators to add up numbers. The next-generation AI we experience these days is still only automation.
We’ve been automating and inventing things for thousands of years. Does Google make you stupid? Now you can just look things up on the internet instead of remembering them. The same question arose the first time people started using paper to write down their thoughts.
Don’t use that notebook! It makes you stupid.
It’s not likely that real superintelligence will happen in our lifetime. And even if it does, if there is one thing that humans are genuinely good at, it’s adapting to new environments.
My great-great-grandmother would be terrified in today’s world of 8-lane highways, microwaves and Snapchat, but for me it’s all fine. And while I probably wouldn’t fit in well in the AI-first world of the 2100s, I’m quite sure my n-times-great-grandchildren will feel right at home.
In the very long run, some say that computers will eventually have goals and purposes just like we have. Others say our free will doesn’t even exist.
Either way, computers are not likely to go on killing us anytime soon.
What I’m not quite sure of is what happens when the 3.5 million professional truck drivers in the US, part of the one-in-fifteen workers employed in the logistics industry, lose their jobs over the next ten years, pretty much all at once, to self-driving trucks.
It’s people who hurt people.