Yasuo Kuniyoshi is a man with an extraordinary plan. Kuniyoshi, a professor at the University of Tokyo’s Graduate School of Information Science and Technology, has been attempting to produce an utterly convincing artificial being for the past 30 years.
“A robot,” he says, “that has developed the real ability to understand correctly what people are saying, and is able to converse and interact with them naturally, just as humans do with each other, based on its own experiences and bodily sensations.”
I’ve been thinking about Kuniyoshi’s work since seeing the recent Hollywood remake of “Ghost in the Shell.” The original anime from 1995 was hugely influential and, together with “Akira” and “Spirited Away,” is one of the best-known Japanese anime films in the West. Set in 2029, “Ghost in the Shell” depicts a world where cyborg enhancement and artificial intelligence are commonplace. It is a world where the lines between the organic and the electronic are growing ever more blurred.
It’s a future we could very well be moving toward. Last month, Elon Musk founded a new company called Neuralink, which aims to develop what it calls “neural lace” technology to allow people to communicate directly with electronic machines. The lace is envisioned as a mesh of electrodes implanted in the brain. It would mean that data could be uploaded to the organic brain from computers, and thoughts could be downloaded onto electronic hardware.
Musk, a billionaire who co-founded PayPal, already runs Tesla and SpaceX, and is known for his enthusiasm for big science projects. Perhaps his launch of Neuralink could push cyborg enhancement technology to another level.
We see a Hollywood vision of this in the new “Ghost in the Shell” movie. The main character — The Major, played by Scarlett Johansson — is the most extreme kind of cyborg one can be: Her entire body has been replaced by a cyborg shell, and only her brain remains of her organic former self. This is the “ghost” in the shell — her consciousness, the awareness people have that differentiates them from machines. Many people assume that consciousness is a real thing and not an illusion, and the mystery of whether a machine or an artificial creation can ever have consciousness has been explored in much of our storytelling, from Mary Shelley’s “Frankenstein” to HBO’s science fiction thriller “Westworld.”
There are currently a number of different “species” of artificial intelligence. Some specialize in playing chess, others in driving cars or trading shares on stock markets. Siri, the AI we speak to on our iPhones, is very good at recognizing our voices and interpreting our questions. However, it can’t drive a car or play chess.
At present, AI cannot adapt to different tasks. Since adaptation in this sense is one of the defining characteristics of human intelligence, Kuniyoshi wants to build AIs that are more adaptable or, in other words, can think in the same way as humans. “It is necessary to understand the nature of human intelligence and the basic principles that generate human behavior,” he says.
Kuniyoshi believes that you need a body to become truly intelligent. This is the concept of embodied intelligence, the idea that our thoughts are influenced by — and maybe even determined by — our relationship with the physical world. As he and his team put it when describing a bipedal robot they had built, “Human behavior arises more as a result of the constraints and interactions governed by a person’s physical traits and their environment than being something that is regulated by the central nervous system.”
It’s an extraordinary idea. It might mean that a disembodied brain — say, a brain kept alive in a tank — would not be conscious. The idea has prompted a re-examination of metaphors we use all the time: “I am on top of the situation,” “he/she is under my control” and “I’m feeling up today.” When people think about the future, they lean slightly forward. If embodied intelligence is correct, then human head transplants — such as the one planned by maverick Italian surgeon Sergio Canavero — might work (in the sense that consciousness would be maintained). But would a head kept alive without a body have a sense of self? (It reminds me of stories of executions in which the head of a guillotined man blinks and the mouth opens and shuts.) In “Ghost in the Shell,” The Major maintains her consciousness even when her brain is implanted into a cyborg body. (Happily for the brain, the body is in the shape of Scarlett Johansson.)
After building a humanoid robot that was able to stand and jump, Kuniyoshi started studying how humans develop intelligence in the womb. First, he created a virtual fetus and had it gestate in a computer simulation. The virtual fetus spontaneously showed movements similar to those of a real baby. He then made a computer model of a fetus in the 32nd week of gestation to study how the brain receives information. This extraordinary work is building a picture of how human awareness (if not, precisely, consciousness) might start to emerge on the basis of feedback from the fetus’ own body. He has also made a “toddler bot” called Noby, again to understand how we learn.
I’m taking consolation from Kuniyoshi’s work. Unfortunately, I felt the remake of “Ghost in the Shell” didn’t deliver the deep, mind-boggling material that the Japanese original did. The story and the characters were flat, and for a movie about the human soul it had surprisingly little of its own. The original anime had a spooky, abstract quality that the remake didn’t manage — it was a shell without a ghost. The good news is that Kuniyoshi is pursuing work in the real world with more than enough thought-provoking intellectual depth to fill the gap.
Rowan Hooper is the news editor of New Scientist magazine.
http://www.japantimes.co.jp/news/2017/04/15/national/science-health/crea...