Written by IEEE | November 8, 2016 | Updated: March 30, 2017
Ben Goertzel of Hanson Robotics and Andra Keay of Silicon Valley Robotics took the AutoTech – Talk Robot stage at Web Summit to debate whether robots should bond with humans emotionally and socially.
Goertzel is in favor of positive interaction between humans and AI. He compared how humans develop to how robots are built: we are programmed by evolution, while robots are programmed by humans. His argument was that humans can intelligently program robots to learn loving behavior.
On the topic of robots with human-like expressions and aesthetics, Goertzel explained that "it's a matter of personal taste and preference," and that he understands people who prefer film-like robots just as he understands those who prefer human-like ones.
Andra Keay took the opposite stance, explaining that she feels humanoid robots are a "dangerous, deceptive and evolutionary dead-end." Keay's argument stems from regulation: just because we can build humanoid robots doesn't mean we should. Questions of ethics and privacy arise when your likeness can be transferred to a robot's appearance. Keay called into question who is on the other end of a humanoid robot and where its data goes, saying she fears that humanoid robotic pursuits could damage the broader robotics industry.
Goertzel countered that regulating consumer products that are in demand is ultimately a bad idea, one he called more terrifying than humanoid robots themselves. His ultimate defense is founded on freedom of consumer choice, which he argued will help with the overall transition to AI in our everyday lives.
Keay disagreed on the principle of over-promising: the technology simply cannot live up to the expectations placed on it at this time. She pointed to C-3PO and R2-D2-style robots from the Star Wars franchise as a more transparent option for consumer-level robots.
Both panelists agreed on the importance of democratizing technology, with Keay going on to explain just how expensive humanoid robots are, and how that cost alone introduces privilege and disadvantage.
Goertzel's closing argument restated his initial stance on personal freedom. Keay's position was not so dissimilar, routing back to a foundation of ethical design guidelines. She argued for transparency, saying that humanoid robots shouldn't be built unless they become essential.