A Step Closer to Self-Aware Machines
Robots that are self-aware have been science fiction fodder for decades, and now we may finally be getting closer. Humans are unique in being able to imagine themselves—to picture themselves in future scenarios, such as walking along the beach on a warm sunny day. Humans can also learn by revisiting past experiences and reflecting on what went right or wrong. While humans and animals acquire and adapt their self-image over their lifetime, most robots still learn using human-provided simulators and models, or by laborious, time-consuming trial and error. Robots have not learned to simulate themselves the way humans do.
Columbia Engineering researchers have made a major advance in robotics by creating a robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot does not know if it is a spider, a snake, an arm—it has no clue what its shape is. After a brief period of “babbling,” and within about a day of intensive computing, their robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its own body. The work is published today in Science Robotics.
Video: the Columbia Engineering robot “babbling,” building a self-simulation within about a day of intensive computing, and using it to adapt to new tasks and to damage in its own body.
To date, robots have operated using models explicitly built for them by humans. “But if we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it’s essential that they learn to simulate themselves,” says Hod Lipson, professor of mechanical engineering, and director of the Creative Machines lab, where the research was done.
For the study, Lipson and his PhD student Robert Kwiatkowski used a four-degree-of-freedom articulated robotic arm. Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate: the robot did not know what it was or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters. The robot then performed a pick-and-place task in a closed-loop system, using the internal self-model to recalibrate its position between each step along the trajectory. With this closed-loop control, the robot was able to grasp objects at specific locations on the ground and deposit them into a receptacle with 100 percent success.
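The pipeline described above (random babbling, fitting a self-model to the collected trajectories, then closed-loop control that recalibrates at every step) can be sketched in miniature. The actual study trained a deep network on a physical four-degree-of-freedom arm; the toy below substitutes a hypothetical two-joint planar arm and a least-squares model over trigonometric features, so every name and number here is illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8  # link lengths of a toy 2-joint planar arm

def forward_kinematics(q):
    """Ground truth: where the 'physical' arm's end effector really is."""
    x = L1 * np.cos(q[..., 0]) + L2 * np.cos(q[..., 0] + q[..., 1])
    y = L1 * np.sin(q[..., 0]) + L2 * np.sin(q[..., 0] + q[..., 1])
    return np.stack([x, y], axis=-1)

# 1) "Babbling": random joint angles and the positions they produce
#    (the study collected ~1000 trajectories of 100 points each).
Q = rng.uniform(-np.pi, np.pi, size=(1000, 2))
X = forward_kinematics(Q)

# 2) Fit a self-model from the babbling data. Stand-in: least squares
#    on trig features (the study used a deep network with no such prior).
feats = np.column_stack([np.cos(Q[:, 0]), np.sin(Q[:, 0]),
                         np.cos(Q.sum(axis=1)), np.sin(Q.sum(axis=1)),
                         np.ones(len(Q))])
W, *_ = np.linalg.lstsq(feats, X, rcond=None)

def self_model(q):
    f = np.array([np.cos(q[0]), np.sin(q[0]),
                  np.cos(q.sum()), np.sin(q.sum()), 1.0])
    return f @ W

def numerical_jacobian(f, q, eps=1e-5):
    """Finite-difference Jacobian of the learned self-model."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (f(q + dq) - f(q - dq)) / (2 * eps)
    return J

# 3) Closed-loop control: observe the real position at each step and
#    nudge the joints using the self-model's Jacobian.
def reach(target, q, steps=200, gain=0.5):
    for _ in range(steps):
        pos = forward_kinematics(q)          # external feedback
        err = target - pos
        J = numerical_jacobian(self_model, q)
        q = q + gain * np.linalg.pinv(J) @ err
    return q

target = np.array([1.2, 0.6])
q_final = reach(target, q=np.array([0.3, 1.0]))
print(np.linalg.norm(forward_kinematics(q_final) - target))  # small residual
```

Removing the `forward_kinematics(q)` feedback call and stepping purely on the self-model's own predictions would correspond to the open-loop condition the article describes next, where accumulated model error goes uncorrected.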
Even in an open-loop system, which involves performing a task based entirely on the internal self-model, without any external feedback, the robot was able to complete the pick-and-place task with a 44 percent success rate. “That’s like trying to pick up a glass of water with your eyes closed, a process difficult even for humans,” observed the study’s lead author Kwiatkowski, a PhD student in the computer science department who works in Lipson’s lab.
The self-modeling robot was also used for other tasks, such as writing text with a marker. To test whether the self-model could detect damage, the researchers 3D-printed a deformed part to simulate damage, and the robot detected the change and re-trained its self-model. The new self-model enabled the robot to resume its pick-and-place tasks with little loss of performance.
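The damage experiment can be mirrored in the same toy setting: perturb the "physical" arm (a shortened link stands in for the deformed 3D-printed part, an assumption purely for illustration), watch the self-model's prediction error jump, and retrain on fresh babbling data. Again a minimal numpy sketch, not the paper's deep-learning setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_arm(l1, l2):
    """Ground-truth kinematics of a toy 2-joint planar arm."""
    def fk(Q):
        x = l1 * np.cos(Q[:, 0]) + l2 * np.cos(Q[:, 0] + Q[:, 1])
        y = l1 * np.sin(Q[:, 0]) + l2 * np.sin(Q[:, 0] + Q[:, 1])
        return np.stack([x, y], axis=-1)
    return fk

def fit_self_model(fk, n=1000):
    """Babble randomly, then fit a least-squares self-model."""
    Q = rng.uniform(-np.pi, np.pi, size=(n, 2))
    F = np.column_stack([np.cos(Q[:, 0]), np.sin(Q[:, 0]),
                         np.cos(Q.sum(axis=1)), np.sin(Q.sum(axis=1)),
                         np.ones(n)])
    W, *_ = np.linalg.lstsq(F, fk(Q), rcond=None)
    def model(Q):
        F = np.column_stack([np.cos(Q[:, 0]), np.sin(Q[:, 0]),
                             np.cos(Q.sum(axis=1)), np.sin(Q.sum(axis=1)),
                             np.ones(len(Q))])
        return F @ W
    return model

def model_error(model, fk, n=200):
    """Mean discrepancy between self-model prediction and reality."""
    Q = rng.uniform(-np.pi, np.pi, size=(n, 2))
    return np.abs(model(Q) - fk(Q)).mean()

healthy = make_arm(1.0, 0.8)
model = fit_self_model(healthy)
baseline = model_error(model, healthy)

damaged = make_arm(1.0, 0.55)        # "deformed part": a shortened link
drift = model_error(model, damaged)  # error jumps: damage is detected

model = fit_self_model(damaged)      # re-train on fresh babbling data
recovered = model_error(model, damaged)
print(baseline, drift, recovered)
```

The detection signal is nothing exotic: the old self-model suddenly mispredicts reality, and retraining on new self-collected data restores agreement, which is the loop the article describes.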
Lipson, who is also a member of the Data Science Institute, notes that self-imaging is key to enabling robots to move away from the confines of so-called “narrow AI” toward more general abilities. “This is perhaps what a newborn child does in its crib, as it learns what it is,” he says. “We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.”
Lipson believes that robotics and AI may offer a fresh window into the age-old puzzle of consciousness. “Philosophers, psychologists, and cognitive scientists have been pondering the nature of self-awareness for millennia, but have made relatively little progress,” he observes. “We still cloak our lack of understanding with subjective terms like ‘canvas of reality,’ but robots now force us to translate these vague notions into concrete algorithms and mechanisms.”
Lipson and Kwiatkowski are aware of the ethical implications. “Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control,” they warn. “It’s a powerful technology, but it should be handled with care.”
The researchers are now exploring whether robots can model not just their own bodies, but also their own minds, i.e. whether robots can think about thinking.