University of Hertfordshire develops universal concept to regulate robot behaviour
Artificial intelligence experts from the University of Hertfordshire, Dr Christoph Salge and Professor Daniel Polani, have developed a concept that could lead to a new set of generic, situation-aware guidelines to help robots work and co-exist safely alongside humans.
Empowerment, a concept developed over the course of twelve years, is presented today (Thursday 29 June) in the journal Frontiers in Robotics and AI as a potential replacement for Asimov’s celebrated Three Laws of Robotics – the most famous set of guidelines for governing robotic behaviour to date.
The paper shows how Empowerment has the potential to equip a robot with guidelines or motivations that cause it to a) protect itself and keep itself functioning b) do the same for a human partner c) stick around and follow the human’s lead. In the future this principle could be implemented on a range of robots that interact closely with humans in challenging environments, such as elder care robots, hospital robots, self-driving cars or exploration robots.
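The three motivations above can be read as three empowerment terms that the robot trades off: its own empowerment (keep itself functioning), the human's empowerment (protect the human), and the human's empowerment over the robot (stay responsive to the human's lead). A minimal sketch of such a combined drive, assuming a simple weighted sum – the function name and weights are illustrative, not the paper's formulation:

```python
def combined_drive(robot_emp, human_emp, human_over_robot_emp,
                   weights=(1.0, 1.0, 1.0)):
    """Illustrative weighted sum of three empowerment terms:
    a) the robot's own empowerment (keep itself functioning),
    b) the human's empowerment (keep the human safe and capable),
    c) the human's empowerment over the robot (stay responsive)."""
    w_a, w_b, w_c = weights
    return w_a * robot_emp + w_b * human_emp + w_c * human_over_robot_emp

# The robot would then choose actions that maximize this combined quantity;
# raising w_b, say, biases it towards protecting the human's options.
```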
Empowering robots to change their environment
Borrowed from sociology and psychology, the term empowerment denotes the opposite of helplessness: the ability to change one’s environment and to be aware of that possibility. Over the past twelve years, University of Hertfordshire researchers have been developing ways to translate this social concept into a quantifiable, operational mathematical language, endowing robots with a drive towards being empowered.
The empowerment principle states that an agent should attempt to keep its options open: it will try to move to states of its world from which it can reliably reach the largest number of further states. Since its introduction in 2005, researchers have generalized the empowerment principle and applied it to various scenarios. The resulting behaviours are surprisingly “natural” in many cases, and typically require the robot to know only the dynamics of its world – no specialized Artificial Intelligence behaviour needs to be coded for the particular scenario.
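The principle can be made concrete in a toy setting. Empowerment is, in general, the channel capacity between an agent's action sequences and the states they lead to; in a deterministic world it reduces to the logarithm of the number of distinct states reachable within n steps. A minimal sketch, assuming a hypothetical 5×5 grid world (the grid, walls and action set are illustrative, not from the paper):

```python
from itertools import product
from math import log2

# Toy 5x5 grid world (an assumption for illustration, not from the paper).
SIZE = 5
WALLS = {(2, 1), (2, 2), (2, 3)}
ACTIONS = {
    "stay":  (0, 0),
    "up":    (0, -1),
    "down":  (0, 1),
    "left":  (-1, 0),
    "right": (1, 0),
}

def step(state, action):
    """Deterministic transition: move unless a wall or the edge blocks it,
    in which case the agent stays in place."""
    x, y = state
    dx, dy = ACTIONS[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in WALLS:
        return (nx, ny)
    return state

def empowerment(state, horizon):
    """n-step empowerment of a state. In this deterministic world it is
    log2 of the number of distinct states reachable in `horizon` steps."""
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return log2(len(reachable))

# An open cell offers more reliably attainable options than a corner:
print(empowerment((1, 1), 2))  # 3.0  (8 reachable states)
print(empowerment((0, 0), 2))  # ~2.585  (6 reachable states)
```

An empowerment-driven agent would prefer the open cell, and – without any hand-coded rule – would avoid becoming trapped, which is why the resulting behaviour often looks "natural".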
Empowerment has also already begun to be adopted by pioneers in artificial intelligence, such as Google DeepMind.
Need for ethical standards and guidelines for robots
Dr Christoph Salge, Research Fellow at the University of Hertfordshire, said: “There is currently a lot of debate on ethics and safety in robotics, including a recent call for ethical standards or guidelines for robots. In particular, robots need to be guided by some form of generic, higher-level instruction if they are expected to deal with increasingly novel and complex situations in the future – acting as servants, companions and co-workers.
“In the challenging scenarios of the future, we will not be able to rely on a clearly defined functionality that requires robots to be safely separated from humans, or the scenarios to be simplistic or very well defined in advance.”
“Imbuing a robot with these kinds of motivations is difficult, because robots have problems understanding human language, and specific behaviour rules can fail when applied to different contexts. For example, many robots have automatisms that make them stop moving whenever they encounter resistance – a typical safety feature to avoid damaging themselves or injuring a human. But there might be situations where a robot should actually move to create a safer space: for instance, to move something away from the human, to get out of the human’s escape route, or to actively block the human from stepping onto a dangerous path.”
“From the outset, formalising this kind of behaviour in a generic and proactive way poses a difficult challenge. We believe that our approach can offer a solution.”
Daniel Polani, Professor of Artificial Intelligence at the University of Hertfordshire, added: “As we toyed with the idea of using empowerment in more complex situations, we realized that several of the original goals of the Three Laws of Robotics by Asimov might be addressable in the context of empowerment.
“While much of the public discourse is about how difficult or impossible it is to rein in robots’ behaviour – let alone to keep robots, in the most naive sense, ‘ethical’ – in the paper we discuss possibilities for mapping such requirements into the formal and operational language of empowerment.”