Robots can now move safely among us using socially aware navigation

Engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing the general codes of pedestrian conduct.
Courtesy of the researchers

Approach may enable robots to move around hospitals, malls, and other areas with heavy foot traffic.

Just as drivers observe the rules of the road, most pedestrians follow certain social codes when navigating a hallway or a crowded thoroughfare: Keep to the right, pass on the left, maintain a respectable berth, and be ready to weave or change course to avoid oncoming obstacles while keeping up a steady walking pace.

Now engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing these general codes of pedestrian conduct.

In drive tests performed inside MIT’s Stata Center, the robot, which resembles a knee-high kiosk on wheels, successfully avoided collisions while keeping up with the average flow of pedestrians. The researchers have detailed their robotic design in a paper that they will present at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in September.

“Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians,” says Yu Fan “Steven” Chen, who led the work as a former MIT graduate student and is the lead author of the study. “For instance, small robots could operate on sidewalks for package and food delivery. Similarly, personal mobility devices could transport people in large, crowded spaces, such as shopping malls, airports, and hospitals.”

Chen’s co-authors are graduate student Michael Everett, former postdoc Miao Liu, and Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT.

Social drive

In order for a robot to make its way autonomously through a heavily trafficked environment, it must solve four main challenges: localization (knowing where it is in the world), perception (recognizing its surroundings), motion planning (identifying the optimal path to a given destination), and control (physically executing its desired path).
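
As a rough sketch, those four modules can be pictured as a single sense-plan-act cycle. The Python below is purely illustrative; the function and interface names are hypothetical stand-ins, not the team’s actual software.

```python
def navigation_cycle(robot, goal, localize, perceive, plan_motion, control):
    """One pass through the four-module pipeline (all names are hypothetical)."""
    sensors = robot.read_sensors()                 # raw webcam/depth/lidar data
    pose = localize(sensors)                       # 1. where am I in the world?
    obstacles = perceive(sensors)                  # 2. what is around me?
    velocity = plan_motion(pose, obstacles, goal)  # 3. which path should I follow?
    control(robot, velocity)                       # 4. physically execute it
```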

Chen and his colleagues used standard approaches to solve the problems of localization and perception. For the latter, they outfitted the robot with off-the-shelf sensors, such as webcams, a depth sensor, and a high-resolution lidar sensor. For the problem of localization, they used open-source algorithms to map the robot’s environment and determine its position. To control the robot, they employed standard methods used to drive autonomous ground vehicles.

“The part of the field that we thought we needed to innovate on was motion planning,” Everett says. “Once you figure out where you are in the world, and know how to follow trajectories, which trajectories should you be following?”

That’s a tricky problem, particularly in pedestrian-heavy environments, where individual paths are often difficult to predict. As a solution, roboticists sometimes take a trajectory-based approach, in which they program a robot to compute an optimal path that accounts for everyone’s desired trajectories. These trajectories must be inferred from sensor data, because people don’t explicitly tell the robot where they are trying to go.

“But this takes forever to compute. Your robot is just going to be parked, figuring out what to do next, and meanwhile the person’s already moved way past it before it decides ‘I should probably go to the right,’” Everett says. “So that approach is not very realistic, especially if you want to drive faster.”

Others have used faster, “reactive-based” approaches, in which a robot is programmed with a simple model, using geometry or physics, to quickly compute a path that avoids collisions.
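
The article does not name a particular reactive method, but a minimal potential-field-style sketch (a common textbook technique, not necessarily what these systems use) shows why reactive planners are fast: each update is just a little vector arithmetic.

```python
import numpy as np

def reactive_velocity(robot_pos, goal_pos, obstacles, max_speed=1.2):
    """Toy reactive planner: steer toward the goal, push away from close obstacles."""
    to_goal = goal_pos - robot_pos
    velocity = max_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)

    for obstacle in obstacles:                     # obstacle positions from sensors
        away = robot_pos - obstacle
        dist = np.linalg.norm(away)
        if dist < 2.0:                             # only react within two meters
            velocity += away / (dist ** 2 + 1e-9)  # push grows as obstacle nears

    speed = np.linalg.norm(velocity)
    if speed > max_speed:                          # respect the robot's speed limit
        velocity *= max_speed / speed
    return velocity
```

Such a planner reacts instantly, but it treats people as simple points moving predictably, which is exactly the weakness described next.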

The problem with reactive-based approaches, Everett says, is the unpredictability of human nature: people rarely stick to a straight, geometric path, but rather weave and wander, veering off to greet a friend or grab a coffee. In such an unpredictable environment, reactive robots tend either to collide with people or to look as though they are being pushed around, swerving excessively to avoid them.

“The knock on robots in real situations is that they might be too cautious or aggressive,” Everett says. “People don’t find them to fit into the socially accepted rules, like giving people enough space or driving at acceptable speeds, and they get more in the way than they help.”

Training days

The team found a way around such limitations, enabling the robot to adapt to unpredictable pedestrian behavior while continuously moving with the flow and following typical social codes of pedestrian conduct.

They used reinforcement learning, a machine learning approach in which computer simulations train a robot to take certain paths, given the speed and trajectory of other objects in the environment. The team also incorporated social norms into this offline training phase, rewarding the robot in simulations for passing on the right and penalizing it when it passed on the left.
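
The article does not spell out the reward function, but reward shaping of this kind typically looks something like the sketch below; the terms and weights are invented for illustration.

```python
def social_reward(progress, collided, passed_on_right, passed_on_left):
    """Toy per-step reward for simulated training; values are illustrative only."""
    reward = progress            # positive for moving toward the goal
    if collided:
        reward -= 10.0           # strong penalty for any collision
    if passed_on_right:
        reward += 0.5            # encourage the right-hand-pass norm
    if passed_on_left:
        reward -= 0.5            # discourage passing on the left
    return reward
```

Over many simulated episodes, trajectories that respect the norms accumulate higher reward, so the learned policy comes to prefer them.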

“We want it to be traveling naturally among people and not be intrusive,” Everett says. “We want it to be following the same rules as everyone else.”

The advantage of reinforcement learning is that the researchers can run these training scenarios, which take extensive time and computing power, offline. Once the robot is trained in simulation, it can be programmed to carry out the optimal paths identified there whenever it recognizes a similar scenario in the real world.

The researchers enabled the robot to assess its environment and adjust its path every one-tenth of a second. In this way, the robot can continue rolling through a hallway at a typical walking speed of 1.2 meters per second without pausing to reprogram its route.

“We’re not planning an entire path to the goal — it doesn’t make sense to do that anymore, especially if you’re assuming the world is changing,” Everett says. “We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing.”
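
That description maps onto a simple receding-horizon control loop. In the sketch below, robot.observe, policy, and robot.command are hypothetical interfaces, not the authors’ API.

```python
import time

CYCLE = 0.1  # seconds: re-evaluate the world ten times per second

def drive(robot, policy, goal):
    """Sketch of the look / choose-a-velocity / act loop Everett describes."""
    while not robot.at(goal):
        start = time.monotonic()
        state = robot.observe()                # look at what we see
        velocity = policy(state, goal)         # choose a velocity
        robot.command(velocity)                # do that for a tenth of a second
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, CYCLE - elapsed))  # then look at the world again
```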

Crowd control

Everett and his colleagues test-drove the robot in the busy, winding halls of MIT’s Stata Center, where the robot was able to drive autonomously for 20 minutes at a time. It rolled smoothly with the pedestrian flow, generally keeping to the right of hallways, occasionally passing people on the left, and avoiding any collisions.

“We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that,” Everett says. “One time there was even a tour group, and it perfectly avoided them.”

Going forward, Everett says, he plans to explore how robots might handle crowds in a pedestrian environment.

“Crowds have a different dynamic than individual people, and you may have to learn something totally different if you see five people walking together,” Everett says. “There may be a social rule of, ‘Don’t move through people, don’t split people up, treat them as one mass.’ That’s something we’re looking at in the future.”

Learn more: New robot rolls with the rules of pedestrian conduct

Autonomous underground excavation robot with intelligent navigation for urban environments

via UC3M

Researchers from Universidad Carlos III de Madrid (UC3M) are leading the implementation of a new kind of autonomous underground robot with intelligent navigation for urban environments. The system, developed within the framework of the European research project BADGER, aims to become a model for excavation technologies because of its high economic and social impact.

Participating in the project, which is coordinated by the UC3M Robotics Lab, are researchers from Germany, Greece, Italy and the United Kingdom. Its goal is to develop an intelligent system for the autonomous excavation of small-diameter, high-gradient tunnels in urban environments. “The use of innovative localization, mapping and navigation techniques, along with sensors and georadars, will allow them to be adapted to different land surfaces and aid in the analysis of the work environment and decision making in attaining the goals,” stated project coordinator Carlos Balaguer, a full professor in the UC3M Department of Systems Engineering and Automation.

BADGER (roBot for Autonomous unDerGround trenchless opERations, mapping and navigation) is a project that incorporates several innovations. The main one is this new application of robotics to an underground environment; until now, robotics has focused mainly on walking or rolling surface robots and on flying or underwater robots. Autonomous navigation is another of the project’s strong points: all of the sensors, georadars and computers will be integrated into the machine, enabling it to carry out a much more precise and controlled exploration of the terrain. Lastly, the use of ultrasound techniques to perforate the ground, together with a 3D printer on the robot itself to reinforce the tunnel where the cables and tubes lie, plays an essential role.

Sustainability in cities and support for rescue efforts

According to the researchers, BADGER is a distinctly ecological robot because it enables the sustainable transformation of very congested environments such as modern cities. It will have an intelligent system that allows installations such as wiring or piping to be built without digging trenches or rerouting traffic. “Given that the whole process will take place underground, noise pollution and contamination will be reduced,” said the researchers.

The implementation of these advanced robotics technologies, with cognitive and control abilities, has multiple applications. “It will notably increase European competitiveness in search and rescue operations (landslides), mining activities, applications with civilian use like water pipes, gas and fiber optics, exploration techniques, mapping, etc.,” Balaguer pointed out.

Learn more: Intelligent underground robot for urban environments is designed

Volvo’s robot refuse collectors ROAR into life

via Volvo

via Volvo | Gizmag

The ROAR project is aimed at showing how machines can communicate with each other and how, in the future, they will be able to carry out tasks now undertaken by humans

In both an impressive display of innovative technology and a glimpse of a future in which humans could be redundant, Volvo has shown off its Robot-based Autonomous Refuse handling (ROAR) project. The system uses drones to locate refuse bins and robots to collect and empty them.

For the project, Volvo has collaborated with Chalmers University of Technology, Mälardalen University and Penn State University, all of which are part of its Academic Partner Program. Waste management firm Renova is also involved. The aim is to show how machines can communicate with each other and how, in the future, they will be able “to facilitate everyday life in a large number of areas.”

Learn more . . .

The Latest on: Robot-based Autonomous Refuse
  • Meet Mr. Robot, your friendly neighborhood garbage man
    on November 17, 2019 at 4:00 pm

    This particular project is designed to create a robotic trashman. The goal of Robot-based Autonomous Refuse handling or ROAR for short, is to “introduce a robot that, with the help of instructions ...

  • Volvo shows off a prototype of its self-emptying trash can
    on February 27, 2016 at 5:01 pm

    Volvo's ROAR (RObot based Autonomous Refuse handling) project has moved into prototype testing, the company announced earlier this week. The ROAR system is designed to autonomously empty trash cans ...

  • Volvo’s drone-guided garbage-lifting robot is now a working prototype
    on February 27, 2016 at 3:00 am

    The project is dubbed “Roary,” for Robot-based Autonomous Refuse handling, and also involves waste recycling company Renova, as well as universities in Sweden and the US. In the last four months, the ...

  • Watch this drone-guided robot empty the trash
    on February 24, 2016 at 11:05 pm

    The robot was created under Volvo's Robot-based Autonomous Refuse handling (ROAR) project, which the Swedish automaker announced last year. Students from three universities collaborated with Volvo and ...

  • Volvo's robot refuse collectors ROAR into life
    on February 24, 2016 at 4:00 pm

    In both an impressive display of innovative technology and a glimpse of a future in which humans could be redundant, Volvo has shown off its Robot-based Autonomous Refuse handling (ROAR) project. The ...

  • Robots to help with trash collection
    on September 21, 2015 at 5:00 pm

    “It is done without waking the sleeping families and without heavy lifting for the refuse truck’s driver,” said Volvo. The project is called ROAR – Robot-based Autonomous Refuse handling – where the ...

  • Volvo's robots will quietly pick up and empty your garbage bin
    on September 21, 2015 at 1:25 pm

    Volvo has announced a collaboration with companies and universities in Sweden and the US on ROAR (Robot-based Autonomous Refuse handling). The project aims to build robots that will assist garbage ...

  • Volvo imagines robots will someday collect your trash
    on September 21, 2015 at 1:15 pm

    That is, if Volvo's new ROAR project comes to fruition. Robot-based Autonomous Refuse handling, or ROAR, is a joint venture among Volvo, Chalmers University of Technology, Mälardalen University of ...

  • Volvo wants to replace garbage men with trash-collecting robots
    on September 17, 2015 at 4:19 pm

    Not to worry, though, because Volvo’s venture — which stands for Robot-based Autonomous Refuse handling — seems much closer to Wall-E than Blade Runner. The joint project involves Volvo, the Chalmers ...

  • Volvo’s ROAR robots will take out the trash for you
    on September 16, 2015 at 3:47 am

    Robot-based Autonomous Refuse Handling (ROAR) is only a concept for now but Volvo seems serious about the idea. According to concept drawings, a trash truck could be equipped with several Wall-E ...

via Bing News

Starfish-killing robot close to trials on Great Barrier Reef

The Cotsbot is designed to autonomously search for crown-of-thorns starfish and destroy them – Queensland University of Technology


An autonomous starfish-killing robot is close to being ready for trials on the Great Barrier Reef, researchers say.

Crown-of-thorns starfish have been described as a significant threat to coral.

The Cotsbot robot, which has a vision system, is designed to seek out starfish and give them a lethal injection.

After it eradicates the bulk of starfish in a given area, human divers can move in and mop up the survivors.

Field trials of the robot have begun in Moreton Bay in Brisbane to refine its navigation system, Queensland University of Technology researcher Matthew Dunbabin told the BBC.

There are no crown-of-thorns starfish in Moreton Bay, but once the navigation has been refined, the robot will be unleashed on the reef.

“Later this month we begin deploying the robot in the Great Barrier Reef to evaluate our state-of-the-art vision-based crown-of-thorns starfish (COTS) detection system,” he said.

“Over the next five months we plan to progressively increase the level of autonomy the robot is allowed, leading to autonomous detection and injection of the starfish.”

Read more: Starfish-killing robot close to trials on Great Barrier Reef

Robots can recover from damage in minutes

This is one of the robots introduced in the paper ‘Robots that can adapt like animals.’ Credit: Antoine Cully

Robots will one day provide tremendous benefits to society, such as in search and rescue missions and putting out forest fires — but not until they can learn to keep working if they become damaged.

A new paper in the journal Nature, called “Robots That Can Adapt Like Animals,” shows how to make robots automatically recover from injury in less than two minutes.

A video of the work shows a six-legged robot that adapts to keep walking even if two of its legs are broken. It also shows a robotic arm that learned how to correctly place an object even with several broken motors.

Antoine Cully and Jean-Baptiste Mouret, from the Pierre and Marie Curie University in France, led the work in collaboration with Jeff Clune (University of Wyoming) and Danesh Tarapore (Pierre and Marie Curie University).

In contrast to today’s robots, animals exhibit an amazing ability to adapt to injury. There are many three-legged dogs that can catch Frisbees, for example, and if your ankle is sprained, you quickly figure out a way to walk despite the injury. The scientists took inspiration from these biological strategies.

“When injured, animals do not start learning from scratch,” senior author Jean-Baptiste Mouret said. “Instead, they have intuitions about different ways to behave. These intuitions allow them to intelligently select a few, different behaviors to try out and, after these tests, they choose one that works in spite of the injury. We made robots that can do the same.”

Before it is deployed, the robot uses a computer simulation of itself to create a detailed map of the space of high-performing behaviors. This map represents the robot’s “intuitions” about different behaviors it can perform and their predicted value. If the robot is damaged, it uses these intuitions to guide a learning algorithm that conducts experiments to rapidly discover a compensatory behavior that works despite the damage. The new algorithm is called “Intelligent Trial and Error.”

“Once damaged, the robot becomes like a scientist,” explains lead author Antoine Cully. “It has prior expectations about different behaviors that might work, and begins testing them. However, these predictions come from the simulated, undamaged robot. It has to find out which of them work, not only in reality, but given the damage.

“Each behavior it tries is like an experiment and, if one behavior doesn’t work, the robot is smart enough to rule out that entire type of behavior and try a new type,” Cully continues. “For example, if walking, mostly on its hind legs, does not work well, it will next try walking mostly on its front legs. What’s surprising is how quickly it can learn a new way to walk. It’s amazing to watch a robot go from crippled and flailing around to efficiently limping away in about two minutes.”

The same Intelligent Trial and Error algorithm allows robots to adapt to unforeseen situations, including adapting to new environments and inventing new behaviors.

Jeff Clune explains that “technically, Intelligent Trial and Error involves two steps: (1) creating the behavior-performance map, and (2) adapting to an unforeseen situation.”

The map in the first step is created with a new type of evolutionary algorithm called MAP-Elites. Evolutionary algorithms simulate Darwinian evolution by hosting “survival of the fittest” competitions in computer simulations to evolve artificially intelligent robots. The adaptation in the second step involves a “Bayesian optimization” algorithm that takes advantage of the prior knowledge provided by the map to efficiently search for a behavior that works despite the damage.
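
In outline, the two steps can be sketched as below. This is a deliberate simplification: build_behavior_map stands in for MAP-Elites, which evolves its behaviors rather than scoring a fixed list, and adapt replaces Bayesian optimization with a plain ranked search.

```python
def build_behavior_map(simulate, candidate_behaviors):
    """Step 1, offline: score candidate behaviors on the undamaged simulated robot."""
    return {behavior: simulate(behavior) for behavior in candidate_behaviors}

def adapt(behavior_map, try_on_robot, budget=10):
    """Step 2, online: test the most promising behaviors on the damaged robot."""
    ranked = sorted(behavior_map, key=behavior_map.get, reverse=True)
    best, best_score = None, float("-inf")
    for behavior in ranked[:budget]:
        score = try_on_robot(behavior)   # real-world trial, damage and all
        if score > best_score:           # keep whichever works best in reality
            best, best_score = behavior, score
    return best
```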

“We performed experiments that show that the most important component of Intelligent Trial and Error is creating and harnessing the prior knowledge contained in the map,” Clune says.

This new technique will help develop more robust, effective, autonomous robots. Danesh Tarapore provides some examples.

“It could enable the creation of robots that can help rescuers without requiring their continuous attention,” he says. “It also makes easier the creation of personal robotic assistants that can continue to be helpful even when a part is broken.”

Read more: Robots can recover from damage in minutes