In The Jetsons, the helper robot of the future handles all manner of chores. Rosie can do the laundry, pick up the groceries, and keep Elroy out of trouble.
In today’s reality, we have all kinds of artificial intelligences (AIs) at work for us—scouring the web for information, diagnosing car trouble, even performing surgeries. But none of these specialized machines could perform all those tasks, or any real variety of tasks. In fact, even one of our everyday errands would pose a challenge for a traditional robot. Rosie would be stymied by a stray shopping cart in her path, and she wouldn’t know what to do if Jane’s favorite brand of margarine were out of stock.
But Massimiliano “Max” Versace, a College of Arts & Sciences research assistant professor and director of BU’s Neuromorphics Laboratory, aims to fix that. His team is building the brain of a versatile, general-purpose robot—maybe not a humanoid, wisecracking helper, but, let’s say, a really smart dog. And with a grant from NASA, that pup may soon be prowling other planets.
Versace (GRS’07) is working on the cutting edge of a convergence of neuroscience, computer processing, and other disciplines that promises to yield a better robot, one with a “brain” modeled after that of a mammal. He believes conventional robots are hamstrung by their basic architecture, which has changed little since the 1960s. By necessity, even a powerful supercomputer’s processing unit is located apart from its memory stores. The tiny delay as data travels between them is not noticeable because a typical AI today is devoted only to a single task or a narrow set of tasks.
But those delays would quickly multiply if a robot were asked to step outside that narrow field—adding car parts on an assembly line or answering questions on Jeopardy!—and into an unpredictable situation, such as exploring the ocean floor or caring for an elderly person. To prepare a robot for every possibility in that broader role, its programmers would have to add so many lines of code that the machine would need as much power as the entire Charles River Campus consumes.
The brain of an ordinary rat, on the other hand, runs on the energy equivalent of a Christmas-tree bulb, Versace and colleague Ben Chandler (GRS’14) write in an article in IEEE Spectrum, a publication of the Institute of Electrical and Electronics Engineers. Yet the rodent can successfully explore unfamiliar tunnels, avoid traps, and follow a food aroma coming from an unexpected source—all things that might befuddle a robot.
It’s that ability to learn and adapt that Versace is working to replicate in an artificial brain. To do it, he’s made use of a breakthrough electrical component designed by Hewlett-Packard, called a memristor. Versace and his team have assembled networks of these microscopic devices to mimic the brain’s neurons and synapses, saving a massive amount of energy while allowing the storage and processing of information to occur simultaneously, as they do in our mammalian heads.
In the lab’s first series of experiments, in 2011, the BU team built a rodent-size brain and let it loose in a virtual tub of water. With training, rather than explicit programming, the “animat” eventually figured out on its own how to find dry ground.
Once Versace and colleagues demonstrated that success, the National Aeronautics and Space Administration came calling, and tapped the Neuromorphics Lab for two high-altitude projects.
In the first, the researchers have been charged with designing a Mars explorer that will operate autonomously, navigating and collecting information using passive rather than active sensors.
“An active sensor is, for instance, a laser range finder, which shoots laser beams to estimate the distance from the robot to a wall or object, or even to estimate object size,” explains Versace. “Biology does this task with a passive sensor, the eye, which absorbs energy—light—from the environment rather than emitting it. An active sensor means spending more money and having more weight to carry—sensor plus battery. This is just one example in a trend that sees traditional robots burning tons of energy to do tasks that in biology take a few calories.”
Last month, after repeated tweaks, the lab’s virtual rover, outfitted with biological-eye-like passive sensors, successfully learned the spatial layout of, and identified science targets within, a highly realistic virtual Martian surface. Versace and colleagues are now testing the system in a real-life metal-and-plastic robot in a physical “Mars yard” they built in the Neuromorphics Lab.
The lab’s second NASA project also marshals mammal-style sight, but for a use closer to home. By fall 2015, the Federal Aviation Administration will fully open U.S. airspace to unmanned aerial vehicles (UAVs)—with the common-sense provision that the machines must be at least as adept as human pilots at sensing and avoiding oncoming objects.