The Houston Astros’ José Altuve steps up to the plate on a 3-2 count, studies the pitcher and the situation, gets the go-ahead from third base, tracks the ball’s release, swings… and gets a single up the middle. Just another trip to the plate for the three-time American League batting champion.
Could a robot get a hit in the same situation? Not likely.
Altuve has honed natural reflexes, years of experience, knowledge of the pitcher’s tendencies, and an understanding of the trajectories of various pitches. What he sees, hears and feels seamlessly combines with his brain and muscle memory to time the swing that produces the hit. The robot, on the other hand, needs to use a linkage system to slowly coordinate data from its sensors with its motor capabilities. And it can’t remember a thing. Strike three!
But there may be hope for the robot. A paper by University of Maryland researchers just published in the journal Science Robotics introduces a new way of combining perception and motor commands based on hyperdimensional computing theory, which could fundamentally alter and improve the basic artificial intelligence (AI) task of sensorimotor representation: how agents like robots translate what they sense into what they do.
“Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception” was written by Computer Science Ph.D. students Anton Mitrokhin and Peter Sutor, Jr.; Cornelia Fermüller, an associate research scientist with the University of Maryland Institute for Advanced Computer Studies; and Computer Science Professor Yiannis Aloimonos. Mitrokhin and Sutor are advised by Aloimonos.
Integration is one of the most important challenges facing the robotics field. A robot’s sensors and the actuators that move it are separate systems, linked together by a central learning mechanism that infers a needed action given sensor data, or vice versa.
The cumbersome three-part AI system—each part speaking its own language—is a slow way to get robots to accomplish sensorimotor tasks. The next step in robotics will be to integrate a robot’s perceptions with its motor capabilities. This fusion, known as “active perception,” would provide a more efficient and faster way for the robot to complete tasks.
In the authors’ new computing theory, a robot’s operating system would be based on hyperdimensional binary vectors (HBVs), which exist in a sparse and extremely high-dimensional space. HBVs can represent disparate discrete things—for example, a single image, a concept, a sound or an instruction; sequences made up of discrete things; and groupings of discrete things and sequences. They can account for all these types of information in a meaningfully constructed way, binding each modality together in long vectors of 1s and 0s with equal dimension. In this system, action possibilities, sensory input and other information occupy the same space, are in the same language, and are fused, creating a kind of memory for the robot.
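To make the vector operations concrete, here is a minimal Python sketch, assuming the common hyperdimensional-computing conventions of XOR binding and majority-vote bundling; the function names, the 10,000-bit dimension, and the example values are illustrative, not taken from the paper.

```python
import numpy as np

D = 10_000  # HBVs are very long; 10,000 bits is a common choice
rng = np.random.default_rng(0)

def random_hbv():
    """A random hyperdimensional binary vector: each bit is 0 or 1."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Fuse two HBVs with XOR. The result has the same length and is
    dissimilar to both inputs; since XOR is its own inverse, either
    input can be recovered from the result given the other."""
    return np.bitwise_xor(a, b)

def bundle(vectors):
    """Group HBVs by bitwise majority vote (ties broken at random).
    The result stays similar to every member, like a set memory."""
    s = np.sum(np.asarray(vectors, dtype=np.int32), axis=0)
    out = (2 * s > len(vectors)).astype(np.uint8)
    ties = 2 * s == len(vectors)
    out[ties] = rng.integers(0, 2, size=int(ties.sum()), dtype=np.uint8)
    return out

def similarity(a, b):
    """Normalized Hamming similarity: ~0.5 for unrelated vectors."""
    return 1.0 - np.count_nonzero(a != b) / D

# Disparate modalities live in one space: an image, a sound, and a
# motor command are all just D-bit vectors of equal length.
image, sound, motor = random_hbv(), random_hbv(), random_hbv()

percept = bundle([image, sound])   # fuse the sensory inputs
experience = bind(percept, motor)  # bind perception to action

# Unbinding with the percept recovers the motor command exactly,
# i.e. remembering what was sensed cues what was done.
recovered = bind(experience, percept)
print(similarity(recovered, motor))   # 1.0
print(similarity(experience, motor))  # ~0.5: fused, not stored verbatim
```

Because XOR is its own inverse, the fused vector does not store the motor command verbatim, yet the command can still be retrieved when the matching percept comes back; that is what it means for perception and action to occupy the same space.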
The Science Robotics paper marks the first time that perception and action have been integrated in this way, with both expressed in a single hyperdimensional representation.
A hyperdimensional framework can turn any sequence of “instants” into new HBVs, and group existing HBVs together, all without changing the vector length. This is a natural way to create semantically significant and informed “memories.” The encoding of more and more information in turn leads to “history” vectors and the ability to remember. Signals become vectors, indexing translates to memory, and learning happens through clustering.
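As a sketch of how a sequence of “instants” might be folded into a fixed-length “history” vector, here is one common convention, with a cyclic shift standing in for the position-tagging permutation; none of this is the authors’ code, and the helper names are invented for illustration.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def random_hbv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bundle(vectors):
    """Bitwise majority vote over HBVs, ties broken at random."""
    s = np.sum(np.asarray(vectors, dtype=np.int32), axis=0)
    out = (2 * s > len(vectors)).astype(np.uint8)
    ties = 2 * s == len(vectors)
    out[ties] = rng.integers(0, 2, size=int(ties.sum()), dtype=np.uint8)
    return out

def similarity(a, b):
    """Normalized Hamming similarity: ~0.5 for unrelated vectors."""
    return 1.0 - np.count_nonzero(a != b) / D

def permute(v, k):
    """Cyclic shift by k tags v as 'the instant at time step k'
    while keeping the vector length unchanged."""
    return np.roll(v, k)

def encode_sequence(instants):
    """Fold an ordered sequence of instants into one 'history' HBV."""
    return bundle([permute(v, t) for t, v in enumerate(instants)])

# Three instants the robot sensed, in order.
a, b, c = random_hbv(), random_hbv(), random_hbv()
history = encode_sequence([a, b, c])

# The history vector is the same length as its parts, yet it still
# 'remembers' them: a stored instant, shifted to its time step, scores
# well above the ~0.5 chance level, while an unseen vector does not.
print(similarity(history, permute(a, 0)))  # ~0.75
print(similarity(history, permute(c, 2)))  # ~0.75
print(similarity(history, random_hbv()))   # ~0.5
```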
The robot’s memories of what it has sensed and done in the past could lead it to expect future perception and influence its future actions. This active perception would enable the robot to become more autonomous and better able to complete tasks.
“An active perceiver knows why it wishes to sense, then chooses what to perceive, and determines how, when and where to achieve the perception,” says Aloimonos. “It selects and fixates on scenes, moments in time, and episodes. Then it aligns its mechanisms, sensors, and other components to act on what it wants to see, and selects viewpoints from which to best capture what it intends.”
“Our hyperdimensional framework can address each of these goals.”
Applications of the Maryland research could extend far beyond robotics. The ultimate goal is to be able to do AI itself in a fundamentally different way: from concepts to signals to language. Hyperdimensional computing could provide a faster and more efficient alternative model to the iterative neural net and deep learning AI methods currently used in computing applications such as data mining, visual recognition and translating images to text.
“Neural network-based AI methods are big and slow, because they are not able to remember,” says Mitrokhin. “Our hyperdimensional theory method can create memories, which will require a lot less computation, and should make such tasks much faster and more efficient.”
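To illustrate why memory-based recall can be computationally light, here is a hypothetical associative-memory sketch: once patterns are stored as HBVs, recognition reduces to a single nearest-neighbor pass over Hamming distances, and it tolerates heavily corrupted input. The pitch names and noise level are invented for the example, and this is not a benchmark from the paper.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def random_hbv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

class ItemMemory:
    """Minimal associative memory: store labeled HBVs and recall the
    nearest one by Hamming distance. 'Inference' is one comparison
    pass over stored items rather than an iterative computation."""
    def __init__(self):
        self.labels, self.vectors = [], []

    def store(self, label, v):
        self.labels.append(label)
        self.vectors.append(v)

    def recall(self, query):
        dists = [np.count_nonzero(query != v) for v in self.vectors]
        return self.labels[int(np.argmin(dists))]

def noisy(v, flips=2_000):
    """Flip 20% of the bits to simulate imperfect sensing."""
    out = v.copy()
    idx = rng.choice(D, size=flips, replace=False)
    out[idx] ^= 1
    return out

memory = ItemMemory()
pitches = {name: random_hbv() for name in ("fastball", "curveball", "slider")}
for name, v in pitches.items():
    memory.store(name, v)

# Even with a fifth of its bits corrupted, the stored pattern wins.
print(memory.recall(noisy(pitches["curveball"])))  # 'curveball'
```

The trade-off in this style of system is that everything must first be encoded as long binary vectors; once that is done, the recall step itself amounts to counting mismatched bits.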