Tiny robotic vessels powered by acoustic waves maneuver through cellular landscapes

A schematic showing the structure of the microrockets, which are 3D printed and contain a trapped air bubble (top left), and how they look under scanning electron microscopy (top right). The acoustic fluidic chamber in which they can be driven in three dimensions is shown on the bottom. (Image: Liqiang Ren)

A new study shows how tiny robotic vessels powered by acoustic waves and an on-board bubble motor can be maneuvered through cellular landscapes using magnets.

A new study from the lab of Thomas Mallouk shows how microscale “rockets,” powered by acoustic waves and an onboard bubble motor, can be driven through 3D landscapes of cells and particles using magnets. The research was a collaboration among researchers at Penn, the University of California San Diego, the Harbin Institute of Technology in Shenzhen, and Pennsylvania State University, where the study was initially conducted, and was published in Science Advances.

The origin story of the tiny rockets began with a fundamental scientific question: Could scientists design nano- and microscale vessels that use chemicals for fuel to travel through the human body? Fifteen years of research by Mallouk and others showed that the short answer was “yes,” but researchers faced significant barriers to using these vessels in biomedical applications because the chemicals they used for fuel, like hydrogen peroxide, were toxic.

An “accidental” discovery led Mallouk and his group to focus on the use of a completely different type of fuel: sound waves. While trying to move their rockets with acoustic levitation, a process used to lift particles off a microscope slide with high-frequency sound waves, the group was surprised to find that ultrasound made the robots move at very fast speeds. Mallouk and his team decided to investigate this phenomenon further to see if they could use high-frequency sound waves to power their tiny vessels.

The group’s latest paper details the design of the microscale rockets, resembling a round-bottomed cup 10 microns in length and 5 microns wide, or about the size of a particle of dust. The rounded cups are 3D printed using laser lithography and contain an outer layer of gold and inner layers of nickel and a polymer. Treatment with a hydrophobic chemical after the gold is cast causes an air bubble to form and become trapped inside the rocket’s cavity.

In the presence of ultrasound waves, the bubble inside the rocket is excited by high-frequency oscillation at the water-air interface, which turns the bubble into an onboard motor. The rocket can then be steered using an external magnetic field. Each individual rocket has its own resonant frequency, meaning that each member of a fleet can be driven independently from the others. The tiny rockets are also incredibly adept, able to travel up microscopic staircases and swim freely in three dimensions with the help of special fins.
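
To give a rough sense of how one rocket in a fleet can be addressed while its neighbors sit still, the trapped bubble can be pictured as a driven, damped oscillator that responds strongly only near its own resonance. The short Python sketch below is purely illustrative; the oscillator model, quality factor, and frequencies are assumptions, not values from the study.

import numpy as np

def response_amplitude(f_drive, f_res, q_factor=20.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator,
    normalized so the on-resonance response is about q_factor."""
    r = f_drive / f_res
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (r / q_factor) ** 2)

# Hypothetical fleet: three rockets whose trapped bubbles resonate at
# slightly different ultrasound frequencies (all values invented).
rockets = {"rocket_A": 330e3, "rocket_B": 370e3, "rocket_C": 420e3}  # Hz

for f_drive in (330e3, 370e3, 420e3):
    strongest = max(rockets, key=lambda name: response_amplitude(f_drive, rockets[name]))
    print(f"drive at {f_drive / 1e3:.0f} kHz -> strongest response: {strongest}")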

One of the rockets’ most distinctive features is their ability to move other particles and cells with sharp precision, even in crowded environments. The robotic vessels can either push particles in the desired direction or use a “tractor beam” approach to pull objects with an attractive force. Mallouk says the ability to push objects without disturbing the environment “wasn’t available on a larger scale,” adding that the tractor-beam approach used by larger vessels isn’t as good at precise movements. “There’s a lot of control you can do at this length scale,” he adds.

At this particular size, the rockets are large enough not to be impacted by Brownian motion, the random and erratic movements experienced by particles in the nanometer size range, but are small enough to move objects without disturbing the environment around them. “At this particular length scale, we’re right at the crossover point between when the power is enough to affect other particles,” says Mallouk.
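
The size argument can be made concrete with the Stokes-Einstein relation, D = kT/(6πηr): diffusion, and hence Brownian jitter, shrinks as particle radius grows. The back-of-the-envelope Python sketch below assumes water at room temperature and is only meant to show the scaling, not to reproduce numbers from the paper.

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0            # room temperature, K
ETA = 1.0e-3         # viscosity of water, Pa*s

def brownian_rms_displacement(radius_m, time_s=1.0):
    """RMS 1-D Brownian displacement sqrt(2*D*t), with D from Stokes-Einstein."""
    d = K_B * T / (6 * math.pi * ETA * radius_m)
    return math.sqrt(2 * d * time_s)

for label, radius in [("100 nm particle", 50e-9), ("10 um rocket-sized body", 5e-6)]:
    print(f"{label}: ~{brownian_rms_displacement(radius) * 1e6:.2f} um of random drift per second")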

By increasing or decreasing the amount of acoustic “fuel” the researchers provide the rockets, they can also control the speed of the tiny vessels. “If I want it to go slow, I can turn the power down, and if I want it to go really fast, I can turn the power up,” explains Jeff McNeill, a graduate student who works on nano- and microscale motor projects. “That’s a really useful tool.”

Mallouk and his lab are already exploring a number of possible areas of further research, including ways to actuate the rockets with light and ways to make even smaller rockets that would be faster and stronger for their size. Future collaborations with engineers and roboticists at Penn, including Dan Hammer, Marc Miskin, Vijay Kumar, James Pikul, and Kathleen Stebe, could help make the rockets “smart” by outfitting the vessels with computer chips and sensors that give them autonomy and intelligence.

As the group considers the micro-rocket’s broad medical potential, from medical imaging to nano-robotics, Mallouk says, “We’d like to have controllable robots that can do tasks inside the body: Deliver medicine, roto-rooter arteries, diagnostic snooping.”

Learn more: Microscale rockets can travel through cellular landscapes

 


Do robots need to know the reason why they are doing a job?

via University of Birmingham

Robots need to know the reason why they are doing a job if they are to effectively and safely work alongside people in the near future. In simple terms, this means machines need to understand motive the way humans do, and not just perform tasks blindly, without context.

According to a new article by the National Centre for Nuclear Robotics, based at the University of Birmingham, this could herald a profound change for the world of robotics, but one that is necessary.

Lead author Dr Valerio Ortenzi, at the University of Birmingham, argues the shift in thinking will be necessary as economies embrace automation, connectivity and digitisation (‘Industry 4.0’) and levels of human-robot interaction, whether in factories or homes, increase dramatically.

The paper, published in Nature Machine Intelligence, explores the issue of robots using objects. ‘Grasping’ is an action perfected long ago in nature but one which represents the cutting-edge of robotics research.

Most factory-based machines are ‘dumb’, blindly picking up familiar objects that appear in pre-determined places at just the right moment. Getting a machine to pick up unfamiliar objects, randomly presented, requires the seamless interaction of multiple, complex technologies. These include vision systems and advanced AI so the machine can see the target and determine its properties (for example, is it rigid or flexible?), and potentially sensors in the gripper so the robot does not inadvertently crush an object it has been told to pick up.

Even when all this is accomplished, researchers at the National Centre for Nuclear Robotics highlighted a fundamental issue: what has traditionally counted as a ‘successful’ grasp for a robot might actually be a real-world failure, because the machine does not take into account what the goal is and why it is picking an object up.

The paper cites the example of a robot in a factory picking up an object for delivery to a customer. It successfully executes the task, holding the package securely without causing damage.  Unfortunately, the robot’s gripper obscures a crucial barcode, which means the object can’t be tracked and the firm has no idea if the item has been picked up or not; the whole delivery system breaks down because the robot does not know the consequences of holding a box the wrong way.
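
The barcode failure comes down to scoring candidate grasps against the downstream task rather than grip stability alone. The toy Python sketch below illustrates that idea only; the grasp representation, penalty weight, and candidate grasps are invented and are not taken from the Nature Machine Intelligence paper.

from dataclasses import dataclass

@dataclass
class Grasp:
    name: str
    stability: float          # 0..1, how securely the object is held
    occluded_regions: set     # object regions covered by the gripper

def score(grasp, task_visible_regions, occlusion_penalty=1.0):
    """Stability minus a penalty for hiding regions the task still needs."""
    hidden = grasp.occluded_regions & task_visible_regions
    return grasp.stability - occlusion_penalty * len(hidden)

candidates = [
    Grasp("top grasp over barcode", stability=0.95, occluded_regions={"barcode"}),
    Grasp("side grasp", stability=0.85, occluded_regions=set()),
]

best = max(candidates, key=lambda g: score(g, task_visible_regions={"barcode"}))
print("chosen grasp:", best.name)   # side grasp: slightly less stable, but the barcode stays scannable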

Dr Ortenzi gives other examples, involving robots working alongside people.

“Imagine asking a robot to pass you a screwdriver in a workshop. Based on current conventions, the best way for a robot to pick up the tool is by the handle,” he said. “Unfortunately, that could mean that a hugely powerful machine then thrusts a potentially lethal blade towards you, at speed. Instead, the robot needs to know what the end goal is, i.e., to pass the screwdriver safely to its human colleague, in order to rethink its actions.

“Another scenario envisages a robot passing a glass of water to a resident in a care home. It must ensure that it doesn’t drop the glass but also that water doesn’t spill over the recipient during the act of passing, or that the glass is presented in such a way that the person can take hold of it.

“What is obvious to humans has to be programmed into a machine and this requires a profoundly different approach. The traditional metrics used by researchers, over the past twenty years, to assess robotic manipulation, are not sufficient. In the most practical sense, robots need a new philosophy to get a grip.”

Learn more: Robots Need a New Philosophy to Get a Grip

 


Robot blood enables robots to perform sophisticated, long-duration tasks

An aquatic soft robot, inspired by a lionfish and designed by James Pikul, former postdoctoral researcher in the lab of Rob Shepherd, assistant professor of mechanical and aerospace engineering.

Untethered robots suffer from a stamina problem. A possible solution: a circulating liquid – “robot blood” – to store energy and power its applications for sophisticated, long-duration tasks.

Humans and other complex organisms manage life through integrated systems. Humans store energy in fat reserves spread across the body, and an intricate circulatory system transports oxygen and nutrients to power trillions of cells.

But crack open the hood of an untethered robot and things are much more segmented: Over here is the solid battery and over there are the motors, with cooling systems and other components scattered throughout.

Cornell researchers have created a synthetic vascular system capable of pumping an energy-dense hydraulic liquid that stores energy, transmits force, operates appendages and provides structure, all in an integrated design.

“In nature we see how long organisms can operate while doing sophisticated tasks. Robots can’t perform similar feats for very long,” said Rob Shepherd, associate professor of mechanical and aerospace engineering. “Our bio-inspired approach can dramatically increase the system’s energy density while allowing soft robots to remain mobile for far longer.”

Shepherd, director of the Organic Robotics Lab, is senior author of “Electrolytic Vascular Systems for Energy Dense Robots,” which published June 19 in Nature. Doctoral student Cameron Aubin is lead author.

Engineers rely on lithium-ion batteries for their dense energy-storage potential. But solid batteries are bulky and present design constraints. Alternatively, redox flow batteries (RFB) rely on a solid anode and highly soluble catholyte to function. The dissolved components store energy until it is released in a chemical reduction and oxidation, or redox, reaction.

Soft robots are mostly fluid – up to around 90% fluid by volume – and often use hydraulic liquid. Using that fluid to store energy offers the possibility of increased energy density without added weight.
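
A rough, fully hypothetical calculation shows why this matters: if the working fluid is already a large share of the robot’s mass, storing energy in it adds capacity at essentially no added weight, whereas a separate battery pack for the same energy would. All numbers below are illustrative assumptions, not figures from the study.

# Back-of-the-envelope comparison (all values are illustrative assumptions).
robot_mass_kg = 1.5           # total robot mass
fluid_fraction = 0.6          # share of mass that is already hydraulic fluid
fluid_energy_wh_per_kg = 120  # assumed energy density of an energy-dense catholyte

# Option 1: energy stored in the fluid the robot already carries.
energy_in_fluid_wh = robot_mass_kg * fluid_fraction * fluid_energy_wh_per_kg

# Option 2: the same energy from a separate battery pack bolted on top.
battery_energy_wh_per_kg = 150
added_battery_mass_kg = energy_in_fluid_wh / battery_energy_wh_per_kg

print(f"energy carried 'for free' in the working fluid: {energy_in_fluid_wh:.0f} Wh")
print(f"extra mass a separate pack would add for the same energy: {added_battery_mass_kg:.2f} kg")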

The researchers tested the concept by creating an aquatic soft robot inspired by a lionfish, designed by co-author James Pikul, a former postdoctoral researcher now an assistant professor at the University of Pennsylvania. Lionfish use undulating fanlike fins to glide through coral-reef environments. (In one sacrifice of verisimilitude, the researchers opted not to give the robot the venomous fins of its living counterparts.)

Silicone skin on the outside and flexible electrodes and an ion separator membrane within allow the robot to bend and flex. Interconnected zinc-iodide flow cell batteries power onboard pumps and electronics through electrochemical reactions. The researchers achieved energy density equal to about half that of a Tesla Model S lithium-ion battery.

The robot swims using power transmitted to the fins from the pumping of the flow cell battery. The initial design provided enough power to swim upstream for more than 36 hours.

Current RFB technology is typically used in large, stationary applications, such as storing energy from wind and solar sources. RFB designs have historically suffered from low power density and low operating voltage. The researchers overcame those issues by wiring the fin battery cells in series and maximized power density by distributing electrodes throughout the fin areas.
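
Series wiring is the standard remedy for low cell voltage: the voltages of the stacked cells add, so power at a given current rises with the cell count. The sketch below uses assumed values (a zinc-iodide cell is taken as roughly 1.3 V nominal; the robot’s actual cell count and current draw are not given here).

# Series stack arithmetic with assumed values (not from the paper).
cell_voltage = 1.3      # V, rough nominal voltage of a zinc-iodide cell
cells_in_series = 4     # hypothetical number of fin cells wired in series
load_current = 0.2      # A, hypothetical pump/electronics draw

stack_voltage = cells_in_series * cell_voltage   # voltages add in series
power_delivered = stack_voltage * load_current   # P = V * I at the same current

print(f"stack voltage: {stack_voltage:.1f} V, power at {load_current} A: {power_delivered:.2f} W")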

“We want to take as many components in a robot and turn them into the energy system. If you have hydraulic liquids in your robot already, then you can tap into large stores of energy and give robots increased freedom to operate autonomously,” Shepherd said.

Underwater soft robots offer tantalizing possibilities for research and exploration. Since aquatic soft robots are supported by buoyancy, they don’t require an exoskeleton or endoskeleton to maintain structure. By designing power sources that give robots the ability to function for longer stretches of time, Shepherd thinks autonomous robots could soon be roaming Earth’s oceans on vital scientific missions and for delicate environmental tasks like sampling coral reefs. These devices could also be sent to extraterrestrial worlds for underwater reconnaissance missions.

Learn more: Robot circulatory system powers possibilities

 


Learning signatures of the human grasp could help robots and prosthetics get a real grip

MIT researchers have developed a low-cost, sensor-packed glove that captures pressure signals as humans interact with objects. The glove can be used to create high-resolution tactile datasets that robots can leverage to better identify, weigh, and manipulate objects.
Image: Courtesy of the researchers

Signals help neural network identify objects by touch; system could aid robotics and prosthetics design

Wearing a sensor-packed glove while handling a variety of objects, MIT researchers have compiled a massive dataset that enables an AI system to recognize objects through touch alone. The information could be leveraged to help robots identify and manipulate objects, and may aid in prosthetics design.

The researchers developed a low-cost knitted glove, called “scalable tactile glove” (STAG), equipped with about 550 tiny sensors across nearly the entire hand. Each sensor captures pressure signals as humans interact with objects in various ways. A neural network processes the signals to “learn” a dataset of pressure-signal patterns related to specific objects. Then, the system uses that dataset to classify the objects and predict their weights by feel alone, with no visual input needed.

In a paper published today in Nature, the researchers describe a dataset they compiled using STAG for 26 common objects — including a soda can, scissors, tennis ball, spoon, pen, and mug. Using the dataset, the system predicted the objects’ identities with up to 76 percent accuracy. The system can also predict the correct weights of most objects within about 60 grams.

Similar sensor-based gloves used today run thousands of dollars and often contain only around 50 sensors that capture less information. Even though STAG produces very high-resolution data, it’s made from commercially available materials totaling around $10.

The tactile sensing system could be used in combination with traditional computer vision and image-based datasets to give robots a more human-like understanding of interacting with objects.

“Humans can identify and handle objects well because we have tactile feedback. As we touch objects, we feel around and realize what they are. Robots don’t have that rich feedback,” says Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’ve always wanted robots to do what humans can do, like doing the dishes or other chores. If you want robots to do these things, they must be able to manipulate objects really well.”

The researchers also used the dataset to measure the cooperation between regions of the hand during object interactions. For example, when someone uses the middle joint of their index finger, they rarely use their thumb. But the tips of the index and middle fingers always correspond to thumb usage. “We quantifiably show, for the first time, that, if I’m using one part of my hand, how likely I am to use another part of my hand,” he says.

Prosthetics manufacturers can potentially use the information to, say, choose optimal spots for placing pressure sensors and help customize prosthetics to the tasks and objects people regularly interact with.

Joining Sundaram on the paper are: CSAIL postdocs Petr Kellnhofer and Jun-Yan Zhu; CSAIL graduate student Yunzhu Li; Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab; and Wojciech Matusik, an associate professor in electrical engineering and computer science and head of the Computational Fabrication group.

STAG is laminated with an electrically conductive polymer that changes resistance to applied pressure. The researchers sewed conductive threads through holes in the conductive polymer film, from fingertips to the base of the palm. The threads overlap in a way that turns them into pressure sensors. When someone wearing the glove feels, lifts, holds, and drops an object, the sensors record the pressure at each point.
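
The overlapping threads effectively form a row-and-column grid of pressure-dependent resistors. A common way to read such a grid is to scan it point by point and convert each resistance into a relative pressure; the Python sketch below is a simplified illustration with made-up values, not a description of the actual STAG readout electronics.

import numpy as np

def scan_resistive_grid(resistance_ohm, r_unloaded=50_000.0):
    """Convert a grid of sensor resistances into relative pressure values.

    Assumes (illustratively) that resistance drops as pressure rises, so
    pressure is taken as proportional to conductance above the unloaded baseline.
    """
    conductance = 1.0 / resistance_ohm
    baseline = 1.0 / r_unloaded
    return np.clip(conductance - baseline, 0.0, None)

# Fake 4x4 patch of the glove: one "pressed" taxel with lowered resistance.
grid = np.full((4, 4), 50_000.0)
grid[2, 1] = 5_000.0   # finger pressing here

pressure_map = scan_resistive_grid(grid)
print(np.round(pressure_map * 1e4, 2))   # scaled for readability; one hot spot at (2, 1)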

The threads connect from the glove to an external circuit that translates the pressure data into “tactile maps,” which are essentially brief videos of dots growing and shrinking across a graphic of a hand. The dots represent the location of pressure points, and their size represents the force — the bigger the dot, the greater the pressure.

From those maps, the researchers compiled a dataset of about 135,000 video frames from interactions with 26 objects. Those frames can be used by a neural network to predict the identity and weight of objects, and provide insights about the human grasp.

To identify objects, the researchers designed a convolutional neural network (CNN), which is usually used to classify images, to associate specific pressure patterns with specific objects. But the trick was choosing frames from different types of grasps to get a full picture of the object.

The idea was to mimic the way humans can hold an object in a few different ways in order to recognize it, without using their eyesight. Similarly, the researchers’ CNN chooses up to eight semirandom frames from the video that represent the most dissimilar grasps — say, holding a mug from the bottom, top, and handle.

But the CNN can’t just choose random frames from the thousands in each video, or it probably won’t choose distinct grips. Instead, it groups similar frames together, resulting in distinct clusters corresponding to unique grasps. Then, it pulls one frame from each of those clusters, ensuring it has a representative sample. Then the CNN uses the contact patterns it learned in training to predict an object classification from the chosen frames.
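
A loose reconstruction of that sampling-plus-classification step, in Python: cluster the tactile frames of one interaction, keep one frame per cluster so the chosen grasps differ as much as possible, and classify the sampled stack with a small convolutional network. The frame size, cluster count, and network below are assumptions for illustration, not the authors’ code.

import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

N_FRAMES, H, W = 200, 32, 32      # frames recorded while handling one object (sizes assumed)
N_SAMPLES, N_CLASSES = 8, 26      # up to 8 dissimilar grasps; 26 object classes

def pick_dissimilar_frames(frames, n_samples=N_SAMPLES):
    """Cluster frames of one interaction and keep one frame per cluster,
    so the sampled grasps are as different from each other as possible."""
    flat = frames.reshape(len(frames), -1)
    labels = KMeans(n_clusters=n_samples, n_init=10).fit_predict(flat)
    return np.stack([frames[labels == c][0] for c in range(n_samples)])

class TinyTactileCNN(nn.Module):
    """Small convolutional classifier over a stack of sampled tactile frames."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_SAMPLES, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, N_CLASSES)

    def forward(self, x):   # x: (batch, N_SAMPLES, H, W)
        return self.classifier(self.features(x).flatten(1))

frames = np.random.rand(N_FRAMES, H, W).astype(np.float32)   # stand-in tactile video
sampled = pick_dissimilar_frames(frames)
logits = TinyTactileCNN()(torch.from_numpy(sampled).unsqueeze(0))
print("predicted class:", logits.argmax(dim=1).item())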

“We want to maximize the variation between the frames to give the best possible input to our network,” Kellnhofer says. “All frames inside a single cluster should have a similar signature that represent the similar ways of grasping the object. Sampling from multiple clusters simulates a human interactively trying to find different grasps while exploring an object.”

For weight estimation, the researchers built a separate dataset of around 11,600 frames from tactile maps of objects being picked up by finger and thumb, held, and dropped. Notably, the CNN wasn’t trained on any frames it was tested on, meaning it couldn’t learn to just associate weight with an object. In testing, a single frame was inputted into the CNN. Essentially, the CNN picks out the pressure around the hand caused by the object’s weight, and ignores pressure caused by other factors, such as hand positioning to prevent the object from slipping. Then it calculates the weight based on the appropriate pressures.
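
In the same spirit, a drastically simplified stand-in for the weight estimator can map a single tactile frame to a weight with plain ridge regression on synthetic data; the real system uses a CNN and measured frames, so the sketch below only illustrates the single-frame idea.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Synthetic single-frame dataset: total pressure loosely tracks object weight.
weights_g = rng.uniform(50, 500, size=300)
frames = rng.random((300, 32 * 32)) * weights_g[:, None] / 500.0   # fake tactile frames

model = Ridge(alpha=1.0).fit(frames, weights_g)
test_frame = rng.random(32 * 32) * 0.5   # fake frame of a roughly 250 g object
print(f"estimated weight: {model.predict([test_frame])[0]:.0f} g")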

The system could be combined with the sensors already on robot joints that measure torque and force to help them better predict object weight. “Joints are important for predicting weight, but there are also important components of weight from fingertips and the palm that we capture,” Sundaram says.

Learn more: Sensor-packed glove learns signatures of the human grasp

 


Funding: To create an autonomous robotic trauma care system

via CMU

The University of Pittsburgh School of Medicine and Carnegie Mellon University each have been awarded four-year contracts totaling more than $7.2 million from the U.S. Department of Defense to create an autonomous trauma care system that fits in a backpack and can treat and stabilize soldiers injured in remote locations.

The goal of “TRAuma Care In a Rucksack: TRACIR” is to develop artificial intelligence (AI) technologies enabling medical interventions that extend the “golden hour” for treating combat casualties and ensure an injured person’s survival for long medical evacuations.

A multi-disciplinary team of Pitt researchers and clinicians from emergency medicine, surgery, critical care and pulmonary fields will provide a wealth of real-world trauma data and medical algorithms that CMU roboticists and computer scientists will incorporate in the creation of a “hard and soft robotic suit” into which an injured person can be placed. Monitors embedded in the suit will assess the injury, and AI algorithms will guide the appropriate critical care interventions and robotically apply stabilizing treatments, such as intravenous fluids and medications.

Ron Poropatich, M.D., retired U.S. Army colonel, director of Pitt’s Center for Military Medicine Research and professor in Pitt’s Division of Pulmonary, Allergy and Critical Care Medicine, is overall principal investigator on the $3.71 million Pitt contract, with Michael R. Pinsky, M.D., professor in Pitt’s Department of Critical Care Medicine, as its scientific principal investigator. Artur Dubrawski, Ph.D., research professor at CMU’s Robotics Institute, is principal investigator on the $3.5 million CMU contract.

“Battlefields are becoming increasingly remote, making medical evacuations more difficult,” said Poropatich. “By fusing data captured from multiple sensors and applying machine learning, we are developing more predictive cardio-pulmonary resuscitation opportunities, which hopefully will conserve an injured soldier’s strength. Our goal with TRACIR is to treat and stabilize soldiers in the battlefield, even during periods of prolonged field care, when evacuation is not possible.”

Much technology still needs to be developed to enable robots to reliably and safely perform tasks, such as inserting IV needles or placing a chest tube in the field, Dubrawski said. Initially, the research will be “a series of baby steps,” demonstrating the practicality of individual components the system will eventually require.

“Everybody has a slightly different vision of what the final system will look like,” Dubrawski added. “But we see this as being an autonomous or nearly autonomous system – a backpack containing an inflatable vest or perhaps a collapsed stretcher that you might toss toward a wounded soldier. It would then open up, inflate, position itself and begin stabilizing the patient. Whatever human assistance it might need could be provided by someone without medical training.”

With a digital library of detailed physiologic data collected from over 5,000 UPMC trauma patients, Pinsky and Dubrawski previously created algorithms that could allow a computer program to “learn” the signals that an injured patient’s health is deteriorating before damage is irreversible and tell the robotic system to administer the best treatments and therapies to save that person’s life.
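
As a loose illustration of what such a deterioration-detection algorithm can look like, the Python sketch below trains a simple classifier on short windows of vital signs; the features, model, and synthetic data are placeholders and bear no relation to the actual Pitt/UPMC algorithms.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def window_features(heart_rate, systolic_bp):
    """Summarize a short monitoring window: means, a trend, and a rough shock index."""
    return [heart_rate.mean(), systolic_bp.mean(),
            np.polyfit(np.arange(len(heart_rate)), heart_rate, 1)[0],   # heart-rate trend
            heart_rate.mean() / systolic_bp.mean()]                     # shock-index-like ratio

# Synthetic stand-in data: "stable" vs "deteriorating" windows (labels are made up).
def make_window(deteriorating):
    hr = rng.normal(120 if deteriorating else 80, 5, size=30)
    bp = rng.normal(85 if deteriorating else 120, 5, size=30)
    return window_features(hr, bp)

X = np.array([make_window(d) for d in ([0] * 200 + [1] * 200)])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba([make_window(deteriorating=True)])[0, 1]
print(f"estimated deterioration risk for a new window: {risk:.2f}")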

“Pittsburgh has the three components you need for a project like this – world-class expertise in critical care medicine, artificial intelligence and robotics,” Dubrawski said. “That’s why Pittsburgh is unique and is the one place for this project.”

While the immediate goal of the project is to carry forward the U.S. military’s principle of “leave no man behind,” and treat soldiers on the battlefield, there are numerous potential civilian applications, said Poropatich.

“TRACIR could be deployed by drone to hikers or mountain climbers injured in the wilderness; it could be used by people in submarines or boats; it could give trauma care capabilities to rural health clinics or be used by aid workers responding to natural disasters,” he said. “And, someday, it could even be used by astronauts on Mars.”

Learn more: Pitt, CMU Receive Department of Defense Contracts To Create Autonomous Robotic Trauma Care System

 
