Laser-based weed control can eliminate herbicides

via Phys.org

A robot automatically identifies weeds in a field and combats them with a short laser pulse. Sustainable agriculture, which avoids the use of herbicides as far as possible, could benefit from this smart idea. Dr. Julio Pastrana and Tim Wigbels from the Institute of Geodesy and Geoinformation at the University of Bonn are convinced of this. With an EXIST Business Start-up Grant from the Federal Ministry for Economic Affairs and Energy, the scientists are now driving forward the development of this practical tool for field work.

Those who want a rich harvest need to drive back weeds so that the crops can grow better. In organic agriculture, herbicides are ruled out as they are considered toxic chemicals, and unwanted plants must be laboriously weeded out. If the expectations of Dr. Julio Pastrana and Tim Wigbels are correct, this time-consuming work can soon be taken care of by robots.

The computer scientists in the Photogrammetry Laboratory at the Institute of Geodesy and Geoinformation at the University of Bonn are currently developing a novel system: using cameras mounted on an all-terrain robot vehicle or on a tractor add-on, unwanted wild plants are automatically identified among the various crops and combated in a targeted way. “The robot shoots the leaves of the unwanted plants with short laser pulses, which weakens their vitality,” reports Dr. Pastrana. “We therefore expect that we will no longer need to use herbicides on our fields and that the environment will be protected,” adds Wigbels.
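
The article does not describe Escarda's software, but the detect-then-target loop it outlines can be illustrated with a short, hypothetical Python sketch: a classifier labels each detected plant, and only confident weed detections are converted into laser aim points. All names here (Detection, plan_laser_shots, camera_to_field) are illustrative assumptions, not the start-up's actual code.

```python
# Minimal sketch of a detect-then-target loop for laser weed control.
# All names (Detection, plan_laser_shots, camera_to_field) are hypothetical
# placeholders, not Escarda's actual software.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    x_px: int          # pixel coordinates of the plant's leaf centroid
    y_px: int
    is_weed: bool      # True if the classifier labels the plant a weed
    confidence: float  # classifier confidence in [0, 1]

def plan_laser_shots(
    detections: List[Detection],
    camera_to_field: Callable[[int, int], Tuple[float, float]],
    min_confidence: float = 0.9,
) -> List[Tuple[float, float]]:
    """Convert confident weed detections into field coordinates for the laser.

    Crop plants and low-confidence detections are skipped, so the laser only
    fires when the classifier is sure it is looking at a weed.
    """
    targets = []
    for det in detections:
        if det.is_weed and det.confidence >= min_confidence:
            targets.append(camera_to_field(det.x_px, det.y_px))
    return targets
```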

Before forming the start-up, Dr. Pastrana worked in robotics and researched automated image interpretation techniques with Prof. Cyrill Stachniss from the Institute of Geodesy and Geoinformation at the University of Bonn. Dr. Pastrana completed his doctorate on the detection and classification of weeds with the aid of statistical models at Leibniz Universität Hannover and built an earlier version of the robot there with a colleague. Wigbels studied Computer Engineering at RWTH Aachen University and then worked in software development within a company.

The researchers are now pushing forward their start-up “Escarda Technologies” for one year at the University of Bonn with an EXIST grant from the Federal Ministry for Economic Affairs and Energy. “It is now a case of finding investors and further developing the business plan for the start-up,” says Wigbels. The researchers are also using the funding from the Ministry to buy the parts needed to construct a prototype.

Diverse support from the University of Bonn

Prof. Stachniss is supporting the start-up in various ways: Pastrana and Wigbels can use laboratories at the institute and consult with colleagues there. In addition, Rüdiger Wolf from Technology Transfer at the University of Bonn helped the start-up submit its application for EXIST funding. “The advice was very helpful,” says a delighted Dr. Pastrana. Both scientists would also like to take part in the start-up round tables organized by Technology Transfer in order to benefit from the experience of other founders. The EXIST grant also enables them to attend training programs that prepare them for the challenges of self-employment.

“The idea combines innovative robotics with a topical sustainability issue,” says transfer advisor Rüdiger Wolf. He considers the analyses of the market and the competition for such an application to be sound. Pastrana is convinced of the benefits of the laser-based technique for new agricultural machinery: “Our aim is to contribute to more sustainable agriculture.” At the Bonn Idea Exchange organized by the Bonn/Rhein-Sieg Chamber of Commerce and Industry, the two founders won the award for the best start-up idea.

Learn more: Combating weeds with lasers

The Latest on: Laser-based weed control

via Google News and Bing News

Robots get a sense of touch

A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, removing it from and inserting it back into a slot, even when the gripper screens the screwdriver from the robot’s camera.
Photo: Robot Locomotion Group at MIT

GelSight technology lets robots gauge objects’ hardness and manipulate small tools

Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.

In one paper, Adelson’s group uses the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches — a crucial ability if household robots are to handle everyday objects.

In the other, Russ Tedrake’s Robot Locomotion Group at CSAIL uses GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is, in some ways, a low-tech solution to a difficult problem. It consists of a block of transparent rubber — the “gel” of its name — one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object’s shape.

The metallic paint makes the object’s surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights and a single camera.

“[The system] has colored lights at different angles, and then it has this reflective material, and by looking at the colors, the computer … can figure out the 3-D shape of what that thing is,” explains Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences.
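
The description above corresponds to a photometric-stereo calculation: with three known light directions and a roughly Lambertian painted surface, the per-pixel surface normal can be recovered from the three color channels. The sketch below assumes illustrative light directions; GelSight's calibrated values and reconstruction code are not given in the article.

```python
# A minimal photometric-stereo sketch in the spirit of GelSight: three
# differently colored lights illuminate the painted gel from known directions,
# and per-pixel surface normals are recovered from the three color channels.
# The light directions below are illustrative assumptions, not the sensor's
# calibrated values.
import numpy as np

# One unit light direction per color channel (R, G, B), shape (3, 3).
LIGHT_DIRS = np.array([
    [ 0.8,  0.0, 0.6],   # red light, shining in from the +x side
    [-0.4,  0.7, 0.6],   # green light
    [-0.4, -0.7, 0.6],   # blue light
])
LIGHT_DIRS /= np.linalg.norm(LIGHT_DIRS, axis=1, keepdims=True)

def normals_from_rgb(image: np.ndarray) -> np.ndarray:
    """Recover per-pixel surface normals from an (H, W, 3) float image.

    Assumes a Lambertian surface: intensity in each channel is proportional
    to the dot product of that channel's light direction with the normal, so
    the scaled normal is the least-squares solution of a 3x3 system per pixel.
    """
    h, w, _ = image.shape
    rgb = image.reshape(-1, 3).T                          # (3, H*W)
    g = np.linalg.lstsq(LIGHT_DIRS, rgb, rcond=None)[0]   # albedo * normal
    norms = np.linalg.norm(g, axis=0, keepdims=True)
    normals = (g / np.clip(norms, 1e-8, None)).T          # unit normals
    return normals.reshape(h, w, 3)
```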

In both sets of experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer, but with flat gripping surfaces rather than pointed tips.

Contact points

For an autonomous robot, gauging objects’ softness or hardness is essential to deciding not only where and how hard to grasp them but how they will behave when moved, stacked, or laid on different surfaces. Tactile sensing could also aid robots in distinguishing objects that look similar.

In previous work, robots have attempted to assess objects’ hardness by laying them on a flat surface and gently poking them to see how much they give. But this is not the chief way in which humans gauge hardness. Rather, our judgments seem to be based on the degree to which the contact area between the object and our fingers changes as we press on it. Softer objects tend to flatten more, increasing the contact area.

The MIT researchers adopted the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson’s group, used confectionery molds to create 400 groups of silicone objects, with 16 objects per group. In each group, the objects had the same shapes but different degrees of hardness, which Yuan measured using a standard industrial scale.

Then she pressed a GelSight sensor against each object manually and recorded how the contact pattern changed over time, essentially producing a short movie for each object. To both standardize the data format and keep the size of the data manageable, she extracted five frames from each movie, evenly spaced in time, which described the deformation of the object that was pressed.

Finally, she fed the data to a neural network, which automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy. Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them according to hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.
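
A minimal sketch of the data-preparation step described above, assuming the press sequence is a (T, H, W, C) video array: five evenly spaced frames are kept per press, and a placeholder HardnessModel stands in for the neural network that maps contact patterns to hardness scores.

```python
# Sketch of the data-preparation step described in the text: keep five evenly
# spaced frames from each press sequence and hand them to a regressor that
# maps contact patterns to a hardness score. HardnessModel is a placeholder,
# not the network from the paper.
import numpy as np

def sample_press_frames(frames: np.ndarray, n_keep: int = 5) -> np.ndarray:
    """Select n_keep frames evenly spaced in time from a (T, H, W, C) sequence."""
    t = frames.shape[0]
    idx = np.linspace(0, t - 1, n_keep).round().astype(int)
    return frames[idx]

class HardnessModel:
    """Stand-in for the learned regressor; returns a dummy score."""
    def predict(self, press_frames: np.ndarray) -> float:
        # A real model would map the deformation pattern to a hardness value;
        # here the mean intensity serves as a placeholder output.
        return float(press_frames.mean())

# Usage sketch: hardness = HardnessModel().predict(sample_press_frames(video))
```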

Yuan is joined on the paper by her two thesis advisors, Adelson and Mandayam Srinivasan, a senior research scientist in the Department of Mechanical Engineering; Chenzhuo Zhu, an undergraduate from Tsinghua University who visited Adelson’s group last summer; and Andrew Owens, who did his PhD in electrical engineering and computer science at MIT and is now a postdoc at the University of California at Berkeley.

Obstructed views

The paper from the Robot Locomotion Group was born of the group’s experience with the Defense Advanced Research Projects Agency’s Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

Typically, an autonomous robot will use some kind of computer vision system to guide its manipulation of objects in its environment. Such systems can provide very reliable information about an object’s location — until the robot picks the object up. Especially if the object is small, much of it will be occluded by the robot’s gripper, making location estimation much harder. Thus, at exactly the point at which the robot needs to know the object’s location precisely, its estimate becomes unreliable. This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill.

“You can see in our video for the DRC that we spend two or three minutes turning on the drill,” says Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper. “It would be so much nicer if we had a live-updating, accurate estimate of where that drill was and where our hands were relative to it.”

That’s why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors — Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake’s group — designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand.

In general, the challenge with such an approach is reconciling the data produced by a vision system with data produced by a tactile sensor. But GelSight is itself camera-based, so its data output is much easier to integrate with visual data than the data from other tactile sensors.

In Izatt’s experiments, a robot with a GelSight-equipped gripper had to grasp a small screwdriver, remove it from a holster, and return it. Of course, the data from the GelSight sensor don’t describe the whole screwdriver, just a small patch of it. But Izatt found that, as long as the vision system’s estimate of the screwdriver’s initial position was accurate to within a few centimeters, his algorithms could deduce which part of the screwdriver the GelSight sensor was touching and thus determine the screwdriver’s position in the robot’s hand.
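
A hedged sketch of that vision-to-touch handoff, with hypothetical estimator interfaces standing in for the paper's actual algorithms: the camera's estimate is trusted before the grasp and used only as a prior for tactile refinement once the gripper closes and occludes the tool.

```python
# Hedged sketch of the vision-to-touch handoff described above. Both estimator
# interfaces are hypothetical stand-ins, not the Robot Locomotion Group's code.
from typing import Protocol, Tuple

Pose = Tuple[float, float, float]  # x, y, z of the tool (orientation omitted)

class VisionEstimator(Protocol):
    def estimate(self) -> Pose: ...

class TactileEstimator(Protocol):
    def refine(self, prior: Pose) -> Pose: ...

def tool_pose(gripper_closed: bool,
              vision: VisionEstimator,
              tactile: TactileEstimator) -> Pose:
    """Return the best available pose estimate for the grasped tool.

    Before the grasp, trust the camera; after the grasp, when the gripper
    occludes the tool, use the vision estimate only as a prior that the
    tactile matcher refines, mirroring the handoff described in the text.
    """
    prior = vision.estimate()
    if not gripper_closed:
        return prior
    return tactile.refine(prior)
```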

“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” says Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “For humans, our sense of touch is one of the key enabling factors for our amazing manual dexterity. Current robots lack this type of dexterity and are limited in their ability to react to surface features when manipulating objects. If you imagine fumbling for a light switch in the dark, extracting an object from your pocket, or any of the other numerous things that you can do without even thinking — these all rely on touch sensing.”

“Software is finally catching up with the capabilities of our sensors,” Levine adds. “Machine learning algorithms inspired by innovations in deep learning and computer vision can process the rich sensory data from sensors such as the GelSight to deduce object properties. In the future, we will see these kinds of learning methods incorporated into end-to-end trained manipulation skills, which will make our robots more dexterous and capable, and maybe help us understand something about our own sense of touch and motor control.”

Learn more: Giving robots a sense of touch

The Latest on: Robot touch

via Google News and Bing News

A robot that can pick up and move unfamiliar, real-world objects with a 99 percent success rate

via UC Berkeley

Grabbing the awkwardly shaped items that people pick up in their day-to-day lives is a slippery task for robots. Irregularly shaped items such as shoes, spray bottles, open boxes, even rubber duckies are easy for people to grab and pick up, but robots struggle with knowing where to apply a grip. In a significant step toward overcoming this problem, roboticists at UC Berkeley have built a robot that can pick up and move unfamiliar, real-world objects with a 99 percent success rate.

Berkeley professor Ken Goldberg, postdoctoral researcher Jeff Mahler and the Laboratory for Automation Science and Engineering (AUTOLAB) created the robot, called DexNet 2.0. DexNet 2.0’s high grasping success rate means that this technology could soon be applied in industry, with the potential to revolutionize manufacturing and the supply chain.

DexNet 2.0 gained its highly accurate dexterity through a process called deep learning. The researchers built a vast database of three-dimensional shapes — 6.7 million data points in total — that a neural network uses to learn grasps that will pick up and move objects with irregular shapes. The neural network was then connected to a 3D sensor and a robotic arm. When an object is placed in front of DexNet 2.0, it quickly studies the shape and selects a grasp that will successfully pick up and move the object 99 percent of the time. DexNet 2.0 is also three times faster than its previous version.
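
The grasp-selection idea can be sketched as scoring candidate grasps with a learned quality function and executing the best one above a confidence threshold. The Grasp fields and the quality_fn callable below are illustrative assumptions; DexNet 2.0's actual data formats and network are described in the paper.

```python
# Illustrative sketch of the grasp-selection idea: score candidate grasps with
# a learned quality function and execute the highest-scoring one. The Grasp
# fields and quality_fn are hypothetical; DexNet 2.0's pipeline differs.
from dataclasses import dataclass
from typing import Callable, List, Optional
import numpy as np

@dataclass
class Grasp:
    center: np.ndarray   # (x, y, z) grasp center in the camera frame
    axis: np.ndarray     # unit vector along the gripper jaws
    depth: float         # approach depth along the camera axis

def select_grasp(
    depth_image: np.ndarray,
    candidates: List[Grasp],
    quality_fn: Callable[[np.ndarray, Grasp], float],
    min_quality: float = 0.5,
) -> Optional[Grasp]:
    """Return the candidate with the highest predicted success probability,
    or None if no candidate clears the confidence threshold."""
    if not candidates:
        return None
    scores = [quality_fn(depth_image, g) for g in candidates]
    best = int(np.argmax(scores))
    return candidates[best] if scores[best] >= min_quality else None
```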

DexNet 2.0 was featured as the cover story of the latest issue of MIT Technology Review, which called DexNet 2.0 “the most nimble-fingered robot yet.” The complete paper will be published in July.

Read more at the Industrial Engineering and Operations Research department’s website.

Learn more: Meet the most nimble-fingered robot ever built

The Latest on: Robot dexterity

via Google News and Bing News

Robots fly in formation while autonomous blimps recognize hand gestures and detect faces

Georgia Tech’s Research Horizons

Georgia Institute of Technology researchers have created a team of free-flying robots that obeys the two rules of the air: don’t collide or undercut each other. They’ve also built autonomous blimps that recognize hand gestures and detect faces.

Both projects will be presented at the 2017 IEEE International Conference on Robotics and Automation (ICRA) May 29 – June 3 in Singapore.

In the first, five swarm quadcopters zip back and forth in formation, then change their behaviors based on user commands. The trick is to maneuver without smacking into each other or flying underneath another machine. If a robot cuts into the airstream of a higher flying quadcopter, the lower machine must quickly recover from the turbulent air or risk falling out of the sky.

“Ground robots have had built-in safety ‘bubbles’ around them for a long time to avoid crashing,” said Magnus Egerstedt, the Georgia Tech School of Electrical and Computer Engineering professor who oversees the project. “Our quadcopters must also include a cylindrical ‘do not touch’ area to avoid messing up the airflow for each other. They’re basically wearing virtual top hats.”

As long as the Georgia Tech machines avoid flying in the two-foot space below their neighbor, they can swarm freely without a problem. That typically means they dart around each other rather than going low.

Ph.D. student Li Wang figured out the size of the “top hat” one afternoon by hovering one copter in the air and sending others back and forth underneath it. Any closer than 0.6 meters (about five times the rotor-to-rotor diameter) and the machines were blasted to the ground. He then created algorithms to allow them to change formation midflight.

“We figured out the smallest amount of modifications a quadcopter must make to its planned path to achieve the new formation,” said Wang. “Mathematically, that’s what a programmer wants — the smallest deviations from an original flight plan.”
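
The cylindrical keep-out zone can be expressed as a simple geometric check, sketched below under the assumption of a 0.3 m lateral radius (only the 0.6 m downwash height comes from the text); the minimal-deviation replanning itself is not shown.

```python
# Minimal sketch of the cylindrical "top hat" constraint: a candidate waypoint
# is rejected if it sits inside the downwash cylinder extending 0.6 m below
# any neighboring quadcopter. The radius is an assumption for illustration.
import numpy as np

DOWNWASH_HEIGHT = 0.6   # meters below a neighbor that must stay clear (from the text)
DOWNWASH_RADIUS = 0.3   # assumed lateral radius of the keep-out cylinder

def violates_top_hat(candidate: np.ndarray, neighbors: np.ndarray) -> bool:
    """Check a candidate position (x, y, z) against neighbor positions (N, 3).

    Returns True if the candidate lies inside the cylinder hanging below any
    neighbor, i.e. within DOWNWASH_RADIUS horizontally and less than
    DOWNWASH_HEIGHT below it.
    """
    for n in np.atleast_2d(neighbors):
        horizontal = np.linalg.norm(candidate[:2] - n[:2])
        below = n[2] - candidate[2]          # positive if candidate is lower
        if horizontal < DOWNWASH_RADIUS and 0.0 <= below < DOWNWASH_HEIGHT:
            return True
    return False
```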

The project is part of Egerstedt and Wang’s overall research, which focuses on easily controlling and interacting with large teams of robots.

“Our skies will become more congested with autonomous machines, whether they’re used for deliveries, agriculture or search and rescue,” said Egerstedt, who directs Georgia Tech’s Institute for Robotics and Intelligent Machines. “It’s not possible for one person to control dozens or hundreds of robots at a time. That’s why we need machines to figure it out themselves.”

The researchers overseeing the second project, the blimps, 3D-printed a gondola frame that carries sensors and a mini camera. It attaches to either an 18- or 36-inch diameter balloon. The smaller blimp can carry a five-gram payload; the larger one supports 20 grams.

The autonomous blimps detect faces and hands, allowing people to direct the flyers with movements. All the while, the machine gathers information about its human operator, identifying everything from hesitant glares to eager smiles. The goal is to better understand how people interact with flying robots.
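
As a rough illustration of the face-following behavior, the sketch below uses OpenCV's stock Haar-cascade face detector to steer toward the largest detected face; the Georgia Tech blimps' actual detector, gesture recognition, and control interface are not described in the article.

```python
# Hedged sketch of a face-following behavior: detect a face in the gondola
# camera frame and compute a yaw command toward it. OpenCV's stock Haar
# cascade is used for illustration only.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def yaw_command(frame_bgr, gain: float = 0.5) -> float:
    """Return a yaw rate in [-gain, gain] steering toward the largest face.

    Positive means turn right; 0.0 if no face is found.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face box
    face_center_x = x + w / 2.0
    frame_center_x = frame_bgr.shape[1] / 2.0
    # Normalized horizontal offset in [-1, 1] drives a proportional yaw command.
    offset = (face_center_x - frame_center_x) / frame_center_x
    return float(gain * offset)
```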

“Roboticists and psychologists have learned many things about how humans relate to robots on the ground, but we haven’t created techniques to study how we react to flying machines,” said Fumin Zhang, the Georgia Tech associate professor leading the blimp project. “Flying a regular drone close to people presents a host of issues. But people are much more likely to approach and interact with a slow-moving blimp that looks like a toy.”

The blimps’ circular shape makes them harder to steer with manual controllers, but allows them to turn and quickly change direction. This is unlike the more popular zeppelin-shaped blimps commonly used by other researchers.

Zhang has filed a request with Guinness World Records for the smallest autonomous blimp. He sees a future where blimps can play a role in people’s lives, but only if roboticists can determine what people want and how they’ll react to a flying companion.

“Imagine a blimp greeting you at the front of the hardware store, ready to offer assistance,” Zhang said. “People are good at reading people’s faces and sensing if they need help or not. Robots could do the same. And if you needed help, the blimp could ask, then lead you to the correct aisle, flying above the crowds and out of the way.”

Learn more: Virtual Top Hats Allow Swarming Robots to Fly in Tight Formation

The Latest on: Flying robots

via Google News and Bing News

Stretchable bionic skin could give robots the ability to feel their environment

via Irish News

Discovery could lead to electronics printed on real human skin

Engineering researchers at the University of Minnesota have developed a revolutionary process for 3D printing stretchable electronic sensory devices that could give robots the ability to feel their environment. The discovery is also a major step forward in printing electronics on real human skin.

The research will be published in the next issue of Advanced Materials and is currently online.

“This stretchable electronic fabric we developed has many practical uses,” said Michael McAlpine, a University of Minnesota mechanical engineering associate professor and lead researcher on the study. “Putting this type of ‘bionic skin’ on surgical robots would give surgeons the ability to actually feel during minimally invasive surgeries, which would make surgery easier instead of just using cameras like they do now. These sensors could also make it easier for other robots to walk and interact with their environment.”

McAlpine, who gained international acclaim in 2013 for integrating electronics and novel 3D-printed nanomaterials to create a “bionic ear,” says this new discovery could also be used to print electronics on real human skin. This ultimate wearable technology could eventually be used for health monitoring or by soldiers in the field to detect dangerous chemicals or explosives.

“While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” McAlpine said. “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

McAlpine and his team made the unique sensing fabric with a one-of-a-kind 3D printer they built in the lab. The multifunctional printer has four nozzles to print the various specialized “inks” that make up the layers of the device—a base layer of silicone, top and bottom electrodes made of a conducting ink, a coil-shaped pressure sensor, and a sacrificial layer that holds the top layer in place while it sets. The supporting sacrificial layer is later washed away in the final manufacturing process.

Surprisingly, all of the layers of “inks” used in the flexible sensors can set at room temperature. Conventional 3D printing using liquid plastic is too hot and too rigid to use on the skin. These flexible 3D printed sensors can stretch up to three times their original size.

“This is a completely new way to approach 3D printing of electronics,” McAlpine said. “We have a multifunctional printer that can print several layers to make these flexible sensory devices. This could take us into so many directions from health monitoring to energy harvesting to chemical sensing.”

Researchers say the best part of the discovery is that the manufacturing is built into the process.

“With most research, you discover something and then it needs to be scaled up. Sometimes it could be years before it is ready for use,” McAlpine said. “This time, the manufacturing is built right into the process, so it is ready to go now.”

The researchers say the next step is to move toward semiconductor inks and printing on a real body.

“The possibilities for the future are endless,” McAlpine said.

Learn more: 3D-printed ‘bionic skin’ could give robots the sense of touch

The Latest on: Bionic skin

via Google News and Bing News