How does the image-recognition technology in a self-driving car respond to a blurred shape suddenly appearing on the road? Researchers from KU Leuven have shown that machines can learn to respond to unfamiliar objects like human beings would.
Imagine heading home in your self-driving car. The rain is falling in torrents and visibility is poor. All of a sudden, a blurred shape appears on the road. What would you want the car to do? Should it hit the brakes, at the risk of causing the cars behind you to crash? Or should it just keep driving?
Human beings in a similar situation can usually tell the difference between, say, a distracted cyclist who suddenly swerves and roadside waste swept up by the wind. Our response is mostly based on intuition. We may not be sure what the blurred shape actually is, but we know that it looks like a human being rather than a paper bag.
But what about the self-driving car? Can a machine trained to recognize images tell us what the unfamiliar shape looks like? According to KU Leuven researchers Jonas Kubilius and Hans Op de Beeck, it can.
“Current state-of-the-art image-recognition technologies are taught to recognize a fixed set of objects,” Jonas Kubilius explains. “They recognize images using deep neural networks: complex algorithms that perform computations somewhat similarly to the neurons in the human brain.”
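The neuron analogy can be made concrete with a single artificial "neuron": a weighted sum of its inputs passed through a nonlinearity. This is a minimal illustrative sketch with made-up inputs and weights, not the researchers' actual model:

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, passed through a ReLU
    # nonlinearity - loosely analogous to a biological neuron
    # that only "fires" above a threshold.
    return max(0.0, float(np.dot(inputs, weights) + bias))

# Hypothetical input features and learned weights.
x = np.array([0.2, 0.9, 0.4])
w = np.array([0.5, -0.3, 0.8])
out = neuron(x, w, bias=0.1)
```

A deep network stacks many layers of such units, with the weights learned from labeled images rather than set by hand.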
“We found that deep neural networks are not only good at making objective decisions (‘this is a car’), but also develop human-level sensitivities to object shape (‘this looks like …’). In other words, machines can learn to tell us what a new shape – say, a letter from a novel alphabet or a blurred object on the road – reminds them of. This means we’re on the right track in developing machines with a visual system and vocabulary as flexible and versatile as ours.”
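The distinction between a hard decision ("this is a car") and a graded similarity judgment ("this looks like …") can be sketched with a softmax over a network's class scores. The classes and scores below are hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical raw scores ("logits") a trained network might produce
# for an ambiguous shape glimpsed through heavy rain.
classes = ["pedestrian", "cyclist", "paper bag", "traffic cone"]
logits = np.array([2.1, 1.8, 0.3, -0.5])

probs = softmax(logits)

# A hard, objective decision keeps only the single best class.
decision = classes[int(np.argmax(probs))]

# The full distribution expresses graded similarity:
# the shape looks more like a pedestrian than a paper bag,
# even when no class is certain.
for name, p in zip(classes, probs):
    print(f"{name}: {p:.2f}")
```

Reading out the whole distribution, rather than just the top class, is what lets a system say what an unfamiliar shape *reminds* it of.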
Does that mean we may soon be able to safely hand over the wheel? “Not quite. We’re not there just yet. And even if machines are at some point equipped with a visual system as powerful as ours, self-driving cars would still make occasional mistakes – although, unlike human drivers, they wouldn’t be distracted because they’re tired or busy texting. However, even in those rare instances when a self-driving car did err, its decisions would be at least as reasonable as ours.”