Researchers from the School of Interactive Computing and the Institute for Robotics and Intelligent Machines developed a new method that teaches computers to “see” and understand what humans do in a typical day.
The technique gathered more than 40,000 pictures, taken every 30 to 60 seconds over a six-month period by a wearable camera, and predicted with 83 percent accuracy which activity the wearer was performing. Researchers taught the computer to categorize the images into 19 activity classes. The test subject wearing the camera could review and annotate the photos at the end of each day (deleting any as needed for privacy) to ensure they were correctly categorized.
“It was surprising how the method’s ability to correctly classify images could be generalized to another person after just two more days of annotation,” said Steven Hickson, a Ph.D. candidate in Computer Science and a lead researcher on the project.
“This work is about developing a better way to understand people’s activities, and build systems that can recognize people’s activities at a finely-grained level of detail,” said Edison Thomaz, co-author and graduate research assistant in the School of Interactive Computing. “Activity tracking devices like the Fitbit can tell how many steps you take per day, but imagine being able to track all of your activities – not just physical activities like walking and running. This work is moving toward full activity intelligence. At a technical level, we are showing that it’s becoming possible for computer vision techniques alone to be used for this.”
The group believes they have gathered the largest annotated dataset of first-person images to demonstrate that deep-learning can understand human behavior and the habits of a specific person.
Daniel Castro, a Ph.D. candidate in Computer Science and a lead researcher on the project, helped present the method earlier this month at UbiComp 2015 in Osaka, Japan. He says the reaction from conference-goers was positive.
“People liked that we had a method that combines time and images,” Castro says. “Time (of activity) can be especially important for some activity classes. This system learned how relevant images were because of people’s schedules. What does it think the image is showing? It sees both time and image probabilities and makes a better prediction.”
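The combination Castro describes, weighting an image classifier's output by how likely each activity is at a given time of day, can be sketched as a simple late-fusion step. The sketch below is illustrative only and is not the authors' implementation; the `image_probs` and `time_prior` arrays stand in for a real classifier's softmax output and a prior estimated from the wearer's annotated schedule.

```python
import numpy as np

NUM_CLASSES = 19  # activity classes, matching the study
rng = np.random.default_rng(0)

# Stand-in for per-class probabilities from an image classifier (e.g. CNN softmax).
image_probs = rng.dirichlet(np.ones(NUM_CLASSES))

# Stand-in for a time-of-day prior: how often each activity occurs in this hour,
# as estimated from the wearer's annotated history.
time_prior = rng.dirichlet(np.ones(NUM_CLASSES))

def fuse(image_probs: np.ndarray, time_prior: np.ndarray) -> np.ndarray:
    """Late fusion: weight image probabilities by the temporal prior, renormalize."""
    joint = image_probs * time_prior
    return joint / joint.sum()

fused = fuse(image_probs, time_prior)
predicted_class = int(np.argmax(fused))
```

An image that is ambiguous on its own (say, a desk that could mean "working" or "eating") gets nudged toward whichever activity the wearer's schedule makes more plausible at that hour, which is the behavior the quote describes.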
The ability to literally see and recognize human activities has implications in a number of areas – from developing improved personal assistant applications like Siri to helping researchers explain links between health and behavior, Thomaz says.
Castro and Hickson believe that within the next decade we will have ubiquitous devices that can improve our personal choices throughout the day.
“Imagine if a device could learn what I would be doing next – ideally predict it – and recommend an alternative?” Castro says. “Once it builds your own schedule by knowing what you are doing, it might tell you there is a traffic delay and you should leave sooner or take a different route.”