Researchers from the School of Interactive Computing and the Institute for Robotics and Intelligent Machines developed a new method that teaches computers to “see” and understand what humans do in a typical day.
The technique gathered more than 40,000 pictures, taken every 30 to 60 seconds over a six-month period by a wearable camera, and predicted with 83 percent accuracy what activity the wearer was doing. Researchers taught the computer to categorize images across 19 activity classes. The test subject wearing the camera could review and annotate the photos at the end of each day (deleting any for privacy) to ensure they were correctly categorized.
“It was surprising how the method’s ability to correctly classify images could be generalized to another person after just two more days of annotation,” said Steven Hickson, a Ph.D. candidate in Computer Science and a lead researcher on the project.
“This work is about developing a better way to understand people’s activities, and build systems that can recognize people’s activities at a finely-grained level of detail,” said Edison Thomaz, co-author and graduate research assistant in the School of Interactive Computing. “Activity tracking devices like the Fitbit can tell how many steps you take per day, but imagine being able to track all of your activities – not just physical activities like walking and running. This work is moving toward full activity intelligence. At a technical level, we are showing that it’s becoming possible for computer vision techniques alone to be used for this.”
The group believes it has gathered the largest annotated dataset of first-person images, demonstrating that deep learning can understand human behavior and the habits of a specific person.
Daniel Castro, a Ph.D. candidate in Computer Science and a lead researcher on the project, helped present the method earlier this month at UbiComp 2015 in Osaka, Japan. He says reaction from conference-goers was positive.
“People liked that we had a method that combines time and images,” Castro says. “Time (of activity) can be especially important for some activity classes. This system learned how relevant images were because of people’s schedules. What does it think the image is showing? It sees both time and image probabilities and makes a better prediction.”
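The idea Castro describes — weighing what the image looks like against what the wearer usually does at that time of day — can be sketched as a simple late fusion of two probability distributions. This is a minimal illustration, not the authors' implementation; the activity names and all probability values below are invented for the example.

```python
# Late fusion of an image classifier's per-class probabilities with a
# time-of-day prior over activity classes (illustrative sketch only).

ACTIVITIES = ["cooking", "driving", "working"]

def fuse(image_probs, time_probs):
    """Combine the image model's class probabilities with a time-of-day
    prior by elementwise product, then renormalize to sum to 1."""
    joint = [p_img * p_time for p_img, p_time in zip(image_probs, time_probs)]
    total = sum(joint)
    return [p / total for p in joint]

# The image model alone is torn between "cooking" and "working"...
image_probs = [0.45, 0.10, 0.45]
# ...but at this hour the wearer's schedule strongly suggests "working".
time_probs = [0.10, 0.20, 0.70]

fused = fuse(image_probs, time_probs)
best = ACTIVITIES[max(range(len(fused)), key=lambda i: fused[i])]
```

Here the time prior breaks the tie that the image evidence alone could not, which is the behavior Castro attributes to the system: image and time probabilities together yield a better prediction than either source alone.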
The ability to literally see and recognize human activities has implications in a number of areas – from developing improved personal assistant applications like Siri to helping researchers explain links between health and behavior, Thomaz says.
Castro and Hickson believe that within the next decade we will have ubiquitous devices that can improve our personal choices throughout the day.
“Imagine if a device could learn what I would be doing next – ideally predict it – and recommend an alternative?” Castro says. “Once it builds your own schedule by knowing what you are doing, it might tell you there is a traffic delay and you should leave sooner or take a different route.”