Computers will someday soon automatically provide short video digests of a day in your life, your family vacation or an eight-hour police patrol, say computer scientists at The University of Texas at Austin.
The researchers are working to develop tools to help make sense of the vast quantities of video that are going to be produced by wearable camera technology such as Google Glass and Looxcie.
“The amount of what we call ‘egocentric’ video, which is video that is shot from the perspective of a person who is moving around, is about to explode,” said Kristen Grauman, associate professor of computer science in the College of Natural Sciences. “We’re going to need better methods for summarizing and sifting through this data.”
Grauman and her colleagues developed a technique that uses machine learning to automatically analyze recorded video and assemble a short “story” of the footage that, in their evaluations, outperforms existing summarization methods.
Better video summarization should prove important in helping military commanders manage data coming in from soldiers’ cameras, investigators sift through cellphone video in the wake of disasters like the Boston Marathon bombing, and senior citizens use video summaries of their days to compensate for memory loss, said Grauman.
“There’s research showing that if people suffering from memory loss wear a camera that takes a snapshot once a minute, and then they review those images at the end of the day, it can help their recall,” said Grauman. “That’s pretty inspiring. What if instead of images that were selected just because they were a minute apart, they had a video or photographic summary that was selected because it told a good story? Maybe that would help even more. That’s the kind of thing we’re hoping to achieve.”
Grauman, her postdoc Lu Zheng and doctoral student Yong Jae Lee presented their method, which they call “story-driven” video summarization, at the IEEE Conference on Computer Vision and Pattern Recognition this summer.
Their findings are based on video amassed by volunteers wearing commercially available Looxcie cameras, which cost about $200, record five hours of video at a stretch, connect to smartphones and fit in the ear like a large Bluetooth headset.
“The task is to take a very long video and automatically condense it into very short video clips, or a series of stills, that convey the essence of the story,” said Grauman. “To do that, though, we first have to ask: What makes a good visual story? Our answer is that beyond displaying important persons, objects and scenes, it must also convey how one thing leads to the next.”
To tackle the challenge, Grauman and her colleagues took a two-step approach. The first step involved using machine learning techniques to teach their system to “score” the significance of objects in view based on egocentric factors such as how often the objects appeared in the center of the frame, which is a good proxy for where the camera wearer’s gaze is, or whether they are touched by the wearer’s hands.
“If you give us a region in the video, then we will give back an importance level, based on all those properties that we have extracted and learned how to combine,” said Grauman. “So at that point you can select frames that will maximize the importance.”
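As a rough sketch of this first step (not the authors' actual model), the idea of combining egocentric cues into a single importance score can be expressed as a learned weighted sum. The feature names and weights below are purely illustrative stand-ins for quantities the real system would extract and fit from labeled examples:

```python
# Hypothetical sketch: combining egocentric cues into one importance score.
# Feature names and weights are illustrative, not from the published system.

def importance_score(region, weights):
    """Score a candidate region using simple egocentric cues."""
    features = [
        region["center_frequency"],  # fraction of frames where the region is near frame center
        region["hand_contact"],      # fraction of frames where the wearer's hand touches it
        region["size_ratio"],        # average fraction of the frame the region occupies
    ]
    # A learned model would fit these weights from labeled training data;
    # here we simply take a weighted sum.
    return sum(w * f for w, f in zip(weights, features))

weights = [0.5, 0.3, 0.2]  # stand-in for learned weights
region = {"center_frequency": 0.8, "hand_contact": 0.6, "size_ratio": 0.1}
score = importance_score(region, weights)  # 0.5*0.8 + 0.3*0.6 + 0.2*0.1 = 0.6
```

Frames whose regions score highest would then be the candidates for the summary, as Grauman describes.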
The next step was to trace those important frames through the video and look for early ones that influence later ones. To do that, the researchers adapted a method developed at Carnegie Mellon University for predicting how one news article leads to another, assembling a series of articles that transition from a starting point to a known end point.
For the text work, researchers used word frequencies and correlations across articles to quantify influence. For the video work, Grauman and Lu used their significant objects and frames to do the same. Then they were able to identify a chain of video clips that efficiently filled in the story from beginning to end.
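The chain-selection idea described above can be sketched as a small dynamic program: given pairwise "influence" scores between temporally ordered subshots, pick a fixed-length chain from the first subshot to the last that maximizes the summed influence between consecutive picks. This is a simplified illustration under assumed inputs; the published system derives influence from shared important objects, which this sketch replaces with an arbitrary matrix:

```python
# Illustrative dynamic program for story-chain selection.
# influence[i][j] is assumed given: how strongly subshot i "leads into"
# a later subshot j (a stand-in for object-based influence in the paper).

def best_chain(influence, n, k):
    """Return the highest-scoring chain of k subshots that starts at
    subshot 0 and ends at subshot n-1, in temporal order."""
    NEG = float("-inf")
    # best[m][j]: best score of a length-m chain ending at subshot j
    best = [[NEG] * n for _ in range(k + 1)]
    back = [[None] * n for _ in range(k + 1)]
    best[1][0] = 0.0  # every chain must begin at the first subshot
    for m in range(2, k + 1):
        for j in range(1, n):
            for i in range(j):  # only earlier subshots can influence j
                if best[m - 1][i] > NEG:
                    cand = best[m - 1][i] + influence[i][j]
                    if cand > best[m][j]:
                        best[m][j] = cand
                        back[m][j] = i
    # Recover the chain ending at the last subshot
    chain, j = [], n - 1
    for m in range(k, 0, -1):
        chain.append(j)
        j = back[m][j]
    return list(reversed(chain))

influence = [
    [0, 3, 1, 0],
    [0, 0, 2, 1],
    [0, 0, 0, 4],
    [0, 0, 0, 0],
]
chain = best_chain(influence, n=4, k=3)  # picks [0, 2, 3]: 1 + 4 beats 3 + 1
```

The greedy alternative (always taking the strongest next link) would pick subshot 1 here for its score of 3 and end up with a weaker overall story, which is why a global optimization over the whole chain is the natural fit for this problem.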
“We ran human ‘taste tests’ comparing our method to previous methods,” said Grauman, “and between 75 and 90 percent of people evaluating the summaries, depending on the datasets and method being compared, found that our system is superior.”
Grauman said that as video summarization techniques continue to improve, they will become invaluable aids not just to people with very specialized needs, like police investigators and those suffering from memory loss, but to everyday Web surfers as well.