Computers will soon automatically provide short video digests of a day in your life, a family vacation or an eight-hour police patrol, say computer scientists at The University of Texas at Austin.
The researchers are working to develop tools to help make sense of the vast quantities of video that are going to be produced by wearable camera technology such as Google Glass and Looxcie.
“The amount of what we call ‘egocentric’ video, which is video that is shot from the perspective of a person who is moving around, is about to explode,” said Kristen Grauman, associate professor of computer science in the College of Natural Sciences. “We’re going to need better methods for summarizing and sifting through this data.”
Grauman and her colleagues developed a technique that uses machine learning to automatically analyze recorded video and assemble a short “story” of the footage that captures its essence better than existing methods do.
Better video summarization should prove important in helping military commanders manage data coming in from soldiers’ cameras, investigators sift through cellphone video in the wake of disasters like the Boston Marathon bombing, and senior citizens use video summaries of their days to compensate for memory loss, said Grauman.
“There’s research showing that if people suffering from memory loss wear a camera that takes a snapshot once a minute, and then they review those images at the end of the day, it can help their recall,” said Grauman. “That’s pretty inspiring. What if instead of images that were selected just because they were a minute apart, they had a video or photographic summary that was selected because it told a good story? Maybe that would help even more. That’s the kind of thing we’re hoping to achieve.”
Grauman, her postdoctoral researcher Zheng Lu and doctoral student Yong Jae Lee presented their method, which they call “story-driven” video summarization, at the IEEE Conference on Computer Vision and Pattern Recognition this summer.
Their findings are based on video amassed by volunteers wearing commercially available Looxcie cameras, which cost about $200, record five hours of video at a stretch, connect to smartphones and clip over the ear like a large Bluetooth headset.
“The task is to take a very long video and automatically condense it into very short video clips, or a series of stills, that convey the essence of the story,” said Grauman. “To do that, though, we first have to ask: What makes a good visual story? Our answer is that beyond displaying important persons, objects and scenes, it must also convey how one thing leads to the next.”
To tackle the challenge, Grauman and her colleagues took a two-step approach. The first step involved using machine learning techniques to teach their system to “score” the significance of objects in view based on egocentric factors, such as how often an object appeared in the center of the frame (a good proxy for where the camera wearer’s gaze is) or whether it was touched by the wearer’s hands.
“If you give us a region in the video, then we will give back an importance level, based on all those properties that we have extracted and learned how to combine,” said Grauman. “So at that point you can select frames that will maximize the importance.”
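The scoring step Grauman describes can be pictured with a minimal sketch: combine per-region egocentric cues into one importance number, then keep the frames whose regions score highest. The cue names, weights and selection rule below are illustrative assumptions for exposition, not the researchers’ actual learned model.

```python
# Illustrative sketch: score regions by simple egocentric cues,
# then select the top-k frames. Cues and weights are hypothetical.

def importance(region, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of egocentric cues for one region.

    region: dict with cue values in [0, 1]:
      - "center":    how close the region sits to the frame center
                     (a proxy for the camera wearer's gaze)
      - "hand":      how often the wearer's hand touches the object
      - "frequency": how often the object appears across the video
    """
    w_center, w_hand, w_freq = weights
    return (w_center * region["center"]
            + w_hand * region["hand"]
            + w_freq * region["frequency"])

def select_frames(frames, k):
    """Return ids of the k frames whose best region scores highest."""
    scored = [(max(importance(r) for r in f["regions"]), f["id"])
              for f in frames]
    scored.sort(reverse=True)
    return [frame_id for _, frame_id in scored[:k]]
```

In a learned system the weights would be fit from labeled examples rather than set by hand; the point here is only that each cue maps to a number and the cues are combined into a single score per region.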
The next step was to trace those important frames through the video and look for early ones that influence later ones. To do that, the researchers adapted a method developed at Carnegie Mellon University that could predict how one news article leads to another, assembling a series of articles to transition from a starting point to a known end point.
For the text work, researchers used word frequencies and correlations across articles to quantify influence. For the video work, Grauman and Lu used their significant objects and frames to do the same. Then they were able to identify a chain of video clips that efficiently filled in the story from beginning to end.
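The chaining idea can be sketched with a toy heuristic: measure the “influence” between two clips by how many important objects they share, then pick intermediate clips, in chronological order, that connect the first clip to the last. The object sets, the Jaccard overlap measure and the endpoint-scoring rule below are simplifying assumptions, not the paper’s actual optimization.

```python
# Illustrative sketch: build a "story chain" of clips from start to end,
# linking clips by shared important objects. The heuristic is hypothetical.

def influence(clip_a, clip_b):
    """Overlap of important objects between two clips (Jaccard index)."""
    a, b = clip_a["objects"], clip_b["objects"]
    return len(a & b) / len(a | b)

def story_chain(clips, k):
    """Pick k intermediate clips between the first and last clip,
    preserving chronological order.

    Each candidate is scored by how strongly it connects to both
    endpoints; a real system would score links along the whole chain.
    """
    candidates = range(1, len(clips) - 1)
    scored = sorted(
        candidates,
        key=lambda j: influence(clips[0], clips[j])
                      + influence(clips[j], clips[-1]),
        reverse=True,
    )
    chosen = sorted(scored[:k])          # restore chronological order
    return [0] + chosen + [len(clips) - 1]
```

The analogy to the text method is direct: word co-occurrence between articles plays the role that shared important objects play here, and the chain is chosen so each step plausibly leads to the next.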
“We ran human ‘taste tests’ comparing our method to previous methods,” said Grauman, “and between 75 and 90 percent of people evaluating the summaries, depending on the datasets and method being compared, found that our system is superior.”
Grauman said that as video summarization techniques continue to improve, they will become invaluable aids not just to people with very specialized needs, like police investigators and those suffering from memory loss, but to everyday Web surfers as well.