Mathematical image processing creates a 3D movie of any scene, using just two frames from a stationary camera or microscope
Researchers at the Harvard School of Engineering and Applied Sciences (SEAS) have developed a way for photographers and microscopists to create a 3D image through a single lens, without moving the camera.
Published in the journal Optics Letters, this improbable-sounding technology relies only on computation and mathematics—no unusual hardware or fancy lenses. The effect is the equivalent of seeing a stereo image with one eye closed.
That’s easier said than done, as principal investigator Kenneth B. Crozier, John L. Loeb Associate Professor of the Natural Sciences, explains.
“If you close one eye, depth perception becomes difficult. Your eye can focus on one thing or another, but unless you also move your head from side to side, it’s difficult to gain much sense of objects’ relative distances,” Crozier says. “If your viewpoint is fixed in one position, as a microscope would be, it’s a challenging problem.”
Offering a workaround, Crozier and graduate student Antony Orth essentially compute how the image would look if it were taken from a different angle. To do this, they rely on the clues encoded within the rays of light entering the camera.
“Arriving at each pixel, the light’s coming at a certain angle, and that contains important information,” explains Crozier. “Cameras have been developed with all kinds of new hardware—microlens arrays and absorbing masks—that can record the direction of the light, and that allows you to do some very interesting things, such as take a picture and focus it later, or change the perspective view. That’s great, but the question we asked was, can we get some of that functionality with a regular camera, without adding any extra hardware?”
The key, they found, is to infer the angle of the light at each pixel, rather than directly measuring it (which standard image sensors and film would not be able to do). The team’s solution is to take two images from the same camera position but focused at different depths. The slight differences between these two images provide enough information for a computer to mathematically create a brand-new image as if the camera had been moved to one side.
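The idea above can be sketched in a few lines of numpy. The sketch below is an illustrative reconstruction, not the authors' published code: it assumes the difference between the two focal slices approximates the axial intensity derivative, reduces the resulting relation to a Poisson problem under a constant-intensity simplification, and treats the gradient of the solution as the mean ray angle used to remap pixels into a shifted viewpoint. The function name and parameters are hypothetical.

```python
import numpy as np

def synthesize_view(i1, i2, dz, shift):
    """Hedged sketch of two-focal-plane viewpoint synthesis.

    i1, i2 -- grayscale images taken at focal depths dz apart
    shift  -- how far to virtually move the camera sideways
    """
    dIdz = (i2 - i1) / dz                  # approximate axial derivative
    I = 0.5 * (i1 + i2) + 1e-6             # mean intensity, avoid divide-by-zero
    # Simplifying assumption: intensity is roughly uniform, so the
    # moment relation reduces to a Poisson equation lap(phi) = -dI/dz / I,
    # solved here with an FFT-based Poisson solver (periodic boundaries).
    rhs = -dIdz / I
    h, w = rhs.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    denom = -(2.0 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0                      # avoid 0/0 at the DC term
    phi_hat = np.fft.fft2(rhs) / denom
    phi_hat[0, 0] = 0.0                    # mean of phi is arbitrary; set to 0
    phi = np.real(np.fft.ifft2(phi_hat))
    # Gradient of phi ~ per-pixel mean ray angle; remap pixels accordingly.
    ang_y, ang_x = np.gradient(phi)
    ys, xs = np.indices((h, w))
    src_y = np.clip((ys + shift * ang_y).astype(int), 0, h - 1)
    src_x = np.clip((xs + shift * ang_x).astype(int), 0, w - 1)
    return I[src_y, src_x]                 # the scene as if viewed off-axis
```

Rendering the synthesized view alongside the original and alternating between the two produces the wobble animation that creates the stereo impression.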
By stitching the original image and the computed off-axis view together into an animation, Crozier and Orth provide a way for amateur photographers and microscopists alike to create the impression of a stereo image without the need for expensive hardware. They are calling their computational method “light-field moment imaging”—not to be confused with “light field cameras” (like the Lytro), which achieve similar effects using high-end hardware rather than computational processing.
Importantly, the technique offers a new and very accessible way to create 3D images of translucent materials, such as biological tissues.
Biologists can use a variety of tools to create 3D optical images, including light-field microscopes, which are limited in terms of spatial resolution and are not yet commercially available; confocal microscopes, which are expensive; and a computational method called “shape from focus,” which uses a stack of images focused at different depths to identify at which layer each object is most in focus. That’s less sophisticated than Crozier and Orth’s new technique because it makes no allowance for overlapping materials, such as a nucleus that might be visible through a cell membrane, or a sheet of tissue that’s folded over on itself. Stereo microscopes may be the most flexible and affordable option right now, but they are still not as common in laboratories as traditional, monocular microscopes.
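For comparison, the "shape from focus" baseline mentioned above is straightforward to sketch: for each pixel, pick the slice of the focal stack where a local sharpness measure peaks. This is an illustrative toy version (the function name and the Laplacian-based focus measure are this sketch's choices, not a specific published implementation), and it shows why the method cannot handle overlapping translucent layers: each pixel is assigned exactly one depth.

```python
import numpy as np

def depth_from_focus(stack):
    """Toy 'shape from focus': stack has shape (z, y, x).

    Returns, per pixel, the index of the slice where a crude
    focus measure (squared discrete Laplacian) is largest.
    """
    sharp = []
    for img in stack:
        # Discrete 4-neighbor Laplacian with wraparound boundaries.
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
        sharp.append(lap ** 2)
    return np.argmax(np.stack(sharp), axis=0)  # one depth per pixel
```

Because `argmax` commits each pixel to a single layer, a nucleus seen through a cell membrane collapses to whichever layer happens to be sharper, which is the limitation the new technique avoids.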
“This method devised by Orth and Crozier is an elegant solution to extract depth information with only a minimum of information from a sample,” says Conor L. Evans, an assistant professor at Harvard Medical School and an expert in biomedical imaging, who was not involved in the research. “Depth measurements in microscopy are usually made by taking many sequential images over a range of depths; the ability to glean depth information from only two images has the potential to accelerate the acquisition of digital microscopy data.”
“As the method can be applied to any image pair, microscopists can readily add this approach to our toolkit,” Evans adds. “Moreover, as the computational method is relatively straightforward on modern computer hardware, the potential exists for real-time rendering of depth-resolved information, which will be a boon to microscopists who currently have to comb through large data sets to generate similar 3D renders. I look forward to using their method in the future.”
The new technology also suggests an alternative way to create 3D movies for the big screen.