Mathematical image processing creates a 3D movie of any scene using just two frames from a stationary camera or microscope
Researchers at the Harvard School of Engineering and Applied Sciences (SEAS) have developed a way for photographers and microscopists to create a 3D image through a single lens, without moving the camera.
Published in the journal Optics Letters, this improbable-sounding technology relies only on computation and mathematics—no unusual hardware or fancy lenses. The effect is the equivalent of seeing a stereo image with one eye closed.
That’s easier said than done, as principal investigator Kenneth B. Crozier, John L. Loeb Associate Professor of the Natural Sciences, explains.
“If you close one eye, depth perception becomes difficult. Your eye can focus on one thing or another, but unless you also move your head from side to side, it’s difficult to gain much sense of objects’ relative distances,” Crozier says. “If your viewpoint is fixed in one position, as a microscope would be, it’s a challenging problem.”
Offering a workaround, Crozier and graduate student Antony Orth essentially compute how the image would look if it were taken from a different angle. To do this, they rely on the clues encoded within the rays of light entering the camera.
“Arriving at each pixel, the light’s coming at a certain angle, and that contains important information,” explains Crozier. “Cameras have been developed with all kinds of new hardware—microlens arrays and absorbing masks—that can record the direction of the light, and that allows you to do some very interesting things, such as take a picture and focus it later, or change the perspective view. That’s great, but the question we asked was, can we get some of that functionality with a regular camera, without adding any extra hardware?”
The key, they found, is to infer the angle of the light at each pixel, rather than directly measuring it (which standard image sensors and film would not be able to do). The team’s solution is to take two images from the same camera position but focused at different depths. The slight differences between these two images provide enough information for a computer to mathematically create a brand-new image as if the camera had been moved to one side.
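The two-focus procedure above can be sketched in code. The following NumPy example is a loose illustration of the idea, not the authors' implementation: it takes two images focused at different depths, solves a Poisson equation for a potential whose gradient approximates the light field's first angular moments, and warps the image along those moments to mimic a sideways shift of the camera. The function name, the periodic boundary conditions of the FFT Poisson solver, and the simple nearest-neighbor warp are all assumptions of this sketch.

```python
import numpy as np

def synthesize_view(i_near, i_far, dz, shift):
    """Illustrative two-focus view synthesis (not the published algorithm).

    i_near, i_far: 2D float arrays of the same scene focused at depths
    z and z + dz. Returns an image approximating a shifted viewpoint.
    """
    i_mid = 0.5 * (i_near + i_far)
    didz = (i_far - i_near) / dz          # through-focus intensity change

    # FFT-based Poisson solve of laplacian(U) = -dI/dz
    # (periodic boundaries -- an assumption of this sketch).
    ny, nx = didz.shape
    ky = np.fft.fftfreq(ny) * 2 * np.pi
    kx = np.fft.fftfreq(nx) * 2 * np.pi
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    k2[0, 0] = 1.0                        # avoid division by zero at DC
    u_hat = np.fft.fft2(-didz) / (-k2)    # laplacian -> multiply by -|k|^2
    u_hat[0, 0] = 0.0
    u = np.real(np.fft.ifft2(u_hat))

    # First angular moments of the light field: M = grad(U) / I.
    gy, gx = np.gradient(u)
    eps = 1e-6 * i_mid.max()              # guard against division by zero
    mx = gx / (i_mid + eps)
    my = gy / (i_mid + eps)

    # Shift each pixel along its moment vector to mimic a new viewpoint.
    yy, xx = np.indices(i_mid.shape).astype(float)
    src_y = np.clip(yy - shift * my, 0, ny - 1).astype(int)
    src_x = np.clip(xx - shift * mx, 0, nx - 1).astype(int)
    return i_mid[src_y, src_x]
```

Rendering this function for a range of `shift` values and playing the results in sequence gives the side-to-side "wiggle" animation the article describes.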
By stitching these two images together into an animation, Crozier and Orth provide a way for amateur photographers and microscopists alike to create the impression of a stereo image without the need for expensive hardware. They are calling their computational method “light-field moment imaging”—not to be confused with “light field cameras” (like the Lytro), which achieve similar effects using high-end hardware rather than computational processing.
Importantly, the technique offers a new and very accessible way to create 3D images of translucent materials, such as biological tissues.
Biologists can use a variety of tools to create 3D optical images, including light-field microscopes, which are limited in terms of spatial resolution and are not yet commercially available; confocal microscopes, which are expensive; and a computational method called “shape from focus,” which uses a stack of images focused at different depths to identify at which layer each object is most in focus. That’s less sophisticated than Crozier and Orth’s new technique because it makes no allowance for overlapping materials, such as a nucleus that might be visible through a cell membrane, or a sheet of tissue that’s folded over on itself. Stereo microscopes may be the most flexible and affordable option right now, but they are still not as common in laboratories as traditional, monocular microscopes.
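The "shape from focus" approach mentioned above can be sketched briefly: for each pixel, scan the focal stack and pick the layer where a local sharpness measure peaks. This minimal NumPy version (the squared-Laplacian focus measure and box-blur window sizes are assumptions for illustration) shows why the method yields only one depth per pixel and so cannot represent overlapping translucent layers.

```python
import numpy as np

def shape_from_focus(stack):
    """Depth-index map from a focal stack (illustrative sketch).

    stack: 3D array (n_depths, H, W), images focused at increasing depths.
    Returns, for each pixel, the index of the layer where a simple focus
    measure (locally averaged squared Laplacian) is largest.
    """
    def laplacian(img):
        # 5-point discrete Laplacian with edge padding.
        p = np.pad(img, 1, mode="edge")
        return (p[:-2, 1:-1] + p[2:, 1:-1]
                + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img)

    def box_blur(img, r=2):
        # Small local average so the focus measure is less noisy.
        k = 2 * r + 1
        p = np.pad(img, r, mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    measures = np.stack([box_blur(laplacian(s) ** 2) for s in stack])
    return np.argmax(measures, axis=0)   # one depth index per pixel
```

Because `argmax` commits each pixel to a single layer, anything visible through a translucent structure above it is simply lost, which is exactly the limitation Crozier and Orth's method avoids.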
“This method devised by Orth and Crozier is an elegant solution to extract depth information with only a minimum of information from a sample,” says Conor L. Evans, an assistant professor at Harvard Medical School and an expert in biomedical imaging, who was not involved in the research. “Depth measurements in microscopy are usually made by taking many sequential images over a range of depths; the ability to glean depth information from only two images has the potential to accelerate the acquisition of digital microscopy data.”
“As the method can be applied to any image pair, microscopists can readily add this approach to our toolkit,” Evans adds. “Moreover, as the computational method is relatively straightforward on modern computer hardware, the potential exists for real-time rendering of depth-resolved information, which will be a boon to microscopists who currently have to comb through large data sets to generate similar 3D renders. I look forward to using their method in the future.”
The new technology also suggests an alternative way to create 3D movies for the big screen.