Compressive Light Field Photography
Publication: Compressive Light Field Photography Using Overcomplete Dictionaries and Optimized Projections, Technical Paper, ACM SIGGRAPH 2013
Kshitij Marwah (MIT Media Lab), Gordon Wetzstein, Yosuke Bando, and Ramesh Raskar
Light field reconstruction from a single coded projection. We explore sparse reconstructions of 4D light fields from optimized 2D projections, using light field atoms as the fundamental building blocks of natural light fields. This example shows a coded sensor image captured with our camera prototype (upper left) and the recovered 4D light field (lower left and center). Parallax is successfully recovered (center insets) and allows for post-capture refocus (right). Even complex lighting effects, such as occlusion, specularity, and refraction, can be recovered, as exhibited by the background, dragon, and tiger, respectively.
Light field photography has attracted significant research interest over the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to acquire a high-resolution light field. We propose a compressive light field camera architecture that recovers higher-resolution light fields from a single image than was previously possible. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that captures optimized 2D light field projections, and robust sparse reconstruction methods that recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding techniques, including 4D light field compression and denoising.
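The forward model behind the coded 2D projection can be sketched in a few lines: a coded mask attenuates each angular sample of the 4D light field before the sensor integrates over the angular dimensions. The patch size, angular resolution, and random mask values below are illustrative stand-ins; the paper optimizes the mask jointly with the dictionary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4D light field patch: (spatial y, spatial x, angular v, angular u).
# Sizes are illustrative, not the prototype's actual resolution.
py, px, nv, nu = 8, 8, 5, 5
light_field = rng.random((py, px, nv, nu))

# Coded attenuation mask: each angular sample reaching a sensor pixel is
# weighted differently (random here; optimized in the actual system).
mask = rng.random((py, px, nv, nu))

# Single coded 2D projection: the sensor integrates the attenuated
# light field over both angular dimensions.
coded_image = (mask * light_field).sum(axis=(2, 3))
print(coded_image.shape)  # → (8, 8)
```

The key point is that a single 2D image now carries a coded mixture of all angular views, which sparse reconstruction can later disentangle.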
Kshitij Marwah (MIT Media Lab), Gordon Wetzstein, Ashok Veeraraghavan, and Ramesh Raskar
Light field cameras, e.g. Lytro and Raytrix, have ushered in a new direction in photography, allowing consumers to synthesize photographs with novel viewpoints or varying focus after the actual recording. Unfortunately, current light field camera designs impose a fixed tradeoff between spatial and angular resolution — spatial resolution is sacrificed to capture angular light variation on the sensor. We introduce a principled computational framework and a new camera design to acquire and reconstruct light fields at full spatial and angular resolution from a single exposure.
Our framework introduces a high-dimensional sparse basis for light fields learned from millions of light field patches. The same optimization procedure also allows for the synthesis of optimal mask patterns that are mounted at a slight offset in front of the sensor and optically attenuate the light field before it is recorded. Finally, a weighted compressive sensing-style reconstruction is performed to recover the light field. We demonstrate that our compressive approach to light field photography outperforms state-of-the-art techniques.
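The reconstruction step described above can be sketched as standard sparse coding: with dictionary D and projection matrix Φ, the sensor measurement is y = ΦDα for sparse coefficients α, which can be recovered by an l1 solver such as ISTA. The sketch below uses random stand-ins for the learned dictionary and optimized mask, and plain (unweighted) ISTA rather than the paper's weighted reconstruction; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# n light-field samples per patch, m sensor measurements (m < n),
# k dictionary atoms (overcomplete: k > n). Sizes are illustrative.
n, m, k = 64, 32, 128

D = rng.standard_normal((n, k))                  # stand-in for learned light field atoms
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the optimized mask projection
A = Phi @ D

# Synthesize a patch that is 3-sparse in the dictionary.
alpha_true = np.zeros(k)
alpha_true[rng.choice(k, size=3, replace=False)] = [1.5, -2.0, 1.0]
y = A @ alpha_true                               # vectorized coded 2D projection

# ISTA: iterative soft-thresholding for min 0.5*||y - A a||^2 + lam*||a||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
alpha = np.zeros(k)
for _ in range(1000):
    grad = A.T @ (A @ alpha - y)                 # gradient of the data term
    alpha = alpha - step * grad
    alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - step * lam, 0.0)

patch = D @ alpha                                # reconstructed light field patch
rel_err = np.linalg.norm(patch - D @ alpha_true) / np.linalg.norm(D @ alpha_true)
print(rel_err)
```

In the actual system, each reconstructed patch would be reassembled into the full 4D light field; overlapping patches are typically averaged to suppress blocking artifacts.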
November 2012, Opportunity Notes by Rafe Needleman, MIT Light Field Filter
November 2012, Demo at MIT Technology Review Emerging Technologies
September 2012, Compressive Light Field Photography, Nokia Research Center, Bangalore, India
March 2012, Computational Photography and the 41 Megapixel Camera, Nokia Research Center, Palo Alto, USA