
Cornell ECE researchers win Best Paper Award for flexible light field sensor architecture to help develop cameras of the future

Tuesday, June 10, 2014

Researchers at Cornell ECE are changing the idea of a traditional camera in order to capture more information about the world around us.

A team including members of Associate Professor Al Molnar’s group and researchers from the MIT Media Lab has developed a flexible light field camera architecture at the convergence of optics, sensor electronics, and applied mathematics. By co-designing a sensor built from tailored Angle Sensitive Pixels together with advanced reconstruction algorithms, the team has shown that, unlike today’s light field cameras, their system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast linear processing, or a high-resolution light field using sparsity-constrained optimization.
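The switchable idea can be sketched in a few lines of numpy. This is a hypothetical, heavily simplified 1D analogue, not the paper's actual sensor model: the sizes, the random measurement matrix `Phi` (standing in for the pixels' angular responses), and the variable names are all illustrative assumptions. It shows only the core point that one measurement vector supports more than one reconstruction path.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D analogue: a light field with S spatial and A angular samples,
# flattened into a vector x of length S*A.  (Hypothetical sizes.)
S, A = 16, 4
x = rng.random(S * A)

# Each pixel measurement is a fixed linear combination of light field
# rays: y = Phi @ x.  Here Phi is a random invertible stand-in for the
# sensor's true angular response functions.
Phi = rng.standard_normal((S * A, S * A))
y = Phi @ x

# The same measurements y can be processed in different ways:
# (1) recover the full light field with fast linear processing,
lf = np.linalg.solve(Phi, y)

# (2) collapse the angular dimension to get a conventional 2D image.
image_2d = lf.reshape(S, A).sum(axis=1)
```

In the actual system the measurement matrix is fixed by the pixel optics and is not square and invertible, which is why recovering a *high-resolution* light field requires the sparsity-constrained optimization described below rather than a direct linear solve.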

The team’s findings were published in the paper “A Switchable Light Field Camera Architecture with Angle Sensitive Pixels and Dictionary-based Sparse Coding,” which received the Best Paper Award at the 2014 IEEE International Conference on Computational Photography (ICCP), held May 2-4 on the Intel campus in Santa Clara, California.

"Angle Sensitive Pixels are a novel class of image sensors that capture angular information about the 4D light field, a useful representation of light used in computer graphics and computational photography, and that is not captured by conventional CMOS image sensors," said Suren Jayasuriya, a Cornell ECE Ph.D. student who co-authored the paper.

Light field information is used in a variety of algorithms, including illumination rendering and visualization, computational refocusing, and 3D volumetric reconstruction and depth mapping. Light field sensors have traditionally had to sacrifice spatial resolution in order to capture this angular information.

The paper uses sparsity-constrained optimization to regain this lost spatial resolution, yielding higher-resolution 4D light field data while still maintaining high resolution when taking a traditional 2D image.
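Dictionary-based sparse coding of this kind rests on a standard idea: a light field patch can be expressed as a combination of a few "atoms" from a learned, overcomplete dictionary, and a greedy solver can find that combination. The sketch below uses Orthogonal Matching Pursuit, one common solver for such problems; the random dictionary, the signal sizes, and the sparsity level are all illustrative assumptions, and the paper itself learns its dictionary from training light fields and uses its own optimization pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overcomplete dictionary D: each column is one "atom".  (Hypothetical
# sizes; in practice D would be learned from training light fields.)
n_features, n_atoms = 64, 128
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

# Synthesize a measurement y that is exactly 3-sparse in D.
alpha_true = np.zeros(n_atoms)
support_true = rng.choice(n_atoms, size=3, replace=False)
alpha_true[support_true] = rng.standard_normal(3)
y = D @ alpha_true

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit coefficients on all selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

alpha_hat = omp(D, y, k=3)
```

With a well-conditioned dictionary and a sufficiently sparse signal, the recovered coefficients reproduce the measurement almost exactly, which is the mechanism that lets sparse coding recover detail beyond what a direct linear inverse provides.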

The team includes Matthew Hirsch, MIT Media Lab; Sriram Sivaramakrishnan, Ph.D. student, Cornell School of Electrical and Computer Engineering (ECE); Suren Jayasuriya, Ph.D. student, Cornell ECE; Albert Wang '12, Ph.D., Cornell ECE; Alyosha Molnar, Associate Professor, Cornell ECE; Ramesh Raskar, Associate Professor, MIT Media Lab; and Gordon Wetzstein, Research Scientist, MIT Media Lab.

"Computational photography is at the intersection of optics, sensor electronics, applied mathematics, and high performance computing," said Jayasuriya. "Angle Sensitive Pixels with dictionary-based sparse coding (such as that utilized in this paper) combines the design of all the aspects of computational photography to achieve an unprecedented amount of flexibility in computational light field imaging."

Their research could influence how advanced optimization algorithms are coupled with novel sensor technology such as Angle Sensitive Pixels (ASPs) to capture 4D light fields at high resolution, evolving the traditional camera to rely on computation and algorithms alongside new image sensors and helping to develop the cameras of tomorrow.

