SweepCam
Lensless Cameras
Lensless cameras, while extremely useful for imaging in constrained scenarios, struggle to resolve scenes with large depth variations. To address this, we propose imaging with a set of mask patterns displayed on a programmable mask, and introduce a computational focusing operator that helps resolve the depth of scene points.
Programmable Masks
Exploiting ideas from plane-sweep stereo, we regularize depth recovery using measurements made with a translating mask and processed by a computational focusing operator.
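The focusing operator can be pictured as a shift-and-add step, as in plane-sweep stereo: each measurement is shifted by the disparity a point at the hypothesized depth would undergo as the mask translates, and the shifted measurements are averaged. Below is a minimal sketch of this idea; the function name, the inverse-depth disparity model, and the use of integer shifts are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def focus_measurements(measurements, shifts, depth):
    """Synthetic-focusing sketch: undo the depth-dependent disparity induced
    by each mask translation, then average (shift-and-add, as in
    plane-sweep stereo). `shifts` holds (dx, dy) mask translations."""
    focused = np.zeros_like(measurements[0], dtype=float)
    for meas, (dx, dy) in zip(measurements, shifts):
        # Hypothetical model: disparity scales inversely with depth.
        sx = int(round(dx / depth))
        sy = int(round(dy / depth))
        focused += np.roll(np.roll(meas, sy, axis=0), sx, axis=1)
    return focused / len(measurements)
```

Content at the hypothesized depth adds coherently after the shifts, while content at other depths is blurred out, which is what makes the operator depth-selective.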
Fast Reconstruction
We show that a computationally intensive multi-image recovery procedure can be decoupled into a collection of single-image deconvolutions. This provides significant computational benefits, especially when the scene has content at a large number of depths.
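Concretely, once the measurements are focused at a given depth, the plane at that depth can be recovered with an ordinary single-image deconvolution against that depth's PSF. The sketch below uses Wiener filtering in the Fourier domain; the function names, the circular-convolution model, and the regularizer `eps` are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def wiener_deconv(focused, psf, eps=1e-2):
    """Single-image Wiener deconvolution sketch. Assumes `psf` is centered
    and the same size as the image (circular convolution model)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(focused)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

def reconstruct_planes(focused_stack, psfs, eps=1e-2):
    # The multi-image problem decouples: each depth plane is recovered
    # independently from its focused measurement and depth-specific PSF.
    return [wiener_deconv(f, p, eps) for f, p in zip(focused_stack, psfs)]
```

Because each plane is handled independently, the cost grows linearly in the number of depths rather than requiring one large joint inversion over all depths at once.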
High Quality Imaging
On a lab prototype, we demonstrate that the programmability of the mask enhances the quality of image reconstructions, especially when compared to state-of-the-art lensless imagers and their associated algorithms.
Depth-aware Lensless Imaging
As a result, the proposed imager can resolve dense scenes with large depth variations, allowing for more practical applications of lensless cameras. We also present a fast reconstruction algorithm for scenes at multiple depths that reduces reconstruction time by two orders of magnitude. Finally, we build a prototype to show that the proposed method improves both the image quality and the depth resolution of lensless cameras.
Kernels and their evolution. Top row: PSFs for three different depths. Second row: each PSF correlated with the PSF from z0 = 6.8 cm; these are the kernels underlying the blocks of the Gram matrix in Section 4.1. Third row: the deconvolution kernel for the PSF at z0 applied to PSFs at other depths; applying this kernel directly to captured measurements produces high-frequency artifacts. Last row: the deconvolution kernel for the PSF at z0 applied to PSFs of focused measurements.
Captured and focused measurements from our lab prototype for a scene with content on two planes.
Comparison of different numbers of measurements and baselines on simulated data.
Comparison of different reconstruction methods on real data. As shown in (a), the scene contains two transparencies printed with a boat pattern; white regions are printed to be transparent. The near plane is at 2.8 cm and the far plane at 18 cm. (b)-(d) show various reconstruction techniques applied to static-mask measurements.
Estimated depth for objects with known geometry. From top to bottom: a slanted plane, the corner of a box, and a cylinder. Objects are covered with patterned paper to produce dense texture. (b)-(c) show an image from the focal stack at the same depth; columns (d)-(f) show the depth estimated from the focal stack with the corresponding method.
People
Yi Hua
Shigeki Nakamura
M. Salman Asif
Aswin C. Sankaranarayanan