Mission

We will develop computational photo-scatterography (CPS) to enable a wearable vascular monitoring device that can measure blood dynamics, structure, and composition non-invasively, continuously, and safely. The impact of this new capability on both the biological sciences and human healthcare could be revolutionary. The approach we propose tags and measures photons based on properties such as spectrum, time of travel, light field, phase, and coherence.

The team members are leading experts in using one or more of these dimensions of light for imaging through scattering media. Click on a view graph below to learn more about how each property of light aids photo-scatterography. Related past projects demonstrating the team's technical expertise are also provided.


Temporal Dynamics

Time of Travel


Phase


Spectrum


Light-field


Confocality & Focal-stack

 

Temporal Dynamics

Temporal dynamics of the fluids in the vasculature can provide information such as blood perfusion, pulse rate, and heart-rate variability. The team has expertise in both imaging and estimating temporal dynamics:

 
 

Skin Perfusion Photography

This project develops a new computational imaging technique to measure the speed of blood flow in skin tissue (known as perfusion). We produce speed maps where different colors indicate different speeds of flow.

 

PulseCam

Blood perfusion is the flow of oxygen-rich blood to the end organs and tissue through the blood vessels of the body. It is vital for delivering oxygen to the cells and maintaining metabolic homeostasis. Measuring peripheral perfusion (or microcirculation) is needed for monitoring critical care patients in the ICU and for diagnosing peripheral arterial disease. Significant research has demonstrated that regional blood flow is not well captured by currently measured macro-circulatory parameters, such as blood pressure and heart rate. Critical care doctors therefore need micro-circulatory parameters to quantify regional blood flow in different body parts. But existing point modalities for measuring regional blood flow, such as the laser Doppler flowmeter and the perfusion index (measured using a pulse oximeter), are not effective because they cannot capture spatial variations in blood flow, which have diagnostic value. What is needed is an imaging device that can continuously measure maps of regional blood flow.

 

CameraVital

Measuring and monitoring any patient's vital signs is essential for their care – in fact, all care begins by collecting vital signs like heart rate and blood pressure. The current standard of care is based on monitoring devices that require contact: electrocardiograms, pulse oximeters, blood pressure cuffs, and chest straps. However, contact-based methods have serious limitations for monitoring the vital signs of neonates, who have extremely sensitive skin. Most contact-based techniques cause skin abrasion, peeling, and damage every time the leads or patches are removed, creating potential sites of infection and increasing the mortality risk to neonates. We propose to use an ordinary camera to measure a patient's vital signs by simply recording video of their face in a non-contact manner. From the recorded video, our algorithm, distancePPG, extracts pulse rate (PR), pulse rate variability (PRV), and breathing rate (BR). The algorithm estimates the tiny changes in skin color caused by changes in blood volume underneath the skin surface (these changes are invisible to the naked eye but can be captured by a camera).
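As a rough illustration of the idea behind distancePPG (not the actual algorithm, which also weights skin regions by signal quality), the dominant frequency of the spatially averaged green channel already reveals the pulse rate in a synthetic video. The function name and the synthetic data below are illustrative only:

```python
import numpy as np

def estimate_pulse_rate(frames, fps):
    """Estimate pulse rate (beats/min) from a face video.

    frames: array of shape (T, H, W, 3), RGB. We track the mean green-channel
    intensity over time, since blood-volume changes modulate it most strongly.
    """
    green = frames[:, :, :, 1].mean(axis=(1, 2))   # spatial average per frame
    green = green - green.mean()                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    # Restrict to the physiological band (0.7-4 Hz, i.e. 42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# Synthetic check: a 1.2 Hz (72 bpm) blood-volume signal buried in a video.
fps, T = 30, 300
t = np.arange(T) / fps
frames = np.full((T, 8, 8, 3), 128.0)
frames[:, :, :, 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(round(estimate_pulse_rate(frames, fps)))  # 72
```

In practice the pulse signal is orders of magnitude weaker than this toy example, which is why distancePPG combines many skin regions with quality weighting.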

 

Phase

Due to the wave nature of light, it is more completely described by both intensity and phase. Exploiting the coherence of light, the team has demonstrated imaging deep inside scattering media using transmission matrices.

 
 

Coherent Inverse Scattering via Transmission Matrices

A transmission matrix describes the input-output relationship of a complex wavefront as it passes through, or reflects off, a multiple-scattering medium. The double phase retrieval method is a recently proposed technique for learning a medium's transmission matrix that avoids difficult-to-capture interferometric measurements. Unfortunately, to perform high-resolution imaging, existing double phase retrieval methods require (1) a large number of measurements and (2) an unreasonable amount of computation. The team focused on the latter problem and reduced computation times in two distinct ways. First, we developed a new phase retrieval algorithm that is significantly faster than existing methods, especially when used with an amplitude-only spatial light modulator (SLM). Second, we calibrated the system using a phase-only SLM rather than the amplitude-only SLM used in previous double phase retrieval experiments.
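The underlying measurement model can be sketched as follows: each row of the transmission matrix is recovered from intensity-only responses to known probe patterns. This is a minimal alternating-projections sketch with a spectral initialization, not the team's faster algorithm; all variable names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 128                      # row length, number of probe patterns
P = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))  # known probes
r_true = rng.normal(size=n) + 1j * rng.normal(size=n)       # hidden TM row
y = np.abs(P @ r_true)              # intensity-only measurements (amplitudes)

# Spectral initialization: leading eigenvector of the weighted covariance.
S = (P.conj().T * (y ** 2)) @ P / m
_, v = np.linalg.eigh(S)            # eigh returns ascending eigenvalues
r = v[:, -1]

# Alternating projections: impose measured magnitudes, then probe consistency.
P_pinv = np.linalg.pinv(P)
for _ in range(200):
    z = y * np.exp(1j * np.angle(P @ r))  # keep phase, restore magnitudes
    r = P_pinv @ z                        # least-squares fit to the probes

# Recovery is only defined up to a global phase; check correlation magnitude.
corr = abs(np.vdot(r, r_true)) / (np.linalg.norm(r) * np.linalg.norm(r_true))
print(f"correlation with true row: {corr:.3f}")
```

With 8x oversampling and noiseless data this converges reliably; the practical challenge addressed above is doing so fast for many rows at high resolution.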


Inverse scattering algorithms

We combined physics and learning to create efficient and accurate heterogeneous inverse scattering algorithms.

From the traditional auto-encoder architecture to the proposed physics-aware architecture (left); clockwise from top left: thick smoke cloud and reconstructed density, albedo, and phase function (right).


 

Light-field

Light-field, or plenoptic, imaging allows refocusing after capture. Because the micro-vascular structure lies at multiple depths, is self-occluding, and is very thin, a fixed-focal-length lens or a small-aperture all-in-focus lens loses information when imaging micro-vasculature. Capturing the entire light field instead allows us to reconstruct the vascular structure in 3D. The team pioneered the fabrication of angle-sensitive pixels that capture the entire light field in a single measurement.

 
 

Design and Characterization of Enhanced Angle Sensitive Pixels

Angle-sensitive pixels are micro-scale devices that capture information about both the intensity and the incident angle of the light they see. These pixels acquire a richer description of incident light than conventional intensity-sensitive pixels. We demonstrate a light-field camera using an image sensor composed of angle-sensitive pixels and a conventional camera lens. Single images captured by our camera can be used directly for both computational refocusing (for enhanced depth of field) and depth-map generation.
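Post-capture refocusing from angular samples can be sketched with the classic shift-and-add rule, assuming the light field is given as a grid of sub-aperture images. The function and the synthetic point source below are illustrative, not the angle-sensitive-pixel pipeline itself:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocus by shift-and-add.

    lightfield: array (U, V, X, Y) of sub-aperture images. Each angular
    sample (u, v) is shifted in proportion to its offset from the central
    view and the refocus parameter alpha, then all views are averaged.
    """
    U, V, X, Y = lightfield.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# A point emitter off the focal plane appears shifted by one pixel per view;
# refocusing with alpha = -1 brings all views back into alignment.
lf = np.zeros((3, 3, 9, 9))
for u in range(3):
    for v in range(3):
        lf[u, v, 4 + (u - 1), 4 + (v - 1)] = 1.0
print(round(refocus(lf, -1.0).max(), 3), round(refocus(lf, 0.0).max(), 3))  # 1.0 0.111
```

Sweeping alpha produces the focal stack that the depth-map generation mentioned above operates on.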

CMOS Angle-Sensitive Pixel Design for Bioscatterography


Time of Travel

Reflected photons, ballistic photons (photons that pass through the tissue without scattering), and heavily scattered photons have different travel times. Though these time differences are tiny, advances in imaging speed allow us to distinguish the nature of photons at the sensor. We therefore propose to tag photons by their time of travel. In the past, the team has developed a set of imaging systems and algorithms that measure the time of travel of photons.

 
 

Seeing through obstructive materials with All Photons Imaging (API)

This project allows us to see through scattering materials using ultra-fast imaging. The method uses the entire optical signal, leading to a new approach called All Photons Imaging (API). Photons with different times of travel suffer different degradations (blur). By separating the photons based on travel time and deblurring each group separately, API computationally images through the scattering medium. The technique finds applications in medical imaging, seeing through fog, and seeing through other obstructive materials.
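The deblur-then-fuse idea can be sketched as follows, assuming known per-bin blur kernels and using plain Wiener deconvolution in place of the paper's method; all names and the synthetic scene are illustrative:

```python
import numpy as np

def wiener_deconv(blurred, kernel, snr=1e3):
    """Frequency-domain Wiener deconvolution with a known blur kernel."""
    K = np.fft.fft2(kernel)
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(B * np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)))

def gaussian_kernel(shape, sigma):
    """Periodic Gaussian blur kernel with its peak at the (0, 0) corner."""
    fy = np.fft.fftfreq(shape[0]) * shape[0]
    fx = np.fft.fftfreq(shape[1]) * shape[1]
    yy, xx = np.meshgrid(fy, fx, indexing="ij")
    k = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma ** 2))
    return k / k.sum()

# Synthetic scene split across three time bins; later bins are blurred more,
# mimicking photons that scattered for longer before reaching the sensor.
rng = np.random.default_rng(1)
scene = np.zeros((32, 32))
scene[rng.integers(0, 32, 6), rng.integers(0, 32, 6)] = 1.0
sigmas, weights = (0.5, 1.5, 3.0), (0.5, 0.3, 0.2)
kernels = [gaussian_kernel(scene.shape, s) for s in sigmas]
bins = [np.real(np.fft.ifft2(np.fft.fft2(w * scene) * np.fft.fft2(k)))
        for w, k in zip(weights, kernels)]

fused = sum(wiener_deconv(b, k) for b, k in zip(bins, kernels))
naive = sum(bins)   # summing raw bins keeps the late-photon blur
err = lambda x: np.linalg.norm(x - scene) / np.linalg.norm(scene)
print(err(fused) < err(naive))  # True: per-bin deblurring recovers detail
```

The gain comes from matching each deconvolution to its bin's degradation instead of deblurring the pooled signal with a single compromise kernel.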


Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion

This project demonstrates a new method to detect and distinguish different types of fluorescent materials.


Looking around corners

The project develops a camera that can look around corners and beyond the line of sight. The camera uses light that travels from the object to the camera indirectly, by reflecting off walls or other obstacles, to reconstruct a 3D shape.


Occluded Imaging with Time-of-Flight Sensors

This project explores whether phase-based time-of-flight (ToF) range cameras can be used for looking around corners and seeing through scattering diffusers.

Reading through a book using THz imaging

The project describes a new imaging system that can read through the pages of a closed book. This system has applications in industrial inspection and in history and preservation, allowing us to read through closed antique documents or inspect samples of cultural value.

Kinect to Detect Cancer Cells

The project – Blind and Reference-free Fluorescence Lifetime Estimation via Consumer Time-of-Flight Sensors – outlines a novel, cost-effective solution for fluorescence lifetime imaging (FLI). FLI is a well-established imaging modality with important applications across the life sciences, including DNA sequencing, malignant tumor detection, and super-resolution microscopy. FLI is generally performed using sophisticated electro-optical instruments that cost several thousand dollars. This work shows that the precision of electro-optical instruments can be traded off against sophistication in the computational methods used for lifetime estimation. To this end, our team members repurpose low-cost time-of-flight (ToF) sensors (e.g., the Microsoft Kinect) for FLI.
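For a single-exponential fluorophore and a sinusoidally modulated source, the textbook phase-fluorometry relation ties the measured phase lag to the lifetime. This standard relation (not the paper's blind, reference-free method) is easy to state; the function name is ours:

```python
import math

def lifetime_from_phase(phase_shift_rad, mod_freq_hz):
    """Single-exponential fluorescence lifetime from the phase delay that a
    sinusoidally modulated source picks up: tau = tan(phi) / (2 * pi * f)."""
    return math.tan(phase_shift_rad) / (2 * math.pi * mod_freq_hz)

# A ToF sensor modulating at 50 MHz that observes a ~32.1 degree phase lag
# implies a lifetime of about 2 ns, typical of organic fluorophores.
tau = lifetime_from_phase(math.radians(32.14), 50e6)
print(tau)  # ~2e-09 seconds
```

A ToF camera already measures exactly such phase delays per pixel, which is what makes the repurposing plausible; the blind method additionally removes the need for a reference fluorophore of known lifetime.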


 

Spectrum

The penetration depth into a scattering medium and the achievable imaging resolution depend on wavelength in opposite ways. Increasing the wavelength lets photons penetrate deeper into the scattering medium, but the imaging resolution decreases due to the diffraction limit; decreasing the wavelength improves resolution at the cost of penetration depth. We propose to employ multiple wavelengths to get the best of both worlds, imaging both deeper and at higher resolution. In the past, the team demonstrated subsurface vein imaging enhancement using multi-spectral light and structured light.
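The resolution side of this trade-off is quantified by the Abbe diffraction limit. As a simple illustration (the function name is ours):

```python
def diffraction_limit_um(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: smallest resolvable feature is about
    lambda / (2 * NA); returned in micrometers."""
    return wavelength_nm / (2 * numerical_aperture) / 1000.0

# Longer wavelengths penetrate deeper but resolve less: at NA = 0.5,
# 450 nm (blue) resolves ~0.45 um while 950 nm (NIR) resolves ~0.95 um.
print(diffraction_limit_um(450, 0.5))  # 0.45
print(diffraction_limit_um(950, 0.5))  # 0.95
```

This is why combining a deep-penetrating NIR band with a high-resolution visible band can outperform either wavelength alone.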

 
 

Subsurface Vein Imaging Enhancement

Direct/global image separation is achieved using high-frequency coded illumination patterns – in this context, a high-frequency checkerboard. After several images are captured with this illumination shifted across the scene, the direct and global components are computationally reconstructed.
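The reconstruction rule for such shifted high-frequency patterns can be sketched with the well-known max/min separation of Nayar et al., here in a minimal form with illustrative names:

```python
import numpy as np

def separate_direct_global(images):
    """Direct/global separation from shifted high-frequency illumination
    (the max/min rule of Nayar et al.): with a checkerboard that lights
    half the scene at a time, each pixel's maximum over the shifts is
    direct + global/2 and its minimum is global/2."""
    stack = np.stack(images)
    i_max = stack.max(axis=0)
    i_min = stack.min(axis=0)
    return i_max - i_min, 2.0 * i_min  # direct component, global component

# Toy pixel: direct reflectance 3, global (scattered) contribution 4, so a
# lit shift reads 3 + 4/2 = 5 and an unlit shift reads 4/2 = 2.
lit, unlit = np.full((4, 4), 5.0), np.full((4, 4), 2.0)
direct, global_ = separate_direct_global([lit, unlit])
print(direct[0, 0], global_[0, 0])  # 3.0 4.0
```

For vein imaging the global component is the interesting one, since light that reaches subsurface veins has necessarily scattered within the tissue.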

Enhancing OCT depth penetration

The team designed interferometric imaging systems that improve the sensitivity and depth penetration of techniques such as optical coherence tomography.

Prototype fiber-interferometric imaging system, using a balanced detector, and combining optical coherence tomography, confocal imaging, and epipolar probing (left); measurements of an optically thick homogeneous scattering sample (density σ_t = 100 mm⁻¹) with different configurations of our system (right).


Programmable OCT

We developed imaging systems that combine the flexibility of programmable structured light with the sensitivity and resolution of optical coherence tomography.

Prototype imaging system that can perform full-frame OCT with arbitrary spatiotemporally coded illumination.


Programmable spectral illumination and imaging

We developed imaging systems that produce illumination and sensor sensitivity functions with arbitrary spectral characteristics.


Confocality & Focal-stack

 

Confocality

Confocal imaging is a traditional approach to imaging through scattering media. Confocal microscopy reduces the effect of scattering during both illumination and detection by combining a pinhole (or bucket detector) with a large-aperture lens. In the past, the team built a scanning confocal endomicroscope (SCEM) using the principles of confocal imaging. The details are as follows:


Endomicroscopy

Endomicroscopy provides clinicians a powerful tool to visualize tissue architecture and cellular morphology for early cancer detection. Due to its minimal invasiveness, probe-based endomicroscopy is widely applicable in the detection and evaluation of neoplasia at many sites, such as the gastrointestinal tract, cervix, pancreas, and lung. Since cancer progression is usually associated with increased cell density that impairs image contrast, optical sectioning can be introduced to reject out-of-focus signal and improve the axial resolution.

Our team previously developed a differential structured illumination microendoscopy system (DSIMe) that uses a reflective spinning disk to better visualize neoplasia-related alterations. Here, we present the first line-scanning confocal endomicroscope based on a digital light projector (DLP) and a CMOS camera, without the need for mechanical scanning. In this novel scanning confocal endomicroscope (SCEM), shown in the left figure below, the rolling shutter of a CMOS detector is used to achieve versatile slit detection without a physical aperture. On the illumination end, a digital light projector synchronized as a spatial light modulator projects matching illumination lines to perform confocal imaging.

Our ex vivo and in vivo validations demonstrate that the SCEM improves the visualization of cell architecture with optical sectioning, especially in crowded regions, when compared with a non-confocal endomicroscope (right figure). Quantitative analysis also reveals enhancement in parameters such as the nuclear-to-cytoplasmic ratio, which can potentially facilitate automated, objective diagnosis based on cell density and morphology. Built in a compact enclosure at low cost (<$5,000), the SCEM offers an opportunity to provide real-time histological information with enhanced contrast and can potentially contribute to improved cancer detection in community and low-resource settings.

Focal-stack

The micro-vascular structure lies at multiple depths, is self-occluding, and is very thin. Hence, a fixed-focal-length lens or a small-aperture all-in-focus lens loses information when imaging micro-vasculature, and we need images captured at different focal settings.
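Given such a focal stack, a minimal depth-from-focus sketch picks, per pixel, the focal slice with the highest local contrast. The function name and the Laplacian sharpness measure are illustrative choices, not the team's reconstruction method:

```python
import numpy as np

def depth_from_focal_stack(stack):
    """Depth-from-focus: per pixel, pick the focal slice where the local
    image is sharpest (highest squared-Laplacian response).

    stack: array (D, H, W), one image per focal setting.
    Returns an (H, W) index map into the focal settings.
    """
    sharp = []
    for img in stack:
        # Discrete Laplacian via shifted copies (periodic boundaries).
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharp.append(lap ** 2)
    return np.argmax(np.stack(sharp), axis=0)

# Two slices, each sharp at a different location: the depth map should
# assign each feature to the slice where it is in focus.
stack = np.zeros((2, 8, 8))
stack[0, 2, 2] = 1.0   # feature in focus at focal setting 0
stack[1, 6, 6] = 1.0   # feature in focus at focal setting 1
depth = depth_from_focal_stack(stack)
print(depth[2, 2], depth[6, 6])  # 0 1
```

Real micro-vessel reconstruction must additionally handle self-occlusion and thin structures, which is why a simple per-pixel sharpness rule is only a starting point.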


Real-time Analysis of Microvascular Blood Flow



3D Reconstruction of Micro-vessels

In the past, the team demonstrated 3D-reconstruction of micro-vessels from a set of images taken with different focal settings.

The team also designed a non-invasive vein-visualization camera that operates in real time on different body parts and is robust to different skin types.