Microsoft patent application US20110310226, "Use of wavefront coding to create a depth image" by Scott McEldowney, proposes a fresh approach to acquiring image depth information.
Here is the original description:
"[A] 3-D depth camera system includes an illuminator and an imaging sensor. The illuminator creates at least one collimated light beam, and a diffractive optical element receives the light beam, and creates diffracted light beams which illuminate a field of view including a human target. The image sensor provides a detected image of the human target using light from the field of view but also includes a phase element which adjusts the image so that the point spread function of each diffractive beam which illuminated the target will be imaged as a double helix. [A] ...processor ...determines depth information of the human target based on the rotation of the double helix of each diffractive order of the detected image, and in response to the depth information, distinguishes motion of the human target in the field of view."
Actually, it's much easier to understand this idea in pictures. Below is the illuminator with diffractive mask 908:
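For intuition, here is a minimal sketch (Python/NumPy, not from the patent) of what such a diffractive element does: a periodic binary phase grating splits a single collimated beam into several diffraction orders. The grating period and the 0/pi phase depth are illustrative assumptions:

```python
# Toy far-field model of a diffractive optical element: a periodic binary
# phase mask splits one collimated beam into multiple diffraction orders.
# The 32-pixel grating period and 0/pi phase depth are illustrative only.
import numpy as np

N = 512
x = np.arange(N)
phase = np.pi * ((x // 16) % 2)                # binary 0/pi grating, period 32 px
field = np.exp(1j * np.tile(phase, (N, 1)))    # unit-amplitude beam after the mask
far = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
far /= far.max()

# Energy appears in discrete off-axis peaks: these are the diffracted beams
# that form the structured illumination pattern on the scene. The 0/pi mask
# suppresses the zeroth order, leaving only odd orders.
row = far[N // 2]                              # the grating varies along x only
orders = np.flatnonzero(row > 0.01) - N // 2
print("diffraction peaks at frequency bins:", orders)
```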
There is another mask 1002 on the sensor side:
Below is the proposed double-helix PSF as a function of distance. One can see that the angle of the line connecting the two points changes with depth:
The orientation angle of the PSF lobes depends on the wavelength (not shown here, see the application) and on the distance (shown below):
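As a toy model (not the patent's actual optics), the double-helix PSF can be pictured as two Gaussian lobes whose connecting line rotates linearly with defocus. The function name double_helix_psf, the lobe separation and width, and the 15 deg/mm rotation rate below are all made-up illustrative numbers:

```python
# Toy double-helix PSF: two Gaussian lobes whose connecting line rotates
# linearly with the depth offset z. All parameters are illustrative,
# not taken from the patent application.
import numpy as np

def double_helix_psf(z_mm, size=32, lobe_r=4.0, sigma=1.5, deg_per_mm=15.0):
    """Render the toy PSF for an object at depth offset z_mm (in mm)."""
    theta = np.deg2rad(deg_per_mm * z_mm)         # lobe-pair orientation
    y, x = np.mgrid[:size, :size] - size / 2.0    # centered pixel grid
    dx, dy = lobe_r * np.cos(theta), lobe_r * np.sin(theta)
    return (np.exp(-((x - dx)**2 + (y - dy)**2) / (2 * sigma**2)) +
            np.exp(-((x + dx)**2 + (y + dy)**2) / (2 * sigma**2)))

# The lobe pair rotates as the object moves through focus:
for z in (-2.0, 0.0, 2.0):
    psf = double_helix_psf(z)
    iy, ix = np.unravel_index(psf.argmax(), psf.shape)
    print(f"z = {z:+.0f} mm -> brightest lobe near pixel ({ix}, {iy})")
```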
From this angle the object distance can be calculated - this is the idea. Microsoft gives an example image and shows how it changes with distance on what looks like a Wide-VGA sensor plane:
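On the processing side, the depth recovery then reduces to measuring the lobe-pair orientation and inverting a calibrated angle-vs-depth curve. Here is a hedged sketch assuming a linear calibration; the image-moment method, the helper lobe_angle, and the 15 deg/mm rate are assumptions for illustration, not the patent's algorithm. Note the orientation is only recovered modulo 180 degrees, so the unambiguous depth range is limited by the rotation rate:

```python
# Depth-recovery sketch: measure the lobe-pair orientation via image second
# moments, then invert an assumed linear angle-vs-depth calibration.
# The 15 deg/mm rate matches the toy model above and is NOT from the patent.
import numpy as np

def lobe_angle(psf):
    """Orientation of the principal axis through the two lobes (radians)."""
    ys, xs = np.mgrid[:psf.shape[0], :psf.shape[1]]
    w = psf / psf.sum()                           # intensity weights
    xc, yc = (w * xs).sum(), (w * ys).sum()       # intensity centroid
    mxx = (w * (xs - xc)**2).sum()
    myy = (w * (ys - yc)**2).sum()
    mxy = (w * (xs - xc) * (ys - yc)).sum()
    return 0.5 * np.arctan2(2 * mxy, mxx - myy)   # angle modulo 180 degrees

# Self-contained test: synthesize a lobe pair at a known 30-degree orientation.
size, r, sigma = 32, 4.0, 1.5
theta_true = np.deg2rad(30.0)
y, x = np.mgrid[:size, :size] - size / 2.0
dx, dy = r * np.cos(theta_true), r * np.sin(theta_true)
psf = (np.exp(-((x - dx)**2 + (y - dy)**2) / (2 * sigma**2)) +
       np.exp(-((x + dx)**2 + (y + dy)**2) / (2 * sigma**2)))

theta = lobe_angle(psf)
z_hat = np.rad2deg(theta) / 15.0                  # assumed 15 deg/mm calibration
print(f"angle = {np.rad2deg(theta):.1f} deg -> depth offset = {z_hat:.2f} mm")
```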
Update: As noted in the comments, the University of Colorado, Denver has been granted patent US7705970 on a very similar idea. A figure in that patent looks remarkably similar: