Looks like there are (much) fewer than a thousand dots involved. Apple didn't actually claim that 30,000 dots hit our face; it's typical Apple PR speak, designed to make something sound like more than it is.

The original 2010 Kinect had the same type of pattern. Which is not a surprise, as it was licensed technology from a company called PrimeSense, which Apple later bought in 2013.

From what I gather, the primary reason for the speckled pattern is that it's easier to recognize which particular dot you're looking at, from the pattern of its surrounding dots. The Kinect 1.0 sensor likewise made use of a pseudo-random dot pattern, produced by a near-infrared laser source, that illuminates the entire field of view. (The Kinect pattern used different-size dots as well, meant for different depth fields.)

The original worked something like this: during assembly, the subsystem is forced to project the dot pattern at a test target, and the device-specific pattern viewed by the camera is burned into the iPhone as a flat template to check against.

When the system later tries to recognize you, it captures an image of the dots on your face and searches it for key dots. Having this image, the processing hardware inside the camera looks at small windows of the captured image and attempts to find the matching dot pattern in the projected pattern. The key dots' X offsets are compared against the stored flat template to determine primary depth markers. (Apparently you can ignore Y offsets for depth.) Then, using the assumption that surrounding dots cannot be too different in depth, it works its way outward from the primary dots to their surrounding dots, "growing" more depth info as it goes.
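To make that matching-and-growing idea concrete, here's a minimal sketch in Python/NumPy. To be clear, this is my own illustration, not PrimeSense's or Apple's actual algorithm: the window size, the search range, the sum-of-squared-differences matching, the per-pixel growth (real hardware would work dot by dot), and the disparity-to-depth mapping are all assumptions made up for the example.

```python
import numpy as np

WIN = 4      # half-width of the matching window (hypothetical value)
MAX_D = 24   # widest X offset searched for key dots (hypothetical value)

def ssd(captured, template, y, x, d):
    """Sum of squared differences between a captured window and the
    flat-template window shifted d pixels along X. Callers must keep
    x >= MAX_D + WIN so the shifted slice stays in bounds."""
    a = captured[y-WIN:y+WIN+1, x-WIN:x+WIN+1].astype(np.float32)
    b = template[y-WIN:y+WIN+1, x-d-WIN:x-d+WIN+1].astype(np.float32)
    return float(np.sum((a - b) ** 2))

def best_offset(captured, template, y, x, lo, hi):
    """X offset in [lo, hi] whose template window best matches.
    Y offsets are ignored: we only ever search along X."""
    errs = [(ssd(captured, template, y, x, d), d) for d in range(lo, hi + 1)]
    return min(errs)[1]

def depth_map(captured, template, seeds):
    """Seeds are (y, x) positions of easily identified 'key dots'.
    Match them over the full offset range, then grow outward under the
    assumption that neighbours have nearly the same disparity."""
    disp = np.full(captured.shape, -1, dtype=np.int32)
    frontier = []
    for y, x in seeds:
        disp[y, x] = best_offset(captured, template, y, x, 0, MAX_D)
        frontier.append((y, x))
    while frontier:
        y, x = frontier.pop()
        d = int(disp[y, x])
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if (WIN <= ny < captured.shape[0] - WIN
                    and MAX_D + WIN <= nx < captured.shape[1] - WIN
                    and disp[ny, nx] < 0):
                # only search near the neighbour's disparity: cheap and robust
                disp[ny, nx] = best_offset(captured, template, ny, nx,
                                           max(0, d - 1), min(MAX_D, d + 1))
                frontier.append((ny, nx))
    # disparity is inversely related to depth; this constant is made up
    return np.where(disp >= 0, 1.0 / np.maximum(disp, 1), np.nan)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = (rng.random((64, 96)) < 0.08).astype(np.uint8) * 255  # fake dots
    captured = np.roll(template, 5, axis=1)  # whole scene shifted 5 px in X
    depth = depth_map(captured, template, seeds=[(32, 48)])
    print(1.0 / depth[32, 48])               # should print 5.0, the known shift
```

Restricting the grown pixels to a d±1 search is where the "surrounding dots can't differ much in depth" assumption pays off: each new pixel is a cheap three-offset test instead of a full scan, and it resists false matches in sparse parts of the pattern.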