In recent years, depth sensing has become essential for a variety of significant new applications. For example, depth sensors assist autonomous cars in navigation and collision prevention. The physical constraints on active depth-sensing mobile devices, such as light detection and ranging (LiDAR), yield sparse depth measurements per scan. This results in a coarse point cloud and requires additional estimation of the missing data. Traditional LiDARs have a restricted scanning mechanism: they measure distance at specified angle intervals, using a fixed number of horizontal scan lines determined by the number of transceivers. An emerging technology uses solid-state depth sensors, based on optical phased arrays with no mechanical parts. In addition, these devices are relatively inexpensive compared with those currently in use. This calls for the development of new, efficient sampling strategies that reduce the reconstruction error per sample.
The technology is based on solid-state LiDARs whose scanning pattern is image-driven and changes over time. The method comprises receiving an image of a scene, segmenting the image into a plurality of segments, obtaining at least one depth sample from each of at least some of the segments, and, for each such segment, assigning the value of the depth sample to every pixel in the segment, thereby creating a depth image of the scene.
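As a minimal sketch of the sampling scheme described above, the following Python code segments a grayscale image, draws one depth sample per segment, and propagates that sample to all pixels in the segment. The intensity-quantile segmentation and the function names (`segment_image`, `depth_from_sparse_samples`) are illustrative assumptions, not the patented method; a real system would use a proper image segmentation and actual LiDAR returns.

```python
import numpy as np

def segment_image(image, n_segments=64):
    """Toy segmentation: quantize pixel intensity into bands.
    Stand-in for a real segmentation (e.g. superpixels); hypothetical helper."""
    # Interior quantile edges split the intensity range into n_segments bins.
    bins = np.quantile(image, np.linspace(0, 1, n_segments + 1)[1:-1])
    return np.digitize(image, bins)  # per-pixel segment id in [0, n_segments)

def depth_from_sparse_samples(image, true_depth, n_segments=64, rng=None):
    """Take one depth measurement per segment and assign its value to
    every pixel in that segment, producing a dense depth image."""
    rng = np.random.default_rng(rng)
    labels = segment_image(image, n_segments)
    depth = np.zeros_like(true_depth, dtype=float)
    for seg_id in np.unique(labels):
        ys, xs = np.nonzero(labels == seg_id)
        i = rng.integers(len(ys))  # one simulated LiDAR sample in this segment
        depth[labels == seg_id] = true_depth[ys[i], xs[i]]
    return depth
```

With, say, 64 segments on a 640×480 image, the reconstruction uses only 64 depth samples per frame instead of a dense scan, which is the source of the per-sample efficiency gain claimed above.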
- Increases the effective resolution of the system, allowing either lower power or smaller devices for a given accuracy, or a reduced depth reconstruction error
- Designs the adaptive sampling patterns in real time, based on the current scene, so that the sparse depth can be reconstructed most effectively (increased resolution and depth accuracy)
Applications and Opportunities
- The device is highly applicable to robotics and autonomous navigation, which have a large commercial market