Oct 8, 2021

While the term 3D sensing might appear complex at first, the basic concept behind it is fairly easy to approach. When we think of traditional cameras or sensors, the practical information we get about an object is limited. At most, a sensor will tell you the distance between itself and another object, and we can use photographs to compare the scale of various items in a frame.

3D sensing technologies, on the other hand, are designed to truly interpret the three-dimensional properties of the objects or individuals in front of them. A 3D sensing camera obtains a sense of depth, length, and width, and can even analyze your features and shape if you stand in front of it. Compared to classic cameras, which only capture “flat” information about their environment, three-dimensional cameras can properly analyze a 3D scene, and this opens up countless new applications for cameras, such as face recognition.

While it would be easy to dismiss 3D sensing technologies as futuristic, depth cameras are already available on the market, and more and more brands and industries have started implementing the technology. So today we’ll take a look at the main types and applications of 3D sensing and discover just how this technology is changing the world.

Types of 3D Sensing

3D sensing technologies come in various forms, and despite producing similar results, the processes behind them vary considerably. To properly understand these systems, let’s take a look at the most common types of 3D sensing available.

Stereoscopic Vision

Stereoscopic vision is one of the most interesting forms of 3D sensing technology, as in broad terms it interprets visual information the same way our eyes do.

Stereo vision relies on two separate cameras that capture the same scene; however, the cameras themselves are slightly offset, a positioning once again reminiscent of human eyes. Each camera captures an image of the object in question, and those two images are then combined by specialized software.

Since the two pictures show the object from slightly different angles, the software measures the small disparities between corresponding points and uses them to triangulate depth, building a 3D image out of this information.
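To make the geometry concrete, here is a minimal Python sketch of the standard disparity-to-depth relation for a rectified stereo pair. The function and variable names are illustrative assumptions, and a real pipeline would first have to match corresponding points across the two images.

```python
# Minimal sketch: recovering depth from stereo disparity, assuming a
# rectified camera pair with a known focal length (in pixels) and
# baseline (distance between the two cameras, in meters).

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth Z (meters) of a point whose image shifts by `disparity_px`
    pixels between the left and right views: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# Example: a 700 px focal length, a 6 cm baseline, and a 20 px disparity
# place the point at 700 * 0.06 / 20 = 2.1 m from the cameras.
print(depth_from_disparity(20, 700, 0.06))  # -> 2.1
```

Note how depth is inversely proportional to disparity: nearby objects shift a lot between the two views, while distant ones barely move, which is why stereo accuracy degrades with range.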

While stereo vision is an effective and fairly straightforward method of 3D imaging, it does have a few limitations to consider. First and foremost, it is not a real-time process: the depth calculation takes time to generate a proper 3D image, and speeding it up can lower the quality of the reconstruction.

Assisted stereoscopic vision adds a laser projection module to the setup, which projects dots onto the environment and the object. These dots help the cameras focus more easily and provide additional points of comparison for the depth calculation.

Structured Light Pattern

Structured light is another form of 3D sensing that relies on triangulation to measure depth. However, where stereo vision requires two cameras, structured light lets us employ 3D sensing with a single camera.

Traditionally, a structured light setup consists of a laser projection module and a camera. The module projects a light pattern with a simple shape (traditionally lines or squares) onto the object, creating a distorted pattern across the scanned area. A camera mounted at an angle to the module then captures the reflected light.

Triangulating the information from the projector and the camera allows this relatively simple setup to analyze and obtain the 3D coordinates of a given object with a single camera, and in a relatively fast manner.
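As an illustration, here is a minimal Python sketch of the laser-camera triangulation underlying structured light. It assumes we already know the laser's projection angle and the angle at which the camera observes the reflected dot; the names are illustrative rather than taken from any particular scanner's SDK.

```python
import math

def triangulate_depth(baseline_m: float,
                      alpha_rad: float,
                      beta_rad: float) -> float:
    """Perpendicular distance from the camera-projector baseline to the
    lit point. `alpha_rad` is the laser's projection angle and `beta_rad`
    the camera's viewing angle, both measured from the baseline.
    By the law of sines: z = b * sin(alpha) * sin(beta) / sin(alpha + beta)."""
    return (baseline_m * math.sin(alpha_rad) * math.sin(beta_rad)
            / math.sin(alpha_rad + beta_rad))

# Example: a 10 cm baseline with both angles at 60 degrees places the
# point roughly 8.7 cm from the baseline.
print(triangulate_depth(0.10, math.radians(60), math.radians(60)))  # ~0.0866
```

In practice the camera angle is recovered from where each projected dot or stripe lands on the image sensor, so the same calculation runs once per pattern feature to build a full depth map.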

While a structured light setup might sound bulky, it has already been successfully miniaturized for the smartphone market. The iPhone X features what Apple calls a “TrueDepth” camera to scan its user’s face. In reality, this 3D sensor is a structured light setup that uses both the front camera and an infrared emitter that projects dots onto the user’s face. The iPhone X then compares the camera and infrared information to scan the face in 3D and determine whether the person in front of the camera is the phone’s owner.

Time of Flight (ToF)

While the above methods of 3D sensing rely on triangulation and a principle similar to human sight, Time of Flight technology instead relies on light and its travel speed to determine depth.

Direct Time of Flight systems emit short pulses of light at timed intervals. The system then measures the return time of the reflected light to calculate the distance between the light source and an object. Timing the reflected light allows the system to calculate distances much like a sonar would and to understand the overall volume of nearby objects. However, the resolution will ultimately depend on the amount of light and the calculation speed of the system.
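The calculation itself is simple enough to show in a few lines of Python; the hard engineering problem is timing the pulse precisely, since light covers about 30 cm per nanosecond. This is a minimal sketch with illustrative names:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_pulse(round_trip_s: float) -> float:
    """The pulse travels to the object and back, so the distance is
    half the round-trip time multiplied by the speed of light."""
    return SPEED_OF_LIGHT * round_trip_s / 2

# Example: a pulse that returns after 20 nanoseconds indicates an
# object roughly 3 meters away.
print(distance_from_pulse(20e-9))  # ~3.0
```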

Indirect Time of Flight systems, on the other hand, use a continuous light source modulated at a set frequency. Instead of measuring distance through return time, these systems analyze the phase of the returned light. Since the light has a known, continuous frequency, distance can be discerned from the phase shift alone. However, because the phase wraps around with every full cycle, indirect Time of Flight excels at short range, with a practical range of roughly 30 meters.
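To show where that limit comes from, here is a minimal Python sketch of the phase-to-distance relation. For a modulation frequency f, a phase shift phi maps to a distance of c·phi / (4·pi·f), and because the phase wraps every full cycle, distances are only unambiguous up to c / (2·f). The 5 MHz figure below is an illustrative value chosen to match the roughly 30-meter range mentioned above.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Distance implied by the phase shift of the returned light:
    d = c * phi / (4 * pi * f)."""
    return SPEED_OF_LIGHT * phase_rad / (4 * math.pi * mod_freq_hz)

def ambiguity_range_m(mod_freq_hz: float) -> float:
    """Maximum unambiguous distance before the phase wraps: c / (2 * f)."""
    return SPEED_OF_LIGHT / (2 * mod_freq_hz)

# Example: a 5 MHz modulation wraps after roughly 30 m.
print(ambiguity_range_m(5e6))  # ~29.98
```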

What are some typical 3D sensing applications?

The relative simplicity of 3D sensing systems has made practical applications of the technology commonplace in recent years. Above we saw the iPhone X’s face recognition, but structured light is a reliable way to give 3D computer vision to almost any device. In consumer electronics like phones and laptops, facial recognition, gesture recognition, and eye tracking all rely on 3D sensing technologies and are present on many modern smartphones.

Drones and other autonomous machinery also employ 3D sensors to avoid collisions: with distances to surrounding objects estimated in real time, these robots can navigate their environments safely even when new objects enter their path.
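As a toy illustration of that idea, here is a hypothetical Python sketch in which a robot stops whenever anything in its forward-facing depth map comes closer than a safety threshold. The depth-map format and names are assumptions, not any specific sensor’s API:

```python
import numpy as np

SAFETY_DISTANCE_M = 0.5  # stop if anything is closer than this

def obstacle_ahead(depth_map: np.ndarray) -> bool:
    """`depth_map` is a 2D array of distances in meters from a depth
    sensor. Many sensors report 0 for 'no reading', so ignore those."""
    valid = depth_map[depth_map > 0]
    return valid.size > 0 and float(valid.min()) < SAFETY_DISTANCE_M

# Example: a wall 2 m away is fine, but an object at 0.3 m triggers a stop.
frame = np.full((480, 640), 2.0)
frame[200:220, 300:320] = 0.3
print(obstacle_ahead(frame))  # True
```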

3D sensing even sees use in 3D printing and the automotive industry. Generally speaking, any digital system can use 3D sensing to interact with its user, and any machinery that needs to move can take advantage of it to plan its route and movement.

Conclusion

Traditionally, cameras have operated in a 2D environment, which limits their practical applications in more complex systems. Modern 3D sensing cameras, however, are not only efficient but also relatively straightforward to build and operate. Simple setups allow computer and robotic systems to interpret their environment in 3D, and the technology has progressed so fast that we are already seeing it in our phones and laptops.

3D sensing technology is an important breakthrough in the field of human-system interactions, and while it’s far from perfect, it’s already seeing multiple practical applications. 3D sensing is here to stay, and it won’t be too long before it becomes commonplace in society.