Dec 28, 2021
  1. What is a ToF?
  2. How Does a Time of Flight Sensor Work?
  3. Point cloud data
  4. ToF vs LiDAR
  5. Using ToF and RGBD cameras
  6. What Are the Benefits of Using ToF?
  7. Key Application Areas
  8. 3D time-of-flight
  9. ToF Examples
  10. Summary

A Time of Flight (ToF) sensor, or Time of Flight camera, is a sensing system designed to answer a single question accurately: “How far away is this object?”

Traditional cameras, for all their HD resolution and vivid colors, share one undeniable limitation: they don’t convey a sense of distance. Unlike human eyesight, cameras can’t interpret depth on their own. ToF sensors were developed to overcome this obstacle and allow camera-reliant machinery to interact better with its environment.

Reference: https://www.youtube.com/watch?v=cCsqSjelyk8

What is a ToF?

Time of Flight, as the name implies, refers to the amount of time it takes an object or element to travel a given distance. Traditionally, however, ToF refers specifically to the time it takes a wave or particle to travel to an object and bounce back to the source it originated from. It’s a very simple principle that relies only on basic motion mechanics, but by interpreting the measured times and comparing them against the expected behavior and speed of the particle, it becomes possible to learn more about the object in question or about the particle used in the process.

ToF measurements can provide information about a wave or particle itself, such as flow rate or composition, but the most popular application comes in the form of ToF cameras designed to measure distance for ranging and mapping.

Reference: https://www.youtube.com/watch?v=cCsqSjelyk8

How Does a Time of Flight Sensor Work?

A Time of Flight sensor works on a simple principle: a particle or wave is emitted from the device, comes into contact with nearby objects, and bounces back. The time it takes the wave to return is registered and, combined with the known speed of the wave, a simple calculation gives the distance between the sensor and the object.

Reference: LG

Traditionally this is done with a ToF laser sensor. The camera fires an infrared pulse from the laser, which bounces back almost immediately, so depth can be calculated nearly instantly and without bulky or complicated machinery. There are various methods of depth measurement, and not all ToF sensors need to rely directly on infrared light, but this array is one of the most popular and also one of the easiest to understand.

Reference: seeedstudio
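
As a back-of-the-envelope illustration of the calculation described above, here is a minimal Python sketch. The 20-nanosecond round-trip time is a made-up example value, not a reading from any particular sensor:

```python
# Minimal sketch: distance from a round-trip time-of-flight measurement.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """The pulse travels to the object and back, so halve the total path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 20 nanoseconds indicates an object ~3 m away.
print(tof_distance(20e-9))  # ~2.998 metres
```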

Point cloud data

Point cloud data refers to a series of points that are used to represent the coordinates of a given object in a virtual space. An easy way to understand point cloud data is to compare them to pixels. On a screen, pixels have a positional value and a color value which allows a computer or cellphone to display a complete image. The computer doesn’t have a complete replica of the object being displayed, but that specific information allows us to take a look at a perfect representation of it.

Point cloud data similarly has a positional value for each point, but here every point carries positional information on all three axes, meaning the points can be arranged in a 3D space and replicate 3D environments as well. As such, a point cloud can best be seen as the collection of depth values obtained through a ToF sensor or a similar technology.
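
To make the idea concrete, here is a hedged sketch of how per-pixel depth values become 3D points. The pinhole-camera intrinsics (fx, fy, cx, cy) and the tiny 4x4 sensor are illustrative assumptions, not values from any particular device:

```python
# Minimal sketch: turning a depth map into a point cloud
# using a pinhole camera model with assumed intrinsics.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depth values from a ToF sensor.
    Returns an (H*W, 3) array of (x, y, z) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # back-project pixel column to metres
    y = (v - cy) * z / fy  # back-project pixel row to metres
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a flat wall 2 m away, seen by a hypothetical 4x4 sensor.
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=500, fy=500, cx=2, cy=2)
print(cloud.shape)  # (16, 3)
```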

ToF vs LiDAR

LiDAR (Light Detection and Ranging) is a methodology that employs laser technology for imaging and mapping in real time, usually by creating a point cloud map. This of course sounds incredibly similar to Time of Flight in general, which raises the question: what makes them different?

The first thing to understand is that LiDAR is an application of ToF technology: all forms of LiDAR use ToF principles, but not all ToF sensors are a form of LiDAR. The main way to tell them apart is to remember that ToF doesn’t have to rely on lasers; ToF calculations can also be made with traditional RGB cameras. The resulting images therefore tend to be very different, as LiDAR registers no visual information or color.

Using ToF and RGBD cameras

Now that we have established that ToF cameras can operate without lasers, it’s worth expanding on this concept to truly grasp how these devices work. As we saw above, LiDAR and other technologies that rely solely on lasers can’t create a complete visual representation of their environment, which is why the simultaneous use of ToF and RGBD technology is ideal when you want to construct a complete 3D representation of an object or environment.

Visual information is of course handled by the traditional RGB side of the camera, but cameras also have several ways to measure ToF without relying on lasers. Usually this is done with light, infrared or otherwise: the camera emits its own light source as the wave to be analyzed and, based on the time it takes to return, assigns a depth value to each area of the resulting image.

Reference: https://www.youtube.com/watch?v=cCsqSjelyk8
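
A minimal sketch of what this pairing yields in practice, assuming the color image and the ToF depth map are already registered to the same pixel grid (the shapes and units here are illustrative):

```python
import numpy as np

def make_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Pair each RGB pixel with its ToF depth value.

    rgb:   (H, W, 3) uint8 color image.
    depth: (H, W) float32 depth in metres from the ToF sensor.
    Returns an (H, W, 4) float32 array: R, G, B in [0, 1] plus depth.
    """
    assert rgb.shape[:2] == depth.shape, "images must be registered"
    return np.dstack([rgb.astype(np.float32) / 255.0, depth])
```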

What Are the Benefits of Using ToF?

Using ToF over other similar technologies has various advantages; chief among them is its overall simplicity. Alternative methods of 3D imaging and depth perception can be bulky, require moving parts, and are generally more complex than the simple array ToF employs. As a result, ToF technology can be installed easily on most modern cameras and is finding a place even in smartphone camera arrays.

ToF also has the clear advantage of being efficient and fast. ToF calculations are very easy to make, so they demand little processing power and the system can complete its imaging much faster. When combined with RGBD technology, ToF also gains the ability to capture color and visual data, making the resulting images more than just organized coordinates.

Key Application Areas

ToF mainly provides a way to interpret depth and distance almost instantly, which gives it multiple applications in the field of robotics. Self-driving cars are possibly the most popular and impressive application of ToF: the input of LiDAR and ToF sensors is essential for a vehicle to understand the distance and speed of other vehicles relative to itself. ToF is an essential input for autonomous vehicle sensing and one of the key enablers of these new technologies.

Reference: Machine Vision Blog

Not all applications are tied to robotics, however, and chances are you already use ToF technology through your smartphone. As mentioned before, phones can employ ToF, and they use it for everyday tasks like facial recognition and gesture control. Added depth lets your phone’s camera better interpret the world in front of it, making it possible to recognize faces and gestures accurately in a way that wouldn’t be possible otherwise.

3D time-of-flight

3D ToF is a specialized form of the technology described above which relies on a 3D Time of Flight camera to obtain information and imaging of multiple objects in a 3D environment at once. While this might sound like a drastic change, it ultimately comes down to how ToF is calculated and what kind of source is emitted.

In 3D ToF the device uses light pulses lasting nanoseconds to capture depth information for an entire area. Each light pulse is a wave that the device registers, and because of the way these pulses spread, it becomes possible to compile depth information over a wide area. The overall principle remains the same, but the range of application tends to be wider.

Reference: Analog Devices
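
Conceptually, this extends the single-distance formula from earlier to every pixel at once. A hedged sketch, with a made-up 3x3 sensor and made-up round-trip times:

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def depth_map_from_times(round_trip_ns: np.ndarray) -> np.ndarray:
    """Per-pixel round-trip times (nanoseconds) -> per-pixel depth (metres)."""
    return SPEED_OF_LIGHT * (round_trip_ns * 1e-9) / 2.0

# Hypothetical 3x3 sensor: every pixel times its own echo of one pulse,
# so a single flash of light yields a full depth map (values near 2 m).
times_ns = np.array([[13.3, 13.4, 13.3],
                     [13.5, 13.3, 13.4],
                     [13.4, 13.5, 13.6]])
print(depth_map_from_times(times_ns))
```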

ToF Examples

We’ve already seen some practical applications of ToF, but that only scratches the surface of the ways this technology is used. The most popular example of ToF technology is easily LiDAR, which was introduced as early as the 1960s and continues to be widely used today in the field of robotics. The second most popular example of ToF cameras is smartphones; while not all phones on the market ship with ToF cameras, they are becoming more commonplace and are usually the most accessible ToF devices around. ToF has also found wide use in recreation and entertainment, with popular tracking hardware like the Xbox Kinect using ToF technology to bring movement into video games as early as 2014.

Summary

Time of Flight is, at the end of the day, a fairly simple concept that lets us interpret depth based on the time it takes a wave to bounce off an object and return to its source. That simplicity is ultimately the key to its success: ToF is, generally speaking, the most efficient and approachable technology for calculating depth, which means it’s easy to install on most photographic devices and easy to use alongside other technologies.

ToF is efficient and reliable, and its myriad applications in day-to-day life prove that sometimes the simpler solution is undeniably the best one.

Speaking of localization and positioning technologies, ToF sensors occupy the same niche as LiDAR. Dioram SLAM One allows you to mix data from different sources: its visual-inertial technology requires a depth map or point cloud obtained by any means available, be it LiDAR, a ToF sensor, or depth recovery from stereo visual data. Dioram solutions support all of these data sources.

ToF sensors can be tricky under certain conditions. At the same time, thanks to sensor fusion, visual data can improve their accuracy and expand their scope.