Oct 8, 2021


If you are involved in the world of automated robots or vehicles, then chances are you’ve heard the term SLAM brought up more than a few times. However, SLAM can be a complicated technology to properly grasp, so today we’d like to share some key concepts to understand what exactly this technology is and what it offers in our modern world.

SLAM, or Simultaneous Localization And Mapping, is a technology designed to allow an agent to map its surrounding environment while simultaneously navigating it. We specifically use the term “technology” because SLAM is more than a single piece of software or a single algorithm. SLAM is a continuously developing family of techniques that are constantly refined to run on multiple software platforms and to adopt algorithms suited to the needs of the environment and the machine.

In short, SLAM technology allows an automated agent like a car or a robot to map its environment in real time so that it can navigate it safely and reliably. This ultimately means that SLAM has two main functions: mapping environments with the assistance of automated machinery, and allowing automated machinery to understand its location relative to its surrounding environment.
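To make those two functions a bit more concrete, here is a deliberately tiny Python sketch of the predict-and-update loop that most SLAM systems share. The class and method names are ours, purely for illustration; a real system would fuse sensor-frame measurements with proper uncertainty handling (for example an extended Kalman filter or a factor graph) rather than the simple averaging shown here.

```python
import numpy as np

class MinimalSLAM:
    """Toy illustration of the predict/update loop shared by most SLAM systems."""

    def __init__(self):
        self.pose = np.zeros(3)   # x, y, heading of the agent
        self.landmarks = {}       # landmark id -> estimated (x, y) position in the map

    def predict(self, odometry):
        """Push the pose estimate forward using odometry (dx, dy, dtheta)."""
        self.pose += np.asarray(odometry, dtype=float)

    def update(self, observations):
        """Fuse observations given as {landmark_id: (x, y) measured in the world frame}."""
        for lid, measured in observations.items():
            measured = np.asarray(measured, dtype=float)
            if lid not in self.landmarks:
                # Mapping: a newly seen landmark extends the map.
                self.landmarks[lid] = measured
            else:
                # Localization: disagreement between the stored landmark and the new
                # measurement nudges both the map and the drifting pose estimate.
                error = measured - self.landmarks[lid]
                self.landmarks[lid] += 0.5 * error
                self.pose[:2] -= 0.5 * error


slam = MinimalSLAM()
slam.predict((1.0, 0.0, 0.0))          # the agent believes it moved 1 m along x
slam.update({"door": (1.2, 0.1)})      # first sighting: the landmark is added to the map
slam.predict((1.0, 0.0, 0.0))
slam.update({"door": (1.1, 0.05)})     # re-sighting: both pose and map get corrected
print(slam.pose, slam.landmarks)
```

The detail worth noticing is that the same observation is used twice: once to refine the map and once to correct the agent’s own position estimate, which is exactly the “simultaneous” part of the name.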

Mapping

As we just mentioned, mapping is one of the primary functions of any SLAM system, but what makes the SLAM method so different from traditional mapping? First, let’s look at how traditional mapping worked.

If there’s one constant in the history of mapping, it’s that the process tends to be slow and cumbersome. For most of our history, mapping relied on manual measurement with tape measures and handheld laser measuring devices. Eventually, mapping technology evolved with the invention of total stations and terrestrial laser scanners. But these devices were heavy, large, and had to remain fixed on their tripods.

SLAM mapping, on the other hand, allows a single operator with a dedicated rig, or even an automated agent, to map an environment on their own. SLAM uses cameras and sensors that, together with the SLAM algorithm, interpret and map the world in real time. Not only is the process generally faster, it also has the advantage of being mobile. A single operator can complete a 3D SLAM map simply by walking around an area, trivializing what used to be a colossal task.

While the calculations SLAM relies on are complex and require an advanced algorithm that interprets visual and non-visual information in real time to estimate distances, operating a SLAM system itself is relatively easy. So SLAM mapping stands as one of the most practical options on the modern market for mapping both small- and large-scale areas.
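If you’re curious what that map looks like under the hood, here is a rough sketch of the stitching step: each batch of measured points arrives in the sensor’s own frame, and the poses estimated by the SLAM algorithm are used to place them into one shared world frame. The function names and the toy scans below are ours, purely for illustration.

```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """2D rigid transform (world <- sensor) for a pose estimated by the SLAM algorithm."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def accumulate_map(scans_with_poses):
    """Stitch per-position scans (N x 2 points in the sensor frame) into one global map."""
    global_points = []
    for points, (x, y, theta) in scans_with_poses:
        T = pose_to_matrix(x, y, theta)
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        global_points.append((T @ homogeneous.T).T[:, :2])
    return np.vstack(global_points)

# Two scans of the same wall, taken one meter apart as the operator walks forward.
scan_a = np.array([[2.0, 0.0], [2.0, 1.0]])
scan_b = np.array([[1.0, 0.0], [1.0, 1.0]])
world_map = accumulate_map([(scan_a, (0.0, 0.0, 0.0)),
                            (scan_b, (1.0, 0.0, 0.0))])
print(world_map)   # the overlapping wall points land on the same world coordinates
```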


Reference: Erik Nelson, University of California, Berkeley

Localization

While real-time mapping is the most immediately impressive feature of SLAM systems, this doesn’t mean SLAM technology is limited to mapping new roads or buildings. As we mentioned, SLAM allows an agent not only to map its surrounding environment but also to understand its location relative to it. In other words, SLAM is also a great navigational tool, and for most of us, this will be the most relevant use of the technology.

A robot using SLAM can plan its path and successfully navigate an environment even the first time it explores it. We tend to overlook how complex it is to move around a 3D environment, since we can simply observe everything around us, but machines don’t have “sight” built into them per se, so they need SLAM to be able to understand their surrounding world at all.


Reference: NVIDIA Isaac platform for robotics at GTC 2018

Now, the term robot can often feel futuristic and removed from our times, but chances are you have already interacted with a robot that uses SLAM technology. If you own a Roomba or another robot vacuum cleaner, you may well have a SLAM robot in your house already. Many automatic cleaners heavily promote the fact that they can understand your living room and not only avoid collisions but plan a cleaning route for themselves. This is done through SLAM algorithms and is a very clear example of how the technology helps machines locate themselves in a 3D space.

And SLAM applications are only bound to increase in number. Roombas might be a nice novelty, but automated vehicles are undeniably the way of the future. And just like Roombas, automated cars rely heavily on SLAM technology to navigate complex roads in real time. SLAM is here to stay, and its practical applications can’t be denied.

Types of SLAM

When we were just starting out, we brought up that there are various ways to use SLAM and that its software and even its components can change considerably between implementations. SLAM is first and foremost a way to interpret information, meaning that the technology and calculations employed to achieve this can vary.

That said, when it comes to the hardware there is an easy way to categorize the different types of SLAM employed nowadays, and it all comes down to the sensors they use. So, let’s take a closer look at Visual SLAM and LiDAR SLAM to understand how the type of sensor changes the way SLAM works.

Visual SLAM

The first type of SLAM we’ll look at is Visual SLAM, which, as the name implies, relies mostly on visual data to estimate the distance to nearby objects.

Visual SLAM, or vSLAM for short, refers to SLAM systems that use cameras and image sensors to gather information about their environment. And while it would be easy to assume that SLAM requires incredibly advanced cameras given its complex functions, this couldn’t be further from the truth. vSLAM systems can reliably map their environment with simple single-lens cameras, though some systems do use more advanced technology such as compound-eye cameras and RGB-D cameras, the latter of which are designed to sense depth.

The main advantage of vSLAM is its lower cost compared with the alternatives. As we brought up, vSLAM cameras don’t have to be overly complex, which means vSLAM systems can be built with relatively inexpensive cameras and still function well. On top of that, there’s the simple fact that cameras capture a lot of information. The cameras in a vSLAM system can easily detect landmarks and estimate the distance to them from multiple positions, providing flexibility when it comes to mapping out complex 3D spaces.

SLAM systems can even run with a single camera, in which case they are known as Monocular SLAM. However, while this layout makes SLAM technology even more affordable, a single camera cannot recover absolute scale on its own, so a Monocular SLAM setup needs assistance from additional sensors or human input to provide a truly accurate map of its surroundings.
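As a rough illustration of what a visual SLAM front end does with two ordinary frames, the sketch below uses OpenCV to match ORB features between images and recover the relative camera motion from the essential matrix. The file names and camera intrinsics are placeholders you would replace with your own calibration; notice that the recovered translation is only a direction, which is exactly the missing-scale problem that forces monocular setups to lean on extra sensors or input.

```python
import cv2
import numpy as np

# Placeholder intrinsics and image paths: swap in your calibrated camera and frames.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix and recover the relative rotation and translation.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Rotation between frames:\n", R)
print("Translation direction (scale is unknown with one camera):", t.ravel())
```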

LiDAR SLAM

The other type of SLAM we need to look at is LiDAR SLAM. LiDAR is an acronym that stands for both “Light Detection and Ranging” and “Laser Imaging, Detection, and Ranging”. LiDAR is a method for measuring the distance to objects using lasers, allowing for accurate, fast measurements as well as 3D representations of a space. So, in short, LiDAR SLAM is a type of SLAM that uses lasers as its main sensor system.

The main advantage LiDAR SLAM systems offer is their improved precision. Lasers provide accurate distance readings compared with the estimations cameras rely on, meaning that LiDAR SLAM systems are ideal for fast-moving agents like cars and drones that need to move precisely at high speed.


Reference: Smart Mobility Research Team, Robot Innovation Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan

LiDAR SLAM can be configured to analyze only two axes (X, Y) or to interpret all three (X, Y, Z). Additionally, these systems use what are commonly known as “point clouds”: data points in 3D space that represent the location of objects in the environment. So, LiDAR SLAM systems can keep a vehicle on a safe track and still form a proper internal map of the locations it has just traversed. Point clouds also give the vehicle a simple way to understand its location, since it can calculate its movement relative to these points.
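To show how motion can be read off by comparing two point clouds, here is a stripped-down point-to-point ICP sketch using only NumPy and SciPy. Real LiDAR SLAM pipelines add outlier rejection, motion priors, and much better initialization, so treat this as an illustration of the matching idea rather than production code.

```python
import numpy as np
from scipy.spatial import cKDTree

def align_scan(prev_cloud, new_cloud, iterations=20):
    """Estimate the rigid transform that maps new_cloud onto prev_cloud (basic point-to-point ICP)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    src = new_cloud.copy()
    tree = cKDTree(prev_cloud)
    for _ in range(iterations):
        # Pair every point of the new scan with its nearest neighbor in the previous one.
        _, idx = tree.query(src)
        dst = prev_cloud[idx]
        # Closed-form rigid alignment of the paired sets (Kabsch / SVD).
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = (U @ D @ Vt).T
        t = dst.mean(0) - R @ src.mean(0)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total   # the transform that re-aligns the new scan with the old one

# Toy example: the new scan is the previous scan seen after a small shift.
prev_scan = np.random.rand(1000, 3) * 10.0
new_scan = prev_scan + np.array([0.2, 0.1, 0.0])
R, t = align_scan(prev_scan, new_scan)
print("Estimated translation:", t)   # roughly [-0.2, -0.1, 0.0], undoing the shift between scans
```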

Despite these advantages, it’s undeniable that point clouds are simply not as detailed as traditional photography. A LiDAR SLAM map is harder to interpret, and when small objects are involved the vehicle might lose track of its location relative to its surroundings. Additionally, point cloud matching is a demanding task, so the system will need a lot of processing power to work as intended.

Challenges with SLAM

While SLAM is a promising technology and already has a fair amount of practical applications this doesn’t mean it’s completely flawless. Every developing technology faces its own set of challenges, and SLAM is no exception.

The most common issue with SLAM is that it is prone to accumulating localization errors. We touched on this when we discussed LiDAR SLAM, but even in slow-moving agents this can be an issue. The simple fact is that every sensor has some margin of error, and since the SLAM algorithm continually compares landmarks, these small deviations can accumulate over time.
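A quick numerical sketch makes that accumulation easy to see: give every incremental motion estimate a tiny random error and the position error keeps growing with the distance traveled instead of staying bounded. The noise level below is made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_step = np.array([1.0, 0.0])   # the agent really moves 1 m per step
noise_std = 0.02                   # 2 cm of error on each measured step (illustrative)

truth = np.zeros(2)
estimate = np.zeros(2)
for _ in range(1000):
    truth += true_step
    estimate += true_step + rng.normal(0.0, noise_std, size=2)   # each step adds a tiny error

print("distance traveled:", np.linalg.norm(truth))                      # 1000 m
print("accumulated position error:", np.linalg.norm(estimate - truth))  # grows roughly with the square root of the step count
```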

SLAM also has occasional issues detecting the movement of its agent, resulting in unexpected localization failures. And last but not least, since SLAM runs a lot of complex calculations in the background, these systems can demand a lot of processing power.

However, while these challenges are real, they aren’t insurmountable. Manual adjustment and additional landmark input can help prevent the accumulation of location errors as well as localization failures. And the computational demand can be handled with parallel processing and multiple GPUs working together.

SLAM faces challenges like any other technology, but as its popularity continues to grow, these problems become easier to deal with and far less prevalent.

Dioram SLAM

Dioram SLAM One is the main product of our company. In 2015 we decided to pursue the dream of a complete end-to-end neural-network visual SLAM, one that would dramatically lower sensor costs while increasing the accuracy and robustness of localization. It is a long road, but at the moment Dioram SLAM One is a state-of-the-art localization and mapping technology that works with just vision cameras and an IMU, without the need for expensive lidars.

We customize the core tech behind Dioram SLAM One and adapt it to the particular needs of each customer, whether that is an outdoor dense-mapping solution for urban spaces, a cheaper localization alternative for a delivery robot, or professional sensor auto-calibration tools for an autonomous car manufacturer.