When it comes to the world of autonomous vehicles, and technology in general, few names carry as much weight as Tesla and Elon Musk. The American electric vehicle company is one of the most important players in autonomous cars and one of the strongest supporters of a future where all cars are automated.
Tesla’s venture into the world of automated driving started as early as 2013 when CEO Elon Musk discussed the practical applications of autopilot systems and how these could be integrated into cars. As Elon Musk put it, autopilot is a great feature to have in airplanes, and there’s no reason why it shouldn’t be available in cars.
Since then, Tesla has made great advances in automated vehicles, and as a result, all Tesla vehicles released after 2014 have shipped with Autopilot software, or at the very least hardware compatible with these systems. Autonomous driving has become synonymous with the Tesla brand itself, and the company has pushed constantly to secure these vehicles a larger place in the general market with each passing year.
Modern Tesla cars can autosteer and maintain cruise control within a lane, auto-park, maneuver out of tight spaces, and navigate on Autopilot under certain circumstances. All of this is achieved through a combination of Tesla’s Autopilot and Full Self-Driving technologies, but despite all of this, the future of Tesla autonomous vehicles still needs some fine-tuning, because these cars are not yet completely autonomous.
While Tesla’s automated features are nothing short of impressive, and among the most advanced on the market, there’s still a lot of work ahead before Tesla’s cars achieve full autonomy and Tesla driverless cars become a reality. However, to explain why these vehicles aren’t 100% autonomous and what is required for them to reach this level, we first need to cover what Tesla currently offers the market.
Autopilot is the name of Tesla’s suite of advanced driver-assistance features, which includes most of the “autonomous” capabilities the brand is known for among its customers and the general public. Autopilot works through a combination of complex camera and sensor technology and cutting-edge software that allows the vehicle to make informed driving decisions in real time without direct user input.
On the hardware side, most recent Tesla vehicles come equipped with a total of 8 external cameras, 12 separate ultrasonic sensors, and an onboard computer designed to analyze and interpret all of this raw data. Model 3 and Model Y vehicles, however, have transitioned to Tesla Vision, which relies solely on cameras, though the general behavior of Autopilot remains the same.
Tesla’s Autopilot is described as advanced driver assistance, and not as autonomous control, because company policy requires constant monitoring by the driver. While the experience can be described as hands-off, the driver is still expected to take the wheel if any complicated situation arises.
So what exactly does Autopilot offer? Autopilot mainly provides two key features: Traffic-Aware Cruise Control and Autosteer. The former allows a vehicle to keep a safe distance from other cars in its lane and from any other objects or individuals that might cross its path; in this mode, a Tesla car controls its own speed and reacts to external factors in real time. Autosteer, on the other hand, uses cameras to detect the marked lines on a road and ensure the vehicle never leaves its lane during the Autopilot experience.
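The core idea behind Traffic-Aware Cruise Control can be illustrated with a toy proportional controller: brake when the car ahead is closer than the desired gap, otherwise track the driver's set speed. This is a hypothetical sketch for intuition only, with made-up gains, and is nothing like Tesla's actual control logic.

```python
def cruise_control_step(own_speed, lead_distance, desired_gap, set_speed,
                        gap_gain=0.5, speed_gain=0.3):
    """One step of a simplified traffic-aware cruise controller.

    Hypothetical sketch: returns an acceleration command in m/s^2,
    proportional either to the gap error (when too close to the lead
    vehicle) or to the speed error (when the gap is safe).
    """
    gap_error = lead_distance - desired_gap
    if gap_error < 0:
        # Lead vehicle is inside the desired gap: brake proportionally.
        return gap_gain * gap_error
    # Gap is safe: accelerate or coast toward the driver's set speed.
    return speed_gain * (set_speed - own_speed)

# Car 20 m ahead but a 30 m gap desired -> negative (braking) command.
accel = cruise_control_step(own_speed=25.0, lead_distance=20.0,
                            desired_gap=30.0, set_speed=30.0)
```

A real controller would, of course, account for relative velocity, comfort limits, and sensor uncertainty rather than a single distance reading.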
Together, both of these functions add a lot of convenience to regular driving, but they don’t fully replace human driving, nor can they handle all the complexities of the road on their own. More advanced features are instead found in Tesla’s Full Self-Driving package. This newer software is currently in beta and is not present on all of the company’s vehicles. While Full Self-Driving adds automatic parking and navigation, the technology is far from complete, and its development will be essential to achieving full autonomy in later models.
While Tesla’s advances in self-driving cars are nothing short of remarkable, it is important to note that Tesla does not currently offer true autonomous vehicles to its customers. Both Autopilot and the current version of Full Self-Driving are considered advanced driver-assistance systems; they are first and foremost an aid for human drivers, not a replacement for them.
Tesla has stressed time and time again the importance of an attentive driver when using these technologies, and active driver supervision is expected at all times when these features are engaged. In the eyes of Tesla, and indeed of most ranking systems, these are reliable aids, but they do not in any way, shape, or form replace human drivers. To better understand why Tesla autonomous vehicles aren’t quite a reality yet, we should look at the driving automation levels and see where current Tesla cars fall on this scale.
The Society of Automotive Engineers (SAE) classifies car automation into six levels that indicate just how autonomous a given car model is. On this scale, a Level 0 vehicle can issue warnings and momentarily intervene in a function such as braking, but it completely lacks any form of sustained vehicle control. At the other extreme, a Level 5 vehicle would be a fully autonomous vehicle that can drive under any circumstances without any human input.
On the SAE scale, Tesla’s current line of vehicles is classified as Level 2, or “hands-off,” self-driving cars. At Level 2 the car can take full control of speed and steering, but the system isn’t designed to handle every potential threat or complication. In other words, for complex situations the human driver is still expected to react in real time, and as such must be paying attention at all times. In fact, the “hands-off” moniker is not meant to be taken literally, and some vehicles actually require the driver to keep their hands on the wheel at all times.
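The SAE scale described above can be summarized in a few lines of code. The one-line labels below are common informal shorthand, not official SAE J3016 wording:

```python
# Informal summary of the six SAE driving automation levels.
# Labels are common shorthand, not official SAE J3016 wording.
SAE_LEVELS = {
    0: "No automation: warnings and momentary intervention only",
    1: "Driver assistance: steering OR speed control ('hands-on')",
    2: "Partial automation: steering AND speed control ('hands-off')",
    3: "Conditional automation: must take over on request ('eyes-off')",
    4: "High automation: no driver needed within a limited domain",
    5: "Full automation: drives anywhere, under any conditions",
}

def requires_driver_attention(level):
    """At Levels 0-2 the human must monitor the road at all times."""
    return level <= 2

# Tesla's current systems sit at Level 2; a Waymo robotaxi is Level 4.
assert requires_driver_attention(2)
assert not requires_driver_attention(4)
```

The key boundary for this article sits between Levels 2 and 3: below it, the system assists; above it, the system is (at least sometimes) responsible for the driving task.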
For comparison’s sake, a Waymo self-driving taxi that operates entirely without a human driver is only a Level 4 on the SAE scale. Right now, most car manufacturers can’t guarantee reliable automation in all weather conditions, and as such completely autonomous vehicles are still years away.
While we briefly touched on the topic of Tesla Vision above, this new technology will be a major factor in the development of true Tesla autonomous vehicles and as such it’s important to take a deeper look at it.
Traditionally, self-driving vehicles rely on a combination of sensor technologies to analyze the world around them and react in real time to any changes in their environment. The three main types of sensors employed in self-driving cars are cameras, which provide detailed images; sonar-style ultrasonic and radar sensors, which measure distances; and LiDAR, which allows a car to simultaneously map and traverse its surroundings through the use of lasers.
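One way to picture how these modalities fit together is as a single per-time-step observation that a fusion routine can query. The schema and the naive "take the closest reading" fusion below are hypothetical illustrations, not any real vehicle's data model:

```python
from dataclasses import dataclass, field
import math

@dataclass
class SensorFrame:
    """One time-step of raw perception input (hypothetical schema)."""
    camera_images: list                # per-camera image arrays
    ultrasonic_ranges_m: list          # short-range echo distances (m)
    lidar_points: list = field(default_factory=list)  # (x, y, z) points

def nearest_obstacle_m(frame):
    """Crude fusion: the closest reading across all range sensors."""
    readings = list(frame.ultrasonic_ranges_m)
    # Ground-plane distance to each LiDAR return, if the car has LiDAR.
    readings += [math.hypot(x, y) for x, y, _z in frame.lidar_points]
    return min(readings) if readings else None

frame = SensorFrame(camera_images=[], ultrasonic_ranges_m=[2.5, 4.0])
```

Note that cameras are absent from the fusion function entirely: unlike ultrasonics, radar, and LiDAR, a camera does not measure distance directly, which is exactly the gap Tesla Vision's software must close.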
Tesla, however, has largely opposed the use of LiDAR in its vehicles, believing that mapping every possible destination and location is too much work to be feasible or to provide accurate results on a first run. Tesla has instead relied solely on cameras and sensors, with an array of 8 external cameras and 12 ultrasonic sensors as the default configuration of its vehicles. In recent times, however, Tesla has decided to do away with this setup, and it plans to continue developing its self-driving features through Tesla Vision instead.
Tesla Vision is a completely camera-based Autopilot system that removes the ultrasonic sensors and instead focuses on making full use of the visual information received by the car’s multiple cameras. The main advantage of this camera-only system is that it won’t require any predefined information about the roads the vehicle will be navigating: the system obtains and interprets all the relevant information about its surroundings in real time and reacts based solely on the visual feedback provided by its cameras.
As of right now, only Model 3 and Model Y cars sold in the North American market have made the switch to Tesla Vision. They’ve done so without impacting the safety ratings of those vehicles, and all safety features remain fully functional despite the technology change.
One undeniable consequence of the switch to Tesla Vision is the sheer amount of data the Tesla team will have access to. Tesla currently has millions of camera-equipped cars across the world, and the new Tesla Vision models will send even more detailed information back to their parent company. This volume of information is ideal for training the deep learning models these cars employ, and the early transition of Model 3 and Model Y vehicles is intended to aid this data collection and improve the Tesla Vision suite. That said, the fact remains that the volume of data could create a labeling problem.
Data labeling refers to the process in which raw data is identified and assigned labels to help computer systems interpret this information and learn from it. This is a vital step in the world of self-driving vehicles, but the sheer volume of data the Tesla team will be looking at is on a completely different scale from most labeling projects.
To deal with this, the Tesla team opted for an auto-labeling technique that simultaneously relies on radar data, neural networks, and human review. Under this system, each vehicle’s data is annotated offline, which allows the neural networks to run the videos over and over, analyze each situation, and compare their predictions with the reality in front of them.
On top of this, the Tesla team took full advantage of offline labeling to use stronger detection networks that wouldn’t fit on a car’s onboard computer. Since the data no longer needs to be labeled in real time, more computationally intensive processes can be run on it, which allows for improved detection of potential triggers and further tuning of the system.
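The overall shape of such an offline pipeline can be sketched as follows: a heavy model that would never fit in the car replays each clip, keeps its confident labels, and routes uncertain frames to human reviewers. The function, the threshold, and the model interface are all hypothetical stand-ins, not Tesla's actual pipeline:

```python
def auto_label(clips, heavy_model, confidence_threshold=0.9):
    """Offline auto-labeling sketch (hypothetical, not Tesla's pipeline).

    `heavy_model(frame)` is assumed to return a (label, confidence)
    pair. Confident predictions become labels automatically; the rest
    are queued for human review.
    """
    auto_labeled, needs_review = [], []
    for clip in clips:
        for frame in clip:
            label, confidence = heavy_model(frame)
            if confidence >= confidence_threshold:
                auto_labeled.append((frame, label))
            else:
                needs_review.append(frame)
    return auto_labeled, needs_review
```

The economics come from the split: humans only touch the low-confidence residue, so labeling throughput scales with compute rather than with headcount.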
While there’s no denying that this auto-labeling system is demanding on the staff, the smart division of labor and advanced neural networks have allowed Tesla to make the most of this bulk of information. And each new piece of data is another opportunity to fine-tune the future of Tesla autonomous driving.
As we just saw, the neural networks involved in dataset labeling play a huge role in the efficiency and further development of Tesla Vision and the company’s self-driving features going forward. This meant Tesla faced the challenge of building the most efficient neural network possible to analyze and exploit its abundant dataset.
Tesla’s solution was the creation of a hierarchical deep learning architecture. This system is composed of multiple simultaneous neural networks that process information and then feed their output into further networks based on the needs of the system.
The first step begins with the eight cameras on a Tesla vehicle. All of these cameras constantly receive and broadcast information, but to truly start learning, a neural network must extract features from these cameras and then fuse them across time. This provides the system with a timeline of events that can be used to predict trajectories and smooth out other potential interferences. The spatial and temporal features of this output are then delivered to branching neural networks to continue the learning process.
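The data flow just described (per-camera feature extraction, fusion across time, then task-specific branches that reuse the shared result) can be sketched with toy stand-ins for each stage. Every function here is a hypothetical placeholder for a real neural network, chosen only to make the wiring visible:

```python
def shared_backbone(camera_frames):
    """Stand-in for a shared CNN: one feature value per camera."""
    return [sum(frame) / len(frame) for frame in camera_frames]

def fuse_over_time(feature_history):
    """Average features across recent time-steps to smooth the signal."""
    n = len(feature_history)
    return [sum(step[i] for step in feature_history) / n
            for i in range(len(feature_history[0]))]

# Branch "heads" reuse the same fused features for different tasks,
# instead of each task running its own network from raw pixels.
def lane_head(fused):
    return max(fused)       # toy lane-keeping score

def obstacle_head(fused):
    return min(fused)       # toy obstacle score

# Three time-steps of input from two (toy, three-pixel) cameras.
history = [shared_backbone([[1, 2, 3], [4, 5, 6]]) for _ in range(3)]
fused = fuse_over_time(history)
lane, obstacle = lane_head(fused), obstacle_head(fused)
```

The structural point survives the toy scale: the expensive shared work (backbone and temporal fusion) runs once, and each additional task only adds a cheap head on top of it.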
This system was designed to deal with the number of outputs that carry potentially interesting and valuable data. Given the volume of data, it isn’t feasible to dedicate a single neural network to each output. The hierarchical system, on the other hand, allows Tesla to reuse components for multiple tasks and share these features between different pathways in the network.
The network also offers considerable advantages for the development of the system itself. Since the learning architecture consists of multiple neural networks, independent teams of engineers can work on separate networks inside the architecture at the same time. Each component can be developed simultaneously, and the results are added to the larger network as they are completed. Right now, Tesla has over 20 engineers working on different aspects of the system, but at the end of the day, all of them contribute tangible results to a single neural network.
A major advantage Tesla has when it comes to the development of its self-driving cars is its vertical integration of the production process. Tesla owns all aspects of its self-driving car stack, and this provides a unique position for the company when it comes to the development and fine-tuning of its products.
The AI chips installed in Tesla cars are designed and built by the company itself; similarly, the compute cluster used for deep learning is a custom build, and Tesla is responsible for every step of the manufacturing of its cars. This means Tesla has full control over every single aspect of the construction process, allowing it to make sure every part, piece of software, and servo is 100% designed to work seamlessly together and provide superior performance.
Every layer of Tesla’s self-driving car stack is completely co-designed and co-engineered to ensure that all aspects of the finalized model work perfectly with each other. Unlike other manufacturers, Tesla isn’t limited by third-party providers and this allows the company to handle every single detail of their cars, something that isn’t possible for most manufacturers.
While Tesla is a major player in the world of self-driving vehicles, its product lineup is still far from what could be considered truly autonomous. Its Autopilot and Full Self-Driving systems provide various practical features that reduce driver intervention, but at the end of the day, these systems are only meant as driver assistance and still can’t replace a human driver.
Nonetheless, Tesla continues to push boundaries in building a future with autonomous vehicles, and all of its newest developments are paving the road toward true autonomy. The switch to Tesla Vision is one of the company’s most promising announcements in recent times: by forgoing LiDAR and dropping radar altogether, Tesla is striving to build a system that can adapt to any road and any conditions on its first trip, relying solely on visual information.
Tesla Vision is then supported by an advanced auto-labeling pipeline and a complex deep learning architecture that allow the company to make the most of all this information and keep improving the AI system with every single piece of data received. Autonomous vehicles may not be fully realized yet, but with advances like Tesla Vision and a fully integrated production process, they might be closer than we realize.