Welcome To Your Autonomous Life: Self-Driving Cars Are A New Reality

Autonomous vehicles have become a hot topic among professionals in both the automotive and technology fields. Recent headlines have consumers dreaming about their own personal robot chauffeur, but how close are we really? We will undoubtedly be able to sit behind the wheel (if there is a steering wheel at all), enter a destination into the GPS, and take a nap on the way. But will this technology arrive next year, next decade, or next century? A number of technologies influence the development of self-driving cars, and all of them need to advance before we should feel secure about passing the keys to an algorithm. The good news is that they are advancing…very rapidly.

It is easy to paint a picture of cars driving themselves, but what exactly is the definition of an autonomous car, and where is the line between a car that assists you and one that drives for you? Known by different names at different automotive manufacturers, the suite of advanced driver assistance systems (ADAS) has grown dramatically in recent years and has undoubtedly prevented collisions on the road.

However, features that provide more information to the driver are completely different from cars that do all the driving themselves. To help clarify this, the Society of Automotive Engineers (SAE) has published technical standard J3016_201609, which distinguishes between six possible levels of driving automation for cars.

The SAE standard uses a number of terms to clarify which components of the Dynamic Driving Task (DDT) can be performed by the car itself; the DDT simply covers all aspects of driving a car on a public road. SAE makes it very clear that within the standard, “the term ‘driverless vehicle’ is not used herein because it has been, and continues to be, widely misused to refer to any vehicle equipped with a driving automation system, even if that system is not capable of always performing the entire DDT and thus involves a (human) driver for part of a given trip.” The first clarification the standard makes is the difference between a driving automation system and an Automated Driving System (ADS). According to SAE (and the U.S. Department of Transportation, which has adopted the SAE levels), a truly autonomous car is called an ADS-Dedicated Vehicle (ADS-DV) and is “designed to be operated exclusively by a level 4 or 5 ADS for all trips.” A driving automation system, by contrast, is anything that automates some part of the driving operation; such systems have been commercially available for many years.

The other definitions you will need in order to decode the table are OEDR (Object and Event Detection and Response), ODD (Operational Design Domain), and fallback. OEDR is the act of monitoring everything around the car to make sure the drive is going well and safely, and intervening if something looks wrong. The ODD is the specific set of conditions within which the ADS can operate safely. Finally, fallback is the act of taking over when the ADS identifies a problem and determines that it cannot complete its functions.
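
To make the levels easier to keep straight, here is a rough Python sketch of who is responsible for sustained control, OEDR, and fallback at each level. The wording is a simplified paraphrase of J3016, not the standard's official language.

```python
from dataclasses import dataclass

@dataclass
class AutomationLevel:
    """Simplified paraphrase of one SAE J3016 level (not the official wording)."""
    level: int
    name: str
    sustained_control: str   # who steers, accelerates, and brakes continuously
    oedr: str                # who monitors the environment (OEDR)
    fallback: str            # who takes over when something goes wrong
    odd: str                 # where the system is designed to work

SAE_LEVELS = [
    AutomationLevel(0, "No Driving Automation",          "driver", "driver", "driver", "n/a"),
    AutomationLevel(1, "Driver Assistance",              "driver + system (lateral OR longitudinal)", "driver", "driver", "limited"),
    AutomationLevel(2, "Partial Driving Automation",     "system (lateral AND longitudinal)", "driver", "driver", "limited"),
    AutomationLevel(3, "Conditional Driving Automation", "system", "system", "fallback-ready user", "limited"),
    AutomationLevel(4, "High Driving Automation",        "system", "system", "system", "limited"),
    AutomationLevel(5, "Full Driving Automation",        "system", "system", "system", "unlimited"),
]

def is_ads(level: AutomationLevel) -> bool:
    # Per the standard, only levels 3-5 count as an Automated Driving System (ADS).
    return level.level >= 3
```

The key split shows up in `is_ads`: only at levels 3 through 5, where the system itself performs OEDR, does the standard consider the feature an ADS.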

The prospect of truly autonomous vehicles has many potential upsides. For one, it could save drivers hundreds of hours every year; according to the U.S. Census Bureau, the average American spends 26.4 minutes commuting to work each day. If commuters no longer need to focus on the road, they can spend that time reading, eating, sleeping, or putting on makeup; yes, some people already do all of those things while driving, but it would be preferable if it were safe to do so. In addition, the increased efficiency of computer control could eliminate traffic, reduce accidents, alleviate overcrowded parking lots, and reduce vehicle emissions. These lofty goals, as well as the novelty of a robotic car, have spurred the popularity of research into self-driving vehicles.

History

The self-driving vehicle is such an old concept that it is hard to pinpoint where it originated. Depending on your definition of a vehicle, you can even trace the idea back thousands of years to flying carpets and horses that had memorized a route. Although not truly autonomous, the first autopilot systems to help stabilize and control the direction of aircraft were developed in 1912 (less than a decade after the Wright brothers' first flight). As personal automobiles became more popular, companies began looking into driverless technology in the 1920s, with the first tests of a radio-controlled car in 1925. By the mid-1950s, companies such as GM were successfully testing full-scale “smart highways” with embedded sensors that allowed cars to drive a predetermined path on their own. So what has happened in the last 60 years? Where are our self-driving (and flying, for that matter) cars?

The technology needed to drive cars on a predetermined path has existed for decades; the only requirement is that the vehicles and the road have a matching interface (think of Autopia at Disneyland, with electronic sensors instead of a steel rail). The problem is that this infrastructure is expensive and no one wants to pay for it; it is already challenging enough for the government to maintain our existing roads. In addition to special roads, a system of this design requires that every vehicle using it be compatible with the control system; there is no way to incorporate old vehicles and no way for car companies to offer proprietary technology. The last hurrah for smart highways was the Intermodal Surface Transportation Efficiency Act passed by Congress in 1991, which set aside $650 million to develop self-driving cars. Obviously this money did not create a mass-market autonomous car, but the project had some success and helped fund research that is now being applied to production cars.

As technology has advanced, automakers have turned away from connected infrastructure, recognizing that it is now possible to build an autonomous vehicle that operates independently. This path is attractive from both a government and a commercial perspective, since it requires no public construction of new infrastructure and allows companies to market their own proprietary systems as a competitive advantage. The autonomous car as we know it today can trace its roots back to adaptive cruise control (ACC) systems, which first became available on production cars in the 1990s. From the days of the first DARPA Grand Challenge in 2004, when not a single competitor finished the course (often failing in comedic fashion), to today, autonomous technology has advanced by leaps and bounds.

Current Driving Automation Features

The most common driver assistance feature, ACC utilizes one or more radar- or laser-based distance sensors to track the distance between the front of the car and any object (typically another vehicle). The car uses this distance information to adjust the set speed of the cruise control: if cruise control is active and the vehicle in front of you is going slower than your set speed, your car will match that vehicle's speed until it moves out of the way. ACC controls the vehicle's movement in one dimension (longitudinal) and therefore qualifies as level 1 driving automation. Typically, these systems only function within certain speed ranges and weather conditions; these limits are the ODD set by the manufacturer. ACC is available as an option on the majority of new cars on the market today and is often combined with collision warning systems, which notify the driver or apply the brakes if the system detects a quickly approaching object. Older systems can only identify one object at a time, which causes them to occasionally misidentify a vehicle in the next lane as directly in front of your car; it can also cause them to overreact when another car merges in front of you, since they have trouble distinguishing a merge from sudden braking by the car directly ahead. Newer systems can track multiple vehicles in adjacent lanes to prevent unnecessary braking or warnings.
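
As a rough illustration of this longitudinal control, here is a toy Python sketch of how an ACC system might choose its target speed. The time gap and back-off factor are assumed values for illustration, not any manufacturer's actual control law.

```python
def acc_target_speed(set_speed_mps, lead_distance_m, lead_speed_mps, min_gap_s=1.5):
    """Pick a target speed for adaptive cruise control (toy logic).

    set_speed_mps:   the speed the driver asked for
    lead_distance_m: distance to the vehicle ahead (None if the lane is clear)
    lead_speed_mps:  that vehicle's speed
    min_gap_s:       desired time gap to the vehicle ahead (assumed value)
    """
    if lead_distance_m is None:
        return set_speed_mps                       # lane is clear: hold the set speed
    desired_gap_m = lead_speed_mps * min_gap_s     # convert the time gap to a distance
    if lead_distance_m < desired_gap_m:
        return min(set_speed_mps, lead_speed_mps * 0.9)   # too close: back off to open the gap
    return min(set_speed_mps, lead_speed_mps)      # otherwise follow the slower lead car
```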

Another common form of level 1 driving automation is lane keeping assistance. Known by many names depending on the manufacturer, these systems use sensors to detect the lane markings on the road. Most often they use a camera operating in the visual spectrum, but they can also use infrared or laser sensors; since lane markings are designed to be seen by a human driver, it makes sense to use a video camera to detect them. Once the vehicle can detect the lane markings, it can control the power steering system to keep the car within the lane (control in the lateral direction). Although these systems have been available on cars for over a decade, the situations in which they function correctly are limited. The technology is still advancing to extend the system's ODD so that lane boundaries can be identified correctly under more challenging conditions.

The key component of lane keeping assistance is the software used to detect the edges of the lane. Depending on the weather and the condition of the road, it can be very challenging for the vehicle (and the driver) to correctly identify lane markings. Regardless of which wavelengths of light they use, the images from the vehicle's camera are processed through filters to extract the regions and features of highest importance. From there, they are passed through a classification algorithm that determines where the two edges of the lane are located; the algorithm also returns a measure of confidence in its identification. Compared to the well-proven radar technology used in ACC, the algorithms used by lane keeping assistants are relatively new and unreliable. These image evaluations are performed many times per second; if the classification algorithm remains unsure of its identification (below some confidence threshold set by the manufacturer) for more than a few consecutive evaluations, the system typically alerts the driver and stops steering the vehicle, falling back to driver control.
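
That confidence-and-fallback behavior can be sketched in a few lines of Python; the threshold and the number of tolerated uncertain frames below are hypothetical stand-ins for whatever the manufacturer chooses.

```python
CONFIDENCE_THRESHOLD = 0.7   # assumed value; real thresholds are set by the manufacturer
MAX_LOW_CONF_FRAMES = 5      # assumed: consecutive uncertain frames tolerated before giving up

class LaneKeepFallback:
    """Toy fallback logic: disengage if lane detection stays uncertain for too long."""
    def __init__(self):
        self.low_conf_frames = 0
        self.engaged = True

    def update(self, lane_confidence: float) -> str:
        if not self.engaged:
            return "driver has control"
        if lane_confidence < CONFIDENCE_THRESHOLD:
            self.low_conf_frames += 1          # another uncertain frame in a row
        else:
            self.low_conf_frames = 0           # a confident frame resets the count
        if self.low_conf_frames > MAX_LOW_CONF_FRAMES:
            self.engaged = False
            return "ALERT: lane markings unclear - driver must take over"
        return "steering to stay within the lane"

# Feeding in a run of per-frame confidences shows the handoff happening:
lk = LaneKeepFallback()
for conf in [0.9, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.8]:
    print(lk.update(conf))
```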

When a vehicle is equipped with both systems, ACC and lane keeping assistance, it can drive itself well on most highways in good weather. This qualifies as level 2 driving automation: the driver still needs to stay alert and watch the road, but can drive for long stretches without touching the controls. As long as the driver is content not to change lanes or exit the highway, they simply need to be available in case the system falls outside its ODD; the car will provide an alert when that happens. One minor exception is Tesla, whose system will change lanes on its own as long as the driver activates the turn signal. These systems have the added benefit of encouraging drivers to use their turn signals: if the driver tries to change lanes without signaling while the lane keeping assistant is active, the system typically fights to keep the car in its lane.

For several years, cars have come equipped with technology to park themselves autonomously under certain conditions. While very impressive to watch, parking (even parallel parking) is fairly simple to perform and happens at such low speed that the car can always stop if it detects a problem. Tesla even offers the option to “summon” your car so it can extract itself from a parking spot, which is useful to anyone faced with tight parking spaces. These systems use short-range distance sensors embedded in the car's bumpers, along with optical sensors, to identify parking spaces and obstacles. The sensors were originally designed simply to provide more information to the driver and have since been repurposed for fully automated features. The same can be said of many automated features; for example, the sensors that allow vehicles to change lanes on their own were originally used just to alert drivers to an object in their blind spot.

There are many cars currently available with level 2 driving automation, especially in the luxury market. Despite being advertised with high-tech names like Autopilot, current systems are simply unsafe to operate without an alert driver, as a handful of accidents have shown. Some manufacturers include routine reminders if the car has not detected any movement of the pedals or steering wheel; Cadillac even includes sensors that track the driver's eyes to ensure they keep watching the road (and pester them if they look away). Recently, Audi heralded its new 2018 A8 as the first production vehicle to offer a level 3 ADS with its “Traffic Jam Pilot.” While it sounds impressive, the system only operates on a divided highway at 37.3 mph (60 km/h) or less; these are the very limited bounds of its ODD. If approved by regulators as a true level 3 system, it would allow drivers to legally (and without reminders from the car) do something else while the system is active. Although it will be very convenient for those with a spare $100,000 who spend a lot of time in heavy traffic, the technology behind the system is a refined version of the level 2 systems offered in many other cars. As most drivers can tell you, driving in a straight line on a highway is pretty easy; the hard part is navigating busy city intersections with pedestrians, bicyclists, and other vehicles intermittently obeying traffic laws.

The Future of Autonomous Cars

The exciting news is that autonomous driving features are advancing at an incredibly rapid pace. Every major automaker, many large tech companies, automotive suppliers, and a whole host of start-ups are rushing to develop new features for autonomous cars. Some companies are developing fully self-contained autonomous cars, while others are developing only the autonomous features (some or all of them) and leaving the car itself to automotive manufacturers. There are many examples of recent advances in the field. Waymo (a subsidiary of Google's parent company, Alphabet) has driven well over 3 million miles autonomously on public roads, more than all of its competitors. Delphi, a large automotive supplier, and Mobileye, a tech company recently acquired by Intel, have demonstrated autonomous driving through traffic in Las Vegas. Test vehicles such as these can drive through many complex situations safely, but they are not yet reliable enough to be sold to consumers. Even Tesla, known for exaggeration and risk taking, has refrained from selling the technology, stating that “getting a machine learning system to be 99% correct is relatively easy, but getting it to be 99.9999% correct, which is where it ultimately needs to be, is vastly more difficult.”

The next step for autonomous cars will be to steadily expand the ODD in which vehicles can safely operate. In the near future, cars will be able to merge on and off highways and drive predetermined routes (such as your daily commute) autonomously. While the exact timing varies from company to company, the general consensus is that advanced level 3 and level 4 ADS will be publicly available around 2021. These level 4 systems qualify as true autonomous vehicles as long as they are confined to driving within their ODD; this could materialize as airport shuttles or long-distance trucks that carry goods between cities and then hand off to human drivers. Ironically, some trends in autonomous technology are moving back toward vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication, as originally envisioned in the 1950s. Cadillac has recently incorporated geofencing into some autonomous features, so that they only operate when the vehicle's GPS confirms it is on a divided highway; expect this kind of restriction to become more common as more advanced features are rolled out to consumers.
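
A geofence check of this kind is conceptually simple: compare the GPS fix against a map of approved road segments. The sketch below is a toy version with made-up coordinates and an assumed tolerance, not any automaker's actual implementation.

```python
from math import radians, sin, cos, asin, sqrt

APPROVED_HIGHWAY_POINTS = [   # hypothetical centerline samples of mapped divided highways
    (42.3370, -83.0466),
    (42.3391, -83.0528),
]
MAX_OFFSET_KM = 0.05          # assumed tolerance from the mapped centerline

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def feature_allowed(lat, lon):
    """True only if the car is close to a mapped, approved highway segment."""
    return any(haversine_km(lat, lon, p_lat, p_lon) <= MAX_OFFSET_KM
               for p_lat, p_lon in APPROVED_HIGHWAY_POINTS)
```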

Several U.S. states have issued permits to companies testing autonomous cars on public roads; a key provision of these permits is that a human expert (typically a software engineer) is always ready to take over control. Although there is very little publicly available data to quantify the progress of autonomous vehicle development, California releases each year the number of miles driven per “disengagement” (each time the human had to take over control). This number does not account for the situations under which the vehicles were being tested, or why the human intervened, but it does show that real progress is being made.
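
The metric itself is just a ratio, as the quick sketch below shows; the numbers in the example are made up for illustration and are not figures from any actual report.

```python
def miles_per_disengagement(autonomous_miles: float, disengagements: int) -> float:
    """California's crude progress metric: autonomous miles driven per human takeover."""
    return autonomous_miles / max(disengagements, 1)   # avoid dividing by zero

# Hypothetical example: 100,000 test miles with 50 takeovers -> 2,000 miles per disengagement
print(miles_per_disengagement(100_000, 50))
```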

There are two key aspects to self-driving: correctly reading the road and the situation, and knowing how to react to that information. There are some situations where there is no clear answer for how a car should behave; take, for example, the thought experiment of a car turning a corner (at a legally safe speed) and finding a group of pedestrians illegally crossing the road. If the car cannot stop in time, should it hit the pedestrians or crash itself into a wall, killing the driver who bought it?

In most cases, however, knowing how to drive is the easy part for the car; the harder problem is correctly identifying all of the necessary road features and traffic. Developers have installed multiple redundant sensors to improve the confidence of feature identification; instead of relying on one camera, test vehicles carry a whole suite of optical, radar, and laser (LIDAR) sensors aimed in different directions. To make these identifications, developers rely on machine learning algorithms, typically neural networks. Known for their ability to pick up minute differences between images and correctly distinguish one group of images from another, these algorithms can be trained very successfully to identify roads, pedestrians, other vehicles, and more. These neural networks, however, do not “think” the way humans do, and are notorious for correctly identifying thousands of images in a row and then classifying a tomato as an airplane (or making a similarly ridiculous identification). The real limit to commercially available level 5 ADS is how much risk a company can tolerate.
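
One common way to exploit that redundancy is a voting or fusion rule across sensors, so that no single misclassification drives the car's behavior. The sketch below is an assumed toy version; real systems fuse probabilistic detections with position estimates, not simple string labels.

```python
def fused_detection(camera_says, radar_says, lidar_says, label="pedestrian"):
    """Toy 2-of-3 vote: only trust a detection if enough independent sensors agree."""
    votes = [camera_says, radar_says, lidar_says].count(label)
    confidence = votes / 3
    return confidence >= 2 / 3, confidence

# Two of three sensors agree, so the detection is accepted with ~0.67 confidence
detected, conf = fused_detection("pedestrian", "pedestrian", "unknown")
print(detected, round(conf, 2))
```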

There are some cars being tested today that can drive themselves in all sorts of environments; the problem is that they make potentially deadly mistakes. Most people would probably tolerate an autonomous car that occasionally accelerates a little too quickly, but would not be pleased if the car identified a stop sign as a speed limit sign. Given how many possible driving situations exist in the real world, it is impossible to test a car in every one of them. The same dilemma exists in software testing: developers have to set a testing limit at which they can be reasonably sure that no more undetected problems exist, and that limit changes depending on how damaging a defect would be. For autonomous cars, the bar must remain very high to prevent loss of life; however, there are no regulations specifying how safe an autonomous car must be before it can be sold, or even how to quantify its safety. In fact, there are currently very few regulations of any kind on autonomous cars, so the thresholds for most decisions are being set individually by each manufacturer. This will lead to nonuniform performance from different cars, as well as different launch dates depending on how much risk a company is willing to take. Technology companies are typically rewarded for taking risks, and until government regulators step in, the safety of each autonomous car will depend on the company behind it.

News outlets are quick to sensationalize any errors in these systems, but the first threshold for acceptance should be self-driving cars that are safer than human drivers; there will undoubtedly be accidents involving autonomous cars, but if their accident rate is significantly lower than the human average, that is still an improvement. Autonomous car developers have a wide variety of priorities: correctly identifying street signs and traffic signals, integrating GPS directions, accelerating and turning smoothly enough to avoid nauseating passengers, and ensuring the cars meet all existing safety and environmental laws. All of these steps will take time to fine-tune before we can walk into a showroom and purchase a level 5 car. These cars are coming, however, and they will completely revolutionize personal transportation as we know it.