If you’ve been reading this blog or following my tweets, you know that I’m a huge proponent of self-driving cars. In the long run, they will save lives, reduce environmental costs of transportation and make more efficient use of capital. They will fundamentally change the nature of cities and society.
But we’re not there yet. And we won’t be for many years to come.
A Tesla enthusiast died recently when his Model S drove straight into a truck that was making a turn. The car’s “Autopilot” mode didn’t recognize the white side of the truck against the brightly lit sky. Neither did he. (A portable DVD player was found in his car; it isn’t clear if he was watching it. A witness said it was playing Harry Potter shortly after the accident.)
We’re in the midst of a long transition period in cars and car safety. I’m afraid this won’t be the last such incident.
We have many different kinds of safety and driver-assistance features in cars today. Some assist the driver. Others offer semi-automation. The last category is true autonomous vehicles. (No vehicles in that last category are in commercial production.)
Definitions of which feature belongs in which category will vary. But this is how I think about them.
Driver-assistance features
These help the driver with alerts or by managing small parts of the driving experience. They check the work of the driver. They include:
- Anti-lock brakes. The system pulses the brakes to help prevent skidding. Before anti-lock brakes, drivers had to pump the brakes manually to keep hard braking from locking the wheels. With ABS, the system pulses the brakes much faster than a human can, though the braking still has to be initiated by the driver. These are standard on U.S. cars.
- Backup cameras and backup sensors. When the car is put into reverse, backup sensors beep when they detect an object behind you; the closer you get to the object, the more frequent the beeping. Cameras show you what’s behind the car, including things you wouldn’t see in the rearview mirror. Cameras are now in about 60% of new vehicles in the U.S. and will be required in new cars by 2018.
- Lane-departure warning systems. These notify you when you are drifting out of your lane. They use cameras to look for lane markings. The driver still has to do the steering; the system only alerts to mistakes. LDWS are options on mid- to high-end cars.
- Blindspot detection. When you are changing lanes, blindspot detection systems will alert you when there is a vehicle in your blindspot. This could be an audible alert or an indicator in the side view mirror. BSDs are options on mid- to high-end cars.
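The backup-sensor behavior described above, beeping faster as the obstacle gets closer, amounts to a simple mapping from distance to beep cadence. Here is a minimal sketch; the thresholds and intervals are illustrative, not taken from any real sensor:

```python
def beep_interval_ms(distance_cm):
    """Map distance to an obstacle to the pause between beeps.

    Thresholds here are made up for illustration; real systems
    are tuned per vehicle and sensor.
    """
    if distance_cm > 150:
        return None  # out of range: stay silent
    if distance_cm > 100:
        return 1000  # far away: slow beeps
    if distance_cm > 50:
        return 500   # closer: faster beeps
    if distance_cm > 25:
        return 250
    return 0         # very close: continuous tone

# The closer the object, the shorter the pause between beeps.
print(beep_interval_ms(120))  # 1000
print(beep_interval_ms(30))   # 250
```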
Semi-automation features
These are typically offered on mid- to high-end cars. They actively control the vehicle. They include:
- Cruise control. Cruise control lets the driver set a steady speed that the car will maintain, so the driver can take a foot off the accelerator. In anything but light traffic, it’s a pretty useless feature: because other cars change speeds, you have to keep adjusting the cruise setting. This has been a common feature for decades.
- Adaptive cruise control. Similar to cruise control, but the speed adapts to the car in front of you. If the car slows down, your car will slow down.
- Lane management systems. They will keep you in your lane by using cameras to detect lane markings. They’re rarer than LDWS, but rely on the same basic technology.
- Automatic braking. These detect imminent collisions and automatically apply the brakes.
- Automatic parallel parking. These will park your car for you.
Autonomous vehicles
These systems use a range of sensors, including cameras, infrared and LIDAR, along with extensive map databases to drive without human intervention. Alphabet, Google’s parent company, is furthest along in fully autonomous vehicles.
In Google’s testing, there have been no fatal accidents. The only accident caused by a Google vehicle was a very-low speed collision with no injuries.
A long transition
We are in the midst of a long transition. Unfortunately, accidents will happen because of a combination of human laziness, overselling of the product and confusing interfaces. The current semi-automation systems have a lot of limitations.
I recently rented a Cadillac STS with a lot of these features. As I drove it, I tried using the “lane keep assist” feature. In theory, the system would keep me in my lane. I tried it on curvy Interstate 280 in the Bay Area, in moderate traffic. As far as I can tell, the system didn’t work. When I took my hands off the wheel, the car would drift a foot into the other lane before pulling me back into my lane. Although I’m a big fan of testing products to the limit, I wasn’t about to do that in traffic.
It’s possible that it was user error. Or a confusing interface. Or I was outside the limitations of the system.
According to GM, Lane Keep Assist and Lane Departure Warning systems may not:
- Provide an alert or enough steering assist to avoid a lane departure or crash
- Detect lane markings under poor weather or visibility conditions, if the windshield or headlamps are blocked by dirt, snow or ice or are not in proper condition, or if the sun shines directly into the camera
- Detect road edges
- Detect lanes on winding or hilly roads
And if Lane Keep Assist only detects lane markings on one side of the road, it will only assist or provide a Lane Departure Warning alert when approaching the lane on the side where it has detected a lane marking.
Lastly, GM says that using Lane Keep Assist while towing a trailer or on slippery roads could cause loss of control of the vehicle and a crash. Turn the system off.
Even when the LKA and LDW systems are otherwise working, GM says system performance may be affected by:
- A close vehicle ahead
- Sudden lighting changes, such as when driving through tunnels or direct sunlight on the camera sensor
- Banked roads
- Roads with poor lane markings, such as two-lane roads
That is a lot of limitations to be aware of! It’s too easy to learn to rely on semi-autonomous features that might work 95% of the time but have dire consequences in the 5% case.
Marketing doesn’t help either. The benefits are highlighted in glamorous videos; the limitations buried in fine print. Even naming makes a big difference. Calling something “Autopilot” given the state of today’s technology is vastly overstating the case.
Car companies aren’t the greatest at user-interface design, often using what look like hieroglyphics for controls. In my test of the STS, I thought the car had an automatic braking system based on the icons. I’m glad I didn’t try to test that, because it didn’t have one. Mine was a somewhat unfair test: if I owned the vehicle, I’d probably know what features I had. But anyone borrowing my car would face the same set of challenges.
Driver training on the proper use of new features is key. When I went through driver’s ed, I was taught to pulse the brakes to prevent the wheels from locking up; with anti-lock brakes, you are supposed to step hard on the brakes and hold them. I was taught to put my hands at 10 and 2 on the steering wheel; with airbags, the recommendation is now 9 and 3.
Not only are the controls of new features not intuitive, some companies even fiddle with basic features.
FCA’s redesign of the transmission shifter is mind-bogglingly stupid.
The National Highway Traffic Safety Administration’s investigation into the Monostable gear shifter used in a number of Chrysler, Dodge and Jeep vehicles has turned into a recall: FCA will recall approximately 1.1 million vehicles worldwide to modify the operation of a shifter that has been linked to 121 accidents and 41 injuries.
The issue itself is not a fault of engineering but rather design, as the shifter returns to the default center position without giving the driver sufficient feedback as to the selected gear.
As a result, a number of owners have exited their vehicles thinking they had put the vehicle into Park, when in reality it remained in Drive or Reverse. NHTSA called the operation of the shifter “unintuitive” and opened an investigation into the issue months ago.
With driver reliance on semi-automation systems, system limitations and confusing user interfaces, we can expect to see more cases like the Tesla accident.
Media frenzy and public irrationality
My big worry is that media hype around the small number of accidents will hurt the development of truly autonomous vehicles that can save a lot of lives.
Even in the current state, semi-automation features like lane management and automatic braking can save lives. IF drivers use them as backups.
But we’ll see endless stories about how dangerous automation is. Anything that is “new” is dangerous. It was worldwide news when a Tesla caught fire. Never mind that gas vehicles catch fire much more frequently.
Imagine if we had had 24/7 news networks during the rise of aviation. In the early years, accidents were common. Every one would have been covered nonstop.
With much less media scrutiny than we have today, we were able to improve airliner safety. With every accident, we investigated, learned what went wrong and improved.
The NTSB is great at what it does. Although we primarily hear about it in the context of airline accidents, it’s already looking into the Tesla accident.
They provide reasoned analysis, tradeoffs and recommendations. Unfortunately, government, politicians, media and the public don’t work that way. We will see negative hype around self-driving cars as politicians chase votes and media chase ratings.
When it comes to media, only the misses count. If your technology saves 4,999 people, you don’t get credit for that. But you get dinged for the one it doesn’t save.
Developing our safer future requires some reasonableness on the part of consumers, manufacturers, media, politicians, regulators and attorneys. Is that an unreasonable ask?