A Tesla Model 3 sedan that crashed into a truck on a Florida highway in March, killing its driver, had its Autopilot semi-autonomous feature engaged, according to a new report from the National Transportation Safety Board. The driver is at least the fourth person to die in an Autopilot-related crash. What’s striking about the March 1 crash is that the details are nearly identical to those of the first publicly reported, deadly Autopilot crash, in May 2016. In each case, a Tesla running Autopilot on a Florida highway struck a truck cutting across its path, killing the Tesla’s driver.
Alex Davies covers autonomous vehicles and other transportation machines for WIRED.
Tesla CEO Elon Musk has bragged about his cars’ self-driving capabilities and promised they’ll be fully autonomous starting next year. But critics say the Autopilot system—which requires that drivers remain attentive and ready to take control of the car—doesn’t do enough to ensure that drivers pay attention, and that Tesla makes the system seem more capable than it is. The new crash casts doubt on how well Tesla has responded to those critiques, even as it moves to offer more ambitious technology.
In the March crash, according to the NTSB’s preliminary report, the red Tesla Model 3 was driving south in the right lane of State Highway 441 in Delray Beach, about 50 miles north of Miami. The truck pulled out of a private driveway on the right side of the road, heading across the highway and intending to turn left, going north. The truck slowed as it crossed the southbound lanes, the report says, “blocking the Tesla’s path.”
The Model 3’s Autopilot system had been turned on about 10 seconds before the crash, and the car didn’t detect the driver’s hands on the steering wheel for the eight seconds immediately preceding the impact, the NTSB report says; a Tesla spokesperson confirmed those details. The car struck the trailer at 68 mph (the speed limit is 55 mph) without making any evasive maneuvers. It passed under the trailer, ripping off its roof and killing the driver, identified as Jeremy Beren Banner, age 50. (The truck driver was not injured.) The Tesla came to a stop on the highway’s earthen median, about 1,600 feet away.
The report does not say how far away the Tesla was when the truck pulled onto the road, nor does it note any weather conditions that would have affected the car’s braking capability, so it’s unclear if a driver not using Autopilot would have been able to stop safely. But some rough math says that a Model 3 driver would have needed a few seconds’ notice to avoid a crash: At 68 mph, the car was covering 100 feet a second. A Model 3 going 60 mph needs 133 feet to stop, and because braking distance scales roughly with the square of speed, the Tesla in question would have needed about 171 feet. Add the roughly 150 feet covered during the 1.5 seconds it takes a driver to register the truck and move his right foot to the brake pedal, and it looks like a bit over three seconds’ warning would have been enough.
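That back-of-envelope calculation can be checked in a few lines of Python. The 133-foot stopping distance at 60 mph and the 1.5-second reaction time come from the text above; the sketch assumes constant deceleration, under which braking distance scales with the square of speed:

```python
MPH_TO_FPS = 5280 / 3600  # 1 mph = ~1.467 feet per second

def braking_distance_ft(speed_mph, ref_speed_mph=60.0, ref_distance_ft=133.0):
    """Scale a known braking distance by the square of the speed ratio
    (constant-deceleration assumption)."""
    return ref_distance_ft * (speed_mph / ref_speed_mph) ** 2

speed_mph = 68.0
reaction_s = 1.5

braking_ft = braking_distance_ft(speed_mph)          # ~171 feet to brake to a stop
reaction_ft = reaction_s * speed_mph * MPH_TO_FPS    # ~150 feet covered while reacting
total_ft = braking_ft + reaction_ft                  # ~321 feet in all
warning_s = total_ft / (speed_mph * MPH_TO_FPS)      # ~3.2 seconds of travel at 68 mph

print(f"{braking_ft:.0f} ft braking + {reaction_ft:.0f} ft reaction "
      f"= {total_ft:.0f} ft, about {warning_s:.1f} s of warning")
```

The result, roughly 320 feet or just over three seconds of travel time, is the margin a fully attentive driver would have needed between first seeing the truck and reaching it.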
On May 7, 2016, 40-year-old Josh Brown died in very similar circumstances. His Model S had Autopilot engaged as it drove northbound on Highway 27A in northern Florida. A truck coming from the southbound lanes turned left across his path, headed for a local road. The Tesla did not slow before hitting the truck at 74 mph (on a 65 mph road), going under the trailer, ripping off its roof, and killing Brown. The car went another 297 feet, hit a utility pole hard enough to break it, and stopped 50 feet later. (The truck driver was not injured.)
When the NTSB issued its final report on the 2016 crash, it noted the truck driver should have yielded to the Tesla and that Brown was inattentive. But it also put some of the blame on Tesla for designing a system that allowed the driver to rely too heavily on the automation for a prolonged period.
Since Brown’s death, Tesla has reduced the time the driver can go without touching the wheel before the system issues audio and visual warnings. It has implemented a new hardware design and gone through several iterations of its software. But the circumstances of the crash indicate that while driving at highway speeds, the system remains incapable of detecting some stationary objects, or those moving perpendicular to the car. Similar systems offered by Volvo and Nissan have the same shortcoming. That’s because these radar systems typically filter out returns that appear stationary, to avoid false positives from highway signs and overpasses, says Matt Johnson-Roberson, who codirects the University of Michigan Ford Center for Autonomous Vehicles. This is likely the same reason at least three Teslas crashed into stopped fire trucks in 2018 (with no serious injuries).
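The filtering logic Johnson-Roberson describes can be illustrated with a simplified, hypothetical sketch. Real automotive radars report a Doppler (radial) velocity per detection; the function name and tolerance here are invented for illustration, not taken from any vendor’s software:

```python
def appears_stationary(radial_closing_speed_mps, ego_speed_mps, tol_mps=2.0):
    """A stationary object in the car's path closes at exactly the car's
    own speed. Filtering out such returns suppresses overpasses and road
    signs, but it also suppresses stopped vehicles, and crossing traffic
    whose motion is mostly perpendicular to the radar beam (and so adds
    almost no radial velocity)."""
    return abs(radial_closing_speed_mps - ego_speed_mps) < tol_mps

ego = 30.0  # ~68 mph, in meters per second

# An overpass ahead closes at exactly ego speed: filtered, as intended.
print(appears_stationary(30.0, ego))   # True

# A slower car in the same lane closes at well under ego speed: kept.
print(appears_stationary(10.0, ego))   # False

# A truck crossing perpendicular to the road closes at roughly ego speed
# too, so it is filtered just like an overpass: the failure mode at issue.
print(appears_stationary(29.0, ego))   # True
```

The sketch shows why a filter tuned to ignore the fixed roadscape can also ignore a stopped fire truck or a trailer cutting across the lane.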
The Model 3 involved in the latest crash was also equipped with cameras, which should, theoretically, be able to detect a truck crossing its path, Johnson-Roberson says. Tesla did not respond to WIRED’s questions about how the system uses its radar and cameras or what steps it has taken to avoid this type of crash.
At the same time, Tesla is loudly bullish on the potential of camera-based computer vision to enable what Musk calls “full self-driving,” needing no human supervision. Just last month, Tesla Autopilot vision chief Andrej Karpathy said Tesla believes it can use machine learning techniques and cameras to make its cars top-notch drivers. Musk went further, saying, “I feel very confident predicting autonomous robotaxis for Tesla next year.”
In response to the latest crash, a Tesla spokesperson shared a statement saying its data shows that when the driver is attentive and ready to take control back from the car, “drivers supported by Autopilot are safer than those operating without assistance.” But where Cadillac and Audi use eye-tracking systems to check that drivers remain attentive, Tesla relies on the comparatively unsophisticated method of checking when the driver touches the wheel—even as it claims it’s delivering the future.