Is Tesla wrong to blame Model X driver for his own death?

On March 23, 38-year-old Walter Huang was killed in a one-car collision when his vehicle hit a highway safety barrier in Mountain View, California.

Ordinarily, that wouldn’t be front-page news. The sad fact is that nearly 100 people die every day on U.S. roads, including drivers, passengers, cyclists, and pedestrians. Families of the deceased can hire accident lawyers to pursue compensation for their loss, especially when a death has resulted in lost earnings, and because a funeral can cost a small fortune, a lawsuit can also help a family pay for those arrangements.

What makes this accident different is that Huang was driving a Tesla Model X with Autopilot engaged.

The problem with Autopilot

Since Autopilot debuted in October 2014, Tesla has walked a fine line in marketing it. On the one hand, the company has played up Autopilot’s gee-whiz, high-tech self-driving features. On the other, it’s had to caution drivers that the system isn’t fully autonomous and requires them to keep their hands on the steering wheel and their eyes on the road, something that some Autopilot users have seemed reluctant to do.

Critics of the system grew much more vocal after learning of the first fatality officially linked to Autopilot, in May 2016. (An earlier death was recorded in China in January 2016, but the investigation into that crash hasn’t concluded.) Criticism has ramped up again in the wake of Huang’s death, and once again, Tesla is pushing back, deflecting blame away from Autopilot.

Whose fault is it?

Shortly after Huang’s death began making headlines, Tesla issued a statement blaming the state of California for failing to maintain its highways, which inhibits Autopilot’s ability to recognize lane markings and other roadway features:

The reason this crash was so severe is because the crash attenuator, a highway safety barrier which is designed to reduce the impact into a concrete lane divider, had been crushed in a prior accident without being replaced. We have never seen this level of damage to a Model X in any other crash.

That’s not a very compelling argument to me. Tesla knows full well that America has a massive infrastructure problem affecting the quality of our roads and bridges. The situation isn’t changing anytime soon, so Tesla’s software engineers should’ve taken it into account. If Autopilot can only recognize hazards on well-maintained highways, that’s a shortcoming of Autopilot, not of the highway.

In that same statement, however, Tesla pointed out that Huang himself might’ve avoided the collision:

In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum. The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
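Tesla’s quoted figures (“several visual and one audible hands-on warning,” hands not detected for six seconds) reflect how driver-assist systems typically sense attention: through torque on the steering wheel, with warnings that escalate the longer no torque is felt. Here’s a minimal Python sketch of that kind of escalation loop; the thresholds, names, and timings are illustrative assumptions, not Tesla’s actual implementation.

```python
# Hypothetical sketch of a hands-on-wheel warning escalation loop.
# All thresholds, names, and timings are illustrative assumptions,
# not Tesla's actual Autopilot logic.

from dataclasses import dataclass

@dataclass
class DriverMonitor:
    visual_after_s: float = 15.0   # assumed: visual nag after 15 s hands-off
    audible_after_s: float = 30.0  # assumed: audible chime after 30 s hands-off
    hands_off_time: float = 0.0    # seconds since steering torque was last felt

    def update(self, dt: float, torque_detected: bool) -> str:
        """Advance the monitor by dt seconds and return the current alert level."""
        if torque_detected:
            self.hands_off_time = 0.0  # driver's hands sensed; reset the timer
            return "none"
        self.hands_off_time += dt
        if self.hands_off_time >= self.audible_after_s:
            return "audible"   # escalate: chime plus visual warning
        if self.hands_off_time >= self.visual_after_s:
            return "visual"    # first stage: flashing cluster message
        return "none"

# Example: a driver who ignores the wheel for 35 seconds
monitor = DriverMonitor()
for second in range(35):
    level = monitor.update(dt=1.0, torque_detected=False)
    if level != "none":
        print(f"t={second + 1:>2}s alert={level}")
```

The key point of the sketch: the system only knows a driver’s hands are off the wheel by the absence of sensed torque, which is why Tesla’s logs can state that hands “were not detected” in the seconds before the crash.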

Blaming the victim is rarely a good defense, which explains why Tesla is now seeing a fair bit of backlash.

On the other hand, Tesla has gone to increasing lengths to remind drivers that Autopilot is a “driver assist” system, not fully autonomous driving software. Is it reasonable to believe that Huang understood that caveat? Probably so.

All of which leads to this big question: if Tesla’s account of the crash is accurate, whose fault is it? When people misuse technology even after being told of its limitations, who’s to blame?
