Why Self-Driving Cars Face Unfair Blame in Accidents

As automakers aim to broaden the market for autonomous vehicles (AVs), they’re encountering a psychological obstacle: people assign greater blame to AVs for crashes—even when the AVs are not responsible. Experiments involving more than 5,000 participants across three studies reveal a consistent pattern: people imagine how events would have unfolded if an impeccable human driver, rather than artificial intelligence (AI), had been in control. This “what if” reaction could apply to AI in other domains too, says Harvard Business School Assistant Professor Julian De Freitas, posing a hurdle to wider acceptance and rollout of the technology.

“It’s different than when regular systems fail, because AVs are still this unfamiliar and unsettling technology,” says De Freitas, a co-author of the study and director of the Ethical Intelligence Lab at HBS. “And so, when there is an AI failure, people fixate on the technology and imagine how things could have turned out differently had a superior driver been present instead. This makes people more likely to view AV makers as liable, raising their legal and financial risk.”

Successful lawsuits and settlements against AV manufacturers can be extremely costly. They can also push up insurance premiums, or even affect whether insurers will cover AV makers at all. The promise of AVs—now gradually appearing in cities across the United States—is at risk, prompting firms to consider how to manage liability and persuade wary consumers. De Freitas examines this tension in the article “Public Perception and Autonomous Vehicle Liability,” published in the Journal of Consumer Psychology in January. He collaborated with Xilin Zhou and Margherita Atzei of Swiss Reinsurance, Shoshana Boardman of the University of Oxford, and Luigi Di Lillo of the Massachusetts Institute of Technology.

The Cruise case: a cautionary tale

In 2023, a human-driven car struck a pedestrian who was jaywalking in San Francisco. The pedestrian was propelled into the path of an autonomous Chevy Bolt operated by Cruise (a General Motors subsidiary). The AV braked but still struck the woman, then attempted to pull over and dragged her roughly 20 feet before coming to a stop. She survived but sustained serious injuries. Although a human driver could not have averted the crash, Cruise was fined $500,000, lost its San Francisco permit, and ultimately shut down its robotaxi operations—largely because it failed to fully disclose the vehicle’s role in the dragging episode.

Held to an impossible standard

But did the public fault Cruise more than it should have? To probe how consumers judge an AV’s responsibility in crashes like the Cruise episode, where the AV isn’t to blame, the researchers ran three separate studies asking participants to assess liability for a collision in which one vehicle caused the accident and the other did not (participants were not shown the dragging incident).

Study 1: Who gets blamed when not at fault?

Participants were more inclined to support suing the maker of the not-at-fault vehicle when it was an AV (38 on a 100-point scale) than when it was human-driven (31).

Study 2: The “what if” effect

When asked to imagine how the accident could have turned out differently, participants mentioned the not-at-fault vehicle more often when it was an AV (43%) than when it was human-driven (14%).

Study 3: Refocusing blame on the real culprit

When participants were reminded of the at-fault driver’s traffic offense, the disproportionate blame placed on the not-at-fault AV vanished (12 out of 100 for the AV’s maker versus 11 for the maker of the human-driven car).

Participants tended to see the manufacturer of the not-at-fault vehicle as more culpable when the vehicle was autonomous than when it was human-driven, even though in both cases the collision was unavoidable. Why? People lock onto the AV itself as an “abnormal” element, making them more apt to envision alternative outcomes in which another driver was present—but not just any driver, a “perfect” one. When researchers asked participants to finish sentences about how things might have turned out differently, responses included lines like “If only there were a person in the car, they would have been able to swerve to avoid being collided into”—even though the scenarios were constructed so that such split-second maneuvers were impossible. De Freitas explains: “In effect, these vehicles end up being held to a higher standard than human drivers, because people compare them to an imagined ideal that surpasses what a human is capable of in the same situation.”

Managing risks for AV companies

If the public blames AV firms even when they are not responsible, insurance costs for AV providers could rise, the study suggests. Some companies may face a choice between passing higher insurance expenses on to customers and cutting back their liability coverage, leaving them vulnerable to enormous exposure in the event of technological failure. “As we saw with the Cruise incident, that would be unwise, because even when it is not at fault, an accident could be potentially devastating for a company’s survival,” De Freitas says.

In fact, the risks De Freitas and colleagues identified might be amplified in reality for at least three reasons:

  • Liability in some U.S. states is uncapped, so juries may award damages that exceed actual costs if AVs are unfairly blamed.
  • Even if found only partially liable, AV companies make attractive litigation targets because they are perceived as wealthy and well insured.
  • In certain states, laws may require AV providers to pay full damages when other parties in the crash lack adequate insurance.

How to build trust

One positive takeaway from the research: the more people trusted AVs, the less likely they were to deem them liable. For those individuals, AVs may appear less abnormal, reducing the tendency to imagine idealized alternatives. A subsequent experiment showed that emphasizing the at-fault driver’s culpability reduced the inclination to blame the AI. With attention diverted from the novelty of the vehicle, people became less prone to exaggerating hypothetical hazards.

De Freitas says automakers and other firms deploying AI should:

  • Anticipate tech failures. “Recognize that these systems can malfunction, and when they do, you may face more risk than usual.”
  • Earn consumers’ trust so the technology appears less strange and alarming. “Explain what you’re doing, why it makes sense, why it’s reasonable, and what you’re continually doing to improve and be transparent, especially regarding safety.” AI-based systems must “feel more familiar, like a common part of our surroundings.”
  • Rule out counterfactuals. The public will speculate about what could have happened, even under unlikely scenarios. Companies and their legal teams will need to counter such narratives with facts, clarifying what AVs can—and cannot—do, and how that constrains possible counterfactuals.
  • Steer attention away from the technology. Shift public focus toward the genuine sources of fault rather than the tech itself. Still, redirecting attention is not the same as hiding facts—companies must stay transparent about AI’s role in any incidents.
