A recent crash involving Tesla's Full Self-Driving (FSD) system has reignited controversy over the safety of semi-autonomous vehicles.

Raffi Krikorian, former head of Uber's self-driving division, recounted an incident in which his Tesla Model X crashed while his children were in the back seat. Although he followed Tesla's guidelines and kept his hands on the wheel, the crash exposed deeper concerns about system design and driver reliance.

The 'Supervision Trap' Explained

As reported by Electrek, Krikorian points to what experts call the "supervision trap." Tesla's FSD often performs nearly flawlessly, encouraging drivers to relax their attention. Yet the system still requires human oversight during critical moments. This near-perfect automation can lull users into a false sense of security, making it harder to react when emergencies arise.

Unlike less reliable systems that demand constant vigilance, FSD's smooth operation paradoxically increases risk by reducing human engagement at key moments.

Human Reaction Time vs. System Expectations

Research underscores the danger. Drivers generally take five to eight seconds to regain focus in emergencies, but real-world incidents unfold much faster.
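To put that takeover window in perspective, a short calculation shows how far a vehicle travels while a distracted driver regains focus. The highway speed below is an illustrative assumption, not a figure from the article; only the five-to-eight-second range comes from the research cited here.

```python
# Distance a car covers during the 5-8 second takeover window
# cited in the research. The 65 mph speed is an assumed example.
def distance_covered(speed_mph: float, seconds: float) -> float:
    """Return the distance traveled in feet at a constant speed."""
    feet_per_second = speed_mph * 5280 / 3600  # mph -> ft/s
    return feet_per_second * seconds

for takeover in (5, 8):
    feet = distance_covered(65, takeover)
    print(f"At 65 mph, a {takeover}-second takeover covers about {feet:.0f} ft")
```

At highway speed, even the low end of that range means the car travels well over a football field's length before the driver is fully back in the loop.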

According to Boing Boing, the Insurance Institute for Highway Safety found that prolonged use of driver-assist systems significantly increases driver distraction, with users more likely to check phones or otherwise divert their attention.

Past accidents reinforce this pattern: in one fatal Tesla crash, the driver had several seconds of warning yet failed to respond, while a safety driver in an Uber test vehicle had only a brief moment before impact.

In Krikorian's view, these cases highlight the mismatch between human reaction capabilities and the system's demands.

Transparency and Accountability Concerns

Tesla's extensive data collection has faced scrutiny for limited access during investigations. In contrast, automakers like BYD have adopted more transparent policies, even offering coverage for damages caused by certain autonomous features.

This reflects the Chinese government's stringent marketing regulations, according to a comment on the r/SelfDrivingCars subreddit.

"If you do L2 stuff, you MUST be advertised as L2 software, not some vague "self-driving", "autonomous driving" jargon. That is why even Tesla FSD is branded as "Intelligent Driving Assistance" in China," the Reddit user added.

Aside from BYD, other Chinese brands offering similar FSD-like capabilities include XPeng (XPILOT), Aito (Huawei Advanced Driving System), Xiaomi (Navigation and Parking Suite), and Li Auto (ADiGO 3.0).

Originally published on Tech Times