When a Level 3 autonomous car kills someone, who faces prosecution? The driver who was allowed to look away, or the manufacturer? The law has no clear answer.
Hyle Editorial
The legal definition of 'self-driving' is a policy vacuum. In March 2024, a Mercedes-Benz EQS equipped with Drive Pilot, the first Level 3 autonomous system approved for use on U.S. public roads, was involved in a collision on a California highway. The driver had been watching a movie, fully compliant with the system's intended use. Within seconds of the crash, a question regulators had dodged for a decade resurfaced: when the machine is in control, who bears the blame?
The answer, disturbingly, depends entirely on which jurisdiction you're driving through. California's DMV requires manufacturers to report all autonomous vehicle disengagements and collisions, yet its framework for criminal liability remains ambiguous. The European Union's AI Act classifies autonomous driving as "high-risk," demanding strict documentation, but stops short of clarifying criminal prosecution. South Korea's Autonomous Vehicle Safety Standards, updated in 2023, permit Level 3 operations on designated highways but remain silent on the transfer of legal culpability from human to algorithm.
The Society of Automotive Engineers' J3016 taxonomy, first published in 2014 and updated several times since, defines six levels of driving automation. At Levels 0 through 2, a human must supervise at all times. Levels 4 and 5 represent genuine automation: the vehicle performs the entire driving task, within a defined operational domain at Level 4 and everywhere at Level 5, and the human is never responsible for monitoring. But Level 3, termed "conditional automation," exists in a legal no-man's-land.
Under Level 3, the car manages all driving tasks within specific operational domains — highway cruising below 40 mph, for instance — while the "fallback-ready user" must be available to resume control when the system issues a request. Mercedes-Benz's Drive Pilot, approved for use in California and Nevada in late 2023, explicitly allows drivers to watch videos, play games, or read on the central display. The driver is legally permitted to disengage from the road.
[!INSIGHT] The fundamental tension: Level 3 assumes a human can transition from complete cognitive disengagement to full situational awareness in roughly 10 seconds. Research from Stanford's Center for Automotive Research shows this transition takes an average of 27 seconds under optimal conditions — and up to 60 seconds when the driver is deeply absorbed in a secondary task.
The regulatory trap becomes apparent when you consider the fallback timeline. If a Level 3 system encounters a situation it cannot handle — construction zone, emergency vehicle, severe weather — it issues a "request to intervene." But if that request comes 8 seconds before a collision, and the human needs 27 seconds to fully reorient, who is responsible for the crash?
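The arithmetic of that trap is stark enough to state as code. As a purely illustrative sketch (the function name and the feasibility rule are assumptions for exposition, not drawn from any real system or regulation), a takeover is only plausible when the warning lead time meets or exceeds the driver's reorientation time:

```python
def takeover_feasible(warning_lead_s: float, reorientation_s: float) -> bool:
    """Return True only if the driver is given at least as much warning
    time as they need to regain full situational awareness."""
    return warning_lead_s >= reorientation_s

# Figures from the article: an 8-second request to intervene, set against
# a 27-second average reorientation time (up to 60 s when the driver is
# deeply absorbed in a secondary task).
print(takeover_feasible(8, 27))   # the Level 3 trap: the warning is too short
print(takeover_feasible(35, 27))  # a lead time that would, in principle, suffice
```

On the article's own numbers, the first call is False and the second True: the system's warning window would have to be more than three times longer than 8 seconds before the human could plausibly be held responsible for the handover.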
The California DMV Framework: Transparency Without Accountability
California remains the only U.S. state with comprehensive autonomous vehicle reporting requirements. As of Q1 2024, 42 companies hold permits for autonomous testing, and the DMV publishes detailed disengagement reports. In 2023 alone, autonomous vehicles in California reported 512 collisions and 4,859 disengagements — moments when human drivers had to seize control.
Yet the DMV's regulatory framework explicitly defers to manufacturers for determining what constitutes "safe operation." When a collision occurs, the agency requires notification within 10 days, but the question of criminal or civil liability is left to courts with no statutory guidance. The result is a patchwork of case law that will take decades to coalesce.
The EU AI Act: Risk Classification Without Liability Clarity
The European Union's Artificial Intelligence Act, which entered into force in August 2024, classifies AI systems into four risk categories. Autonomous driving falls under "high-risk," triggering requirements for:
Risk Management Systems: Manufacturers must document and mitigate foreseeable hazards throughout the vehicle's lifecycle
Data Governance: Training data must be representative, free of errors, and relevant to European driving conditions
Technical Documentation: Detailed records of the AI system's architecture, training methodology, and performance metrics
Human Oversight: Systems must enable effective human monitoring and intervention
"The AI Act ensures that autonomous vehicles meet the highest safety standards before they reach European roads. But the question of who pays when something goes wrong — that's for member state liability frameworks to determine."
The problem? EU member states have wildly divergent liability frameworks. Germany's 2021 autonomous driving law explicitly places liability on the vehicle keeper during automated mode. France requires human supervision even during automated operation — effectively nullifying Level 3 benefits. Italy has no specific autonomous vehicle legislation at all.
South Korea: Ambitious Deployment, Regulatory Lag
South Korea has aggressively pursued autonomous vehicle deployment, with designated "self-driving zones" in Sejong, Daegu, and Pangyo. The Ministry of Land, Infrastructure and Transport updated autonomous vehicle safety standards in June 2023, permitting Level 3 operations on highways under 60 km/h.
But Korean traffic law still presumes a human driver is always in control. Article 48 of the Road Traffic Act requires drivers to "operate vehicles safely and pay attention to surrounding conditions" — language that predates autonomous systems and offers no exception for Level 3 operation. Criminal liability under Korean law requires establishing negligence, which becomes philosophically incoherent when the human was following manufacturer instructions to disengage.
In 2022, a Kia EV6 equipped with Highway Driving Assist 2 (technically Level 2, but marketed with autonomous-adjacent language) was involved in a fatal collision on the Gyeongbu Expressway. The driver claimed the system failed to detect stopped traffic. Prosecutors charged the driver with criminal negligence, and the case remains ongoing. Legal scholars at Seoul National University have noted that the prosecution's arguments — that the driver should have remained vigilant — directly contradict the manufacturer's marketing, which promised "hands-free highway cruising."
The Industry's Dirty Secret: Regulatory Arbitrage as Strategy
Automakers aren't blind to these contradictions. Several industry insiders, speaking on condition of anonymity, acknowledged that Level 3 represents a calculated regulatory gamble.
[!NOTE] Mercedes-Benz, BMW, and Honda have all deployed or announced Level 3 systems, while Tesla, Waymo, and Cruise have publicly skipped Level 3 entirely, calling it "unsafe by design."
Elon Musk stated in 2022 that Tesla would move directly from Level 2 to Level 4+ because "asking a human to be alert but not driving is psychologically impossible."
The manufacturers pursuing Level 3 have structured their liability frameworks carefully. Mercedes-Benz accepts full legal responsibility when Drive Pilot is engaged — but only if the driver responded to the intervention request within the mandated timeframe. The burden of proving timely response falls on... the manufacturer's own internal logging systems. The fox is guarding the henhouse.
This creates a perverse incentive: manufacturers have every reason to log that intervention requests were issued and ignored, even if the human never had a realistic chance to respond. The technical data belongs to the manufacturer, not the driver, and certainly not to crash investigators.
The Impending Litigation Tsunami
The first generation of Level 3 criminal and civil cases is now working through courts globally. In Germany, a 2023 case involving a BMW with Driving Assistant Professional (a high-end Level 2 system that many users mistake for Level 3) established that marketing language can create "reasonable expectations" that affect liability determinations.
In the United States, a class-action lawsuit against Tesla — filed in California's Northern District in 2024 — argues that the company's "Full Self-Driving" branding constitutes fraudulent misrepresentation. The plaintiffs cite internal Tesla communications showing engineers warned that the system could not achieve safe Level 4 operation, yet marketing continued to promise capabilities the technology couldn't deliver.
[!INSIGHT] Insurance markets are already adapting. In 2024, Swiss Re and Munich Re introduced the first autonomous vehicle liability products that cover Level 3 operations — but premiums are 340% higher than traditional policies, reflecting the actuarial uncertainty. Insurers are essentially admitting they cannot accurately price Level 3 risk.
Implications: Why This Matters Beyond the Automotive Industry
The Level 3 liability crisis is a preview of a broader challenge that will confront every industry deploying AI systems that require "human oversight." The assumption that humans can effectively supervise AI — intervening when necessary but otherwise disengaging — is foundational to AI governance frameworks worldwide. It's written into the EU AI Act, the OECD AI Principles, and every major regulatory proposal.
If that assumption proves legally and psychologically flawed for driving — a domain where humans have decades of experience, immediate physical feedback, and clear safety stakes — what hope do we have for AI oversight in domains like medical diagnosis, financial trading, or military targeting?
The autonomous vehicle industry may be the first to confront this reality, but it won't be the last. Every AI deployment that relies on "meaningful human oversight" is building on the same shaky foundation: that humans can be simultaneously disengaged enough to benefit from automation, yet engaged enough to catch the machine's mistakes.
"Level 3 is not a stepping stone to full autonomy. It's a cautionary tale about the myth of human oversight. We designed a system that requires humans to be the safety net for AI, when decades of human factors research tells us that humans are terrible at being safety nets."
— Dr. Missy Cummings, Director of George Mason University's Autonomy and Robotics Center, former U.S. Navy fighter pilot
The Road Ahead
Three potential futures emerge from the Level 3 regulatory trap:
Regulatory Harmonization: International bodies develop consistent liability frameworks that clearly assign responsibility based on automation level, operational domain, and intervention timelines. This is the most optimistic scenario — and the least likely, given the pace of regulatory processes.
Level 3 Abandonment: Manufacturers conclude that Level 3's legal risks outweigh its commercial benefits. Companies skip directly from advanced Level 2 systems to Level 4, accepting that true autonomy is the only way to escape the liability limbo. This appears to be Tesla and Waymo's bet.
Liability Litigation as Governance: Courts become the de facto regulators, establishing liability precedents case by case. This is the current trajectory — and it will take decades, cost billions in legal fees, and produce inconsistent outcomes that leave both manufacturers and consumers uncertain.
Key Takeaway: Level 3 autonomous vehicles expose a fundamental contradiction in AI governance: we want humans to supervise machines, but we cannot legally or psychologically define what that supervision means. Until regulators resolve this — either by mandating true autonomy that eliminates human oversight, or by establishing clear liability transfer rules — every Level 3 deployment is an uncontrolled experiment in legal uncertainty.
Sources: California DMV Autonomous Vehicle Reports 2023-2024; EU AI Act (Regulation 2024/1689); German Road Traffic Act Amendments 2021; South Korea Ministry of Land, Infrastructure and Transport Autonomous Vehicle Safety Standards (June 2023 Revision); Stanford Center for Automotive Research Human Factors Studies; Insurance industry data from Swiss Re and Munich Re 2024; U.S. District Court Northern District of California case filings; Interviews with industry and regulatory officials conducted 2024.