Cars Shouldn’t Control Critical Safety Systems With Chatbots


In some ways, software-defined vehicles are great. By tying basically every function to a computer, you gain an enormous amount of flexibility. You can use a physical control (unless you opt for a cheapskate car company that has eliminated nearly all of them), the car’s infotainment screen, a mobile app, or even voice commands and advanced large language models to run practically anything in the car.

But things can go wrong. If you want a perfect example of how this software-defined era is steering us into dangerous territory, look no further than a terrifying crash that recently happened in China.

At 1:00 AM on February 25, the driver of a Lynk & Co Z20 was cruising down a pitch-black highway. Wanting to dim the cabin, the driver issued a simple voice command: “Turn off the reading lights.” (Aka “map lights” in the US.)

The vehicle’s built-in chatbot misunderstood the command. Instead of just dimming the interior, the system executed a blanket command and instantly killed all vehicle lighting, including the exterior headlights! Plunged into total darkness at highway speeds, the driver frantically yelled at the AI to turn the lights back on.

The system’s chilling response? “This function is temporarily unavailable.”

Watch the video on Weibo here.

Because the automaker had chased the minimalist, screen-centric trend and stripped out the physical headlight stalk, the driver had no way to rely on muscle memory to flick the lights back on. Blinded and locked out by a confused chatbot, the driver ultimately crashed head-on into a highway guardrail.

Thankfully, no one was killed. Lynk & Co immediately issued a public apology and pushed an emergency over-the-air (OTA) update that revokes the voice assistant’s ability to turn off the headlights while the car is in motion. But in my opinion, the question we should be asking isn’t why the AI got confused.

The question is: Why did the infotainment chatbot have API access to the headlights in the first place?

The “Because We Can” Fallacy

Automakers, especially the more “tech forward” ones, are currently treating the vehicle’s CAN bus (the internal network that controls the physical hardware) like an open playground. In their rush to market a futuristic, “smart” cabin, they are hooking voice assistants directly into the vehicle’s core systems with essentially root-level access.

Because it is possible to let the voice assistant control the wipers, the headlights, and the glovebox, they do it. It looks fantastic in a brightly lit showroom demo. But it fundamentally misunderstands what an AI actually is: an unpredictable probability engine, not a hard-coded logic switch. When things go wrong (and they do!), the result is usually a minor inconvenience. However, it can be a disaster if a critical system is affected in the wrong way at the worst time.
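To make that concrete, here is a minimal sketch of the failure mode, in Python. Everything here is invented for illustration (the `set_lights` tool, the `send_to_can_bus` stub); it is not Lynk & Co’s actual code, just what a naive assistant-to-hardware hookup tends to look like:

```python
# Hypothetical tool exposed to an in-car LLM assistant. None of these
# names come from any real automaker; this illustrates the failure mode.

def send_to_can_bus(zone: str, state: str) -> None:
    # Stand-in for the real vehicle-network write.
    print(f"CAN bus: lights zone={zone} -> {state}")

LIGHTS_TOOL = {
    "name": "set_lights",
    "zones": ["cabin", "headlights", "all"],  # "all" is the footgun
    "states": ["on", "off"],
}

def execute(tool_call: dict) -> None:
    """Naive dispatcher: whatever the model asks for, the car does."""
    if tool_call["name"] == "set_lights":
        send_to_can_bus(tool_call["zone"], tool_call["state"])

# The model is a probability engine. Most of the time "turn off the
# reading lights" becomes {"zone": "cabin", "state": "off"} -- but
# nothing stops it from occasionally emitting zone="all" at highway speed.
execute({"name": "set_lights", "zone": "all", "state": "off"})
```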

Forgetting the Principle of Least Privilege

In the cybersecurity world, there is a golden rule called the Principle of Least Privilege. It dictates that a program should only be given the exact, minimum level of access it needs to do its job, and absolutely nothing more. Some automakers seem to have completely forgotten this rule.
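Here’s what least privilege could look like in practice, as a rough Python sketch. The capability names and the `dispatch` function are hypothetical, not any automaker’s real API; the point is that safety-critical hardware never appears in the assistant’s vocabulary at all:

```python
# Least privilege, sketched: the assistant's dispatcher only knows about
# non-critical capabilities. Safety-critical hardware isn't "denied" --
# it simply isn't reachable through this interface in the first place.

ASSISTANT_CAPABILITIES = {
    "media.play", "media.pause",
    "nav.set_destination",
    "climate.set_temperature",
    "lights.cabin",  # interior only -- no headlight entry exists
}

def run_noncritical(capability: str, **kwargs) -> str:
    # Stand-in for the real infotainment handlers.
    return f"OK: {capability} {kwargs}"

def dispatch(capability: str, **kwargs) -> str:
    if capability not in ASSISTANT_CAPABILITIES:
        return f"'{capability}' is not available via voice."
    return run_noncritical(capability, **kwargs)

print(dispatch("lights.cabin", state="off"))       # allowed
print(dispatch("lights.headlights", state="off"))  # not in the vocabulary
```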

An AI assistant is fantastic for handling complex, non-critical tasks. If you want to use voice commands to find the next available pull-through DC fast charger or change your Spotify playlist, have at it. If something goes wrong, the worst case is that you pull over and check your phone.

When you’re driving and making dozens or hundreds of decisions every minute, your cognitive load is already maxed out. In those high-stress moments, needing to argue with a dashboard AI or take your eyes off the road to peck at a glass menu board just to trigger the windshield wipers isn’t just an annoyance. It’s a critical safety hazard. Muscle memory saves lives, and you cannot build muscle memory for a digital button or a voice prompt that malfunctions.

Defining the “No-Go Zones”

There needs to be something equivalent to an “air gap” in automotive software architecture. If an AI hallucinates, the worst thing that should happen is it plays the wrong song or routes you to the wrong coffee shop. If a critical feature is going to be accessible, there should at least be a physical control you can use to quickly override the function and stay in control.
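Lynk & Co’s OTA patch reportedly does something along these lines: a deterministic interlock that sits between the assistant and the hardware and gets the final say. A minimal sketch, with invented names (`guarded_execute`, `apply_to_hardware`), assuming the guard lives in plain hard-coded logic rather than in the model:

```python
# Sketch of a hard-coded interlock between the assistant and the CAN bus.
# Deterministic guard code, not the LLM, decides what actually happens.

SAFETY_CRITICAL = {"headlights", "wipers", "brakes", "steering"}

def apply_to_hardware(actuator: str, command: str) -> str:
    # Stand-in for the real actuator write.
    return f"OK: {actuator} -> {command}"

def guarded_execute(actuator: str, command: str, speed_kmh: float) -> str:
    # Rule: a moving car never switches off a safety-critical actuator
    # on a voice command, no matter what the model asked for.
    if actuator in SAFETY_CRITICAL and command == "off" and speed_kmh > 0:
        return f"Refused: cannot switch {actuator} off while in motion."
    return apply_to_hardware(actuator, command)

print(guarded_execute("headlights", "off", speed_kmh=95.0))
# -> Refused: cannot switch headlights off while in motion.
```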

There may even be a role for regulators here. Before we even begin to tackle the massive regulatory hurdles of self-driving cars, agencies like NHTSA may need to step in and define strictly enforced “No-Go Zones” for in-car AI. But any rules should be written so they don’t stifle innovation, and they should leave outs for future circumstances, so we don’t repeat mistakes like the long-standing US rules that effectively banned adaptive headlamps.

Until the auto industry learns to separate the tablet from the essential functions, things can go to dark places a lot faster than any driver might like.

