Tesla has a history of using remote diagnostics to snitch on drivers instead of allowing investigators to do their job.
It’s a valid concern that your car manufacturer will use the interior cameras to blame a crime on you.
These automatic tagging features also call into question the bias in the algorithms. What does this mean for drivers with disabilities? Is it possible that a safe driver with ADHD or Tourette's etc. might be flagged as inattentive during an autopilot crash and therefore found at fault?
This is a legit concern. There are many edge cases with humans.
I worked at a pizza place in high school. There was a delivery driver with cerebral palsy who was constantly being accused of being drunk. Sometimes customers just called the cops on him. It was sad; he was a super nice young guy already on a short clock and he had to deal with that all the time.
> What does this mean for drivers with disabilities?
You mention ADHD and Tourette's; a leg-disabled person (e.g. paralyzed from the waist down) can still drive just fine: special hand-operated equipment is installed in the car to let the hands take over for the foot pedals.
By design, this driver may look down frequently at their controls (let's assume not every driver is a pro yet and knows them by heart), which could easily be misconstrued as looking down at a mobile phone, placing immediate bias against them on video before the facts about hand controls are revealed in court.
> By design this driver may be looking down frequently at their controls
Actually not, for hand controls. No more than you look at the pedals. It goes into muscle memory just as much.
I have been a passenger with two different hand-control drivers. Both were as smooth as a driver using foot pedals. I once moved their car for one of them, in a tiny town on a quiet street, for about 5 blocks at low speed. It was difficult for me: zero muscle memory and zero "feel" for the car. It does take some practice, I will give you that, but you never have to look at the lever, even as a noob.
Besides, I don't think there's any more comfort or lessened stakes to be had knowing that the driver that t-boned your kid was looking down at their prosthetic leg or custom controls instead of a smartphone. Or in the middle of an autistic break or epileptic fit or attention deficit fugue or narcoleptic episode.
Fine, the software's enum value of "EYES_DOWN" doesn't quite capture the full range of reasons eyes might be down. But it doesn't necessarily matter.
While I agree with the concern, I think there's almost zero chance that a neural net trained on people looking at phones would classify a driver looking at hand controls as that same thing. The neural net would more likely have another classification of "unknown attention", being unable to conclusively classify the driver's attention.
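To make the "unknown attention" idea concrete, here is a minimal, entirely hypothetical sketch of how a classifier could refuse to commit to a label when its confidence is low. The class names, threshold, and logits are invented for illustration and are not Tesla's actual categories or API.

```python
# Hypothetical sketch: an attention classifier that falls back to an
# "unknown_attention" label when no class clears a confidence threshold,
# rather than forcing e.g. hand-control drivers into "eyes on phone".
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative class names; a real system would have its own taxonomy.
CLASSES = ["eyes_on_road", "eyes_on_phone", "eyes_down_other"]
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, chosen for the example

def classify_attention(logits):
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < CONFIDENCE_THRESHOLD:
        # Out-of-distribution input (e.g. hand controls the net never saw):
        # report "unknown" instead of guessing a trained class.
        return "unknown_attention"
    return CLASSES[best]

print(classify_attention([4.0, 0.5, 0.3]))  # confident: "eyes_on_road"
print(classify_attention([1.0, 0.9, 0.8]))  # ambiguous: "unknown_attention"
```

The point of the sketch is only that a low-confidence fallback class is cheap to implement, so "the net would call hand controls a phone" is not the only possible failure mode.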
Is there any incentive for the drivers to not just tape over this camera? As far as I see this information will almost certainly only be used against you. But then again, perhaps taping over the camera will be used against you as well.
The OP is suggesting neuroatypical people might be flagged as inattentive despite being attentive, due to being underrepresented in the training set.
I think the question was asked with the implied addition 'What Tesla says won't matter in court.'
I don't agree with that though, for two reasons.
First, it will be harder for them to fight, meaning it will still be a source of bias even if the court isn't biased at all. Even if they can show it was a false flag, that is time and money others wouldn't have to spend. Potentially the time and money of someone who was in a life-altering crash.
Second, it would not be surprising if the court ends up being biased towards the manufacturer and gives them undue weight because they have the fancy algorithms and all the well-paid lawyers.
> It’s a valid concern that your car manufacturer will use the interior cameras to blame a crime on you.
I don't see when they would ever be held legally responsible for a crime, outside of being 'blamed' in dozens of national news headlines; the current legal landscape has continued to uphold that you are responsible for the car even when it's trying to drive itself.
Even with Navigate on Autopilot, you have to continuously apply force to the wheel to prevent it from nagging at the driver. The feature mentioned in the article would be used if regulation increased to require the car use a camera/similar system to determine if a driver is inattentive while autopilot is enabled.
"Your hands are just there for regulatory reasons"
Then Tesla turns around and blames you for acting like that's true when a known defect kills your wife.
(aside: Why do you think Mobileye had the same tech as AP1 in multiple other cars but never turned it on continuously? Why do you think it was always limited to highway speeds and correcting a lack of input? Well that's because it can't tell an overhead sign from a stopped firetruck very well...)
Are we really then expected to go back and say "well, they had a nice disclaimer"?
-
It's funny really, because calling it Autopilot wasn't enough (spare me the airplane comparisons). Now there's a feature list called "Full Self Driving... capability"
Like, the term FSD is used as part of a label for a collection of features, of which NOT A SINGLE ONE is even a COMPONENT of self driving, because EVERY SINGLE ONE requires full time driver attention.
They're literally using the fact that theoretically, one day, the SENSORS MIGHT OVERLAP WITH WHAT FSD REQUIRES to sell it as... not FSD. FSD capability.
What a joke.
-
Tesla has gotten away with homicide, and I guess we're ok with them continuing to.
The useless comparisons about "Tesla with AP is safer than Tesla without", show me the comparison that Tesla with AP is safer than AEB+LKA+ACC (pretty much AP, minus the party trick that lulls drivers into a false sense of security), and I'll show you a made up study.
Because that would be AP being exactly what it already is, minus the one mode where the driver is supposed to be a safety net for the computer. Where the driver already has responsibility to make sure it doesn't crash! It's literally like removing the ability to steer your bike with the training wheels, the training wheels are still there!
It's simple really, Tesla has put marketability of a safety feature, over the actual safety benefits, at the cost of human lives.
I agree that the marketing is misleading and unreasonable but I think the last part of your comment is going too far.
I drive a car without any of those features. I think my car's level of safety is acceptable. If AP can beat that level of safety, it's good enough. It's okay if AP isn't the absolute safest mode, just like it's okay if companies sell cars that "only" have a 4-star crash rating.
Exactly this. People are expecting car "autopilots" to exceed all human drivers in every situation, which is a completely silly place to move the goalpost to.
I see it more as an assistant that makes us all better drivers by letting us concentrate on stuff the computer can't do.
Even my Hyundai can drive automatically perfectly well at highway speeds and follow the road and even slow down with the traffic. This lets me relax behind the wheel and concentrate on other things than keeping the car in the lane.
I feel I'm a lot less tired and more attentive when driving with an "autopilot"; all the little micro-adjustments needed wear on my brain like nothing else.
I agree, but I interpret the argument as such: Tesla, by using language like "your hands are only there for regulatory reasons", makes no effort to dispel the myth that the car's autopilot can exceed all human drivers in every situation. It ought to be on Tesla's head to instruct the driver that they are the safety net for the computer.
To use the aeroplane analogy that Tesla wants to market: the computer is the first officer in charge of flying the plane, the driver is the captain in charge of everything else.
Yea, the Tesla marketing team has gone a bit overboard and some people are taking their spiel as gospel.
On the other hand, even the new VW ID.3 can drive on regular roads, manage roundabouts, and adjust speed according to speed signs; they're just not making a huge fuss about it.
No one is saying AP should be the safest thing ever; my comment certainly isn't saying that AP should be some sort of super-human driver.
The problem is AP is actively making itself less safe than it would be without marketing oriented features even with no additional strengths.
AP could have the same collision avoidance and lane centering capabilities, but not allow the driver to engage them in "Autopilot" style. That's exactly what the AP1 hardware did in other cars.
AP would still avoid every accident it could before, but would additionally not kill people when it fails to uphold the very strongly implied (even if legally disclaimed) promises its marketing and its creator love to make.
-
AP has a set of convenience features and safety features.
The convenience features all come with the disclaimer that the human must be attentive at all times while they're engaged.
And in cases where the human is not fully engaged because they're placing too much trust in the convenience features, suddenly the convenience features are creating a less safe situation than if the driver did not have them at all, since the driver is now driving distracted.
The safety features cannot close the gap between a distracted driver and a non-distracted driver; that's why they're not self-driving cars.
-
Or to put it another way, good enough is good enough for the safety features. But "good enough" is actually the most dangerous level of the convenience features, since they lull operators into a false sense of security with deadly consequences.
That's why Google abandoned their equivalent project (which Elon spoke to them about before developing AP. Google's project was called... Autopilot).
It was demonstrated that giving drivers that sort of "good-enough" convenience feature would be encouraging distracted behavior. But that didn't stop Elon.
-
In fact, thinking about it, how sad is that? Google used cameras in their vehicle in conjunction with AP features and proved that people would misuse the system.
Knowing that Elon was in contact with them about the program, isn't it such an odd coincidence that Elon has pushed back on driver monitoring?
It's almost like he knows exactly what it would reveal, but doesn't want to invite the trouble that comes with tracking such an inconvenient truth.
> The problem is AP is actively making itself less safe than it would be without marketing oriented features even with no additional strengths.
That's a shame but I'm not going to worry all that much about it killing people if it's still safer than a basic car.
> Or to put it another way, good enough is good enough for the safety features. But "good enough" is actually the most dangerous level of the convenience features, since they lull operators into a false sense of security with deadly consequences.
No. We were talking about the final level of safety after factoring in the hubris-caused danger. The level of safety before factoring in hubris was much better than "good enough".
> That's a shame but I'm not going to worry all that much about it killing people if it's still safer than a basic car.
This doesn't make sense unless you just like senseless death.
It can be safer than a basic car and not kill people.
Or it can be safer than a basic car and kill people for the sake of marketing.
Since like I said, AP should not leave you feeling rested; full attentiveness is required, more than usual. If you're operating under the assumption you can relax a little with it active like a co-pilot or watcher, you're falling for its most dangerous trap.
-
Also you realize basic cars have been picking up the safety features for a while now right?
The Toyota Corolla, the baseline of basic cars, has had LKA for 3 or 4 years now? And will warn a drowsy driver in case of excessive interventions.
It's sad again, because if anything Tesla is poisoning the well on these features since they're lulling people who don't even have the cars into a false sense of security.
AP marketing makes it seem like it can make a drive more relaxing (with Tesla salespeople using that exact phrasing on a test drive, mind you), but the moment it does that, it's making you less safe than if it didn't pretend to be capable of that.
Now when other manufacturers intentionally omit continuous lane centering, it seems like an omission of a convenience, not an omission of a dangerously misleading liability to the safety of its users.
If the car only saves 20 lives when it could have saved 70 lives, I don't care if you want to call that "killing people", I think that car is fine.
I don't "like senseless deaths" but I think it's acceptable if people like the former car better than the latter car and purchase more of it.
> If you're operating under the assumption you can relax a little with it active like a co-pilot or watcher, you're falling for its most dangerous trap.
If it's still safer than a car without those features even after falling into the trap then my reaction is a big shrug.
Here the car saves 100000 lives, but kills 5 people, when it could have just not killed the 5 people! And still saved the same 100000 lives!
That's a problem! To any normal person, that's a problem.
You're doing this weird math where actively murdering 5 people but saving 100000 is the same as just saving 99995, but it's not when the 5 murders were not a requirement to save 100000.
Maybe your confusion is thinking the safety features are the ones doing the killing when they're not? It's extra trimmings built on top of the safety features that are doing the killing.
In other words:
> If it's still safer than a car without those features even after falling into the trap then my reaction is a big shrug.
It's not safer! Because that sentence is about the convenience features, not the safety features!
LKA, AEB, crash avoidance: those all operate separately from the convenience features! And it's not a guess about how the internals work either; Tesla literally sells things like "Navigate on Autopilot" separately.
It's really simple logic. The safety features save lives. The convenience features, by definition, cannot save lives that the safety ones didn't.
When a Tesla avoids hitting a car coming into its path, it's not because Navigate on Autopilot was on; it's the Active Safety Features functioning.
But when a Tesla hits a stopped firetruck after Autopilot lulled the driver into thinking they wouldn't have to brake, the active safety features can't do anything! Because the convenience features are subject to the same limitations as the active safety features.
---
That's the crux of the issue that you keep missing. No one is saying the active safety features should save more lives, they're saying the convenience features need to stop tricking people into driving distracted. Because those people sometimes die in accidents that wouldn't have happened if they didn't have them.
The safety features would still save the same number of people, just no one would die needless deaths in the name of marketing and looking cool.
> You're doing this weird math where actively murdering 5 people but saving 100000 is the same as just saving 99995, but it's not when the 5 murders were not a requirement to save 100000.
I don't see it that way.
It's a completely random set of people that die either way. And all the deaths basically boil down to "driving is dangerous". It's not one set of people being saved and a different one dying. It's all one group, so only the total number matters.
> It's not safer! Because that sentence is about the convenience features are not the safety features!
Your sentence was about that. My counterpoint was about more than that.
I'll reword my point without the word safer: If the risk reduction from the safety features is bigger than the risk increase from the convenience features, I think things are acceptable.
> The safety features save lives. The convenience features, by definition, cannot save lives that the safety ones didn't.
I wasn't trying to say the convenience features saved lives.
> No one is saying the active safety features should save more lives, they're saying the convenience features need to stop tricking people into driving distracted.
I'm the one saying that safety features need to save a certain number of lives. And that number is "enough to make up for the risk caused by the convenience features".
> Because those people sometimes die in accidents that wouldn't have happened if they didn't have them.
Eh, someone that's distracted could easily die in similar accident in a dumb car. To me it's all risk vs. risk. Nobody is getting murdered by the car as some kind of sick trade-off. It's not a trolley problem. It's just undoing some of the safety. But undoing the safety is acceptable, just like a car that had none of those safety features (and none of those convenience features) would also be acceptable.
> The safety features would still save the same number of people, just no one would die needless deaths in the name of marketing and looking cool.
If it was just marketing and looking cool that would be one thing. But it's also quite hard mentally to pay attention when everything is on auto. And it's a much nicer driving experience that's inherently dangerous no matter what the marketing says. So I don't think it's possible to have the convenience without the danger. You'd have to disable AP.
> It's all one group, so only the total number matters.
You're very alone in this line of thought.
If you save 100000 people _by murdering_ 5 people, it's unfortunate and a hard choice, but at least arguably good. Most people will be ok with this (this is pretty much how the world turns to some degree)
But here you save 100000 lives, then murder 5 victims, and those 5 victims were murdered just to murder them; no one being saved depended on it.
Most basically ethical people would consider that indefensible.
Maybe you're confusing this situation with the former, where lives are treated as fungible to save great numbers.
That can be tricky ethically, but at least the greater good is on your side. Here you're treating lives as fungible for the sake of it.
It's not really an admirable thing to lack the ability to see why that's wrong, so I implore you to dig a little deeper.
This isn't sacrificing people to help more people, it's helping people then sacrificing some people for fun.
If you talk someone down from a ledge do you now go around thinking you get to murder someone and it cancels out?
> If it was just marketing and looking cool that would be one thing. But it's also quite hard mentally to pay attention when everything is on auto. And it's a much nicer driving experience that's inherently dangerous no matter what the marketing says. So I don't think it's possible to have the convenience without the danger. You'd have to disable AP
This is literally what every comment in this entire thread by me has said. Maybe you finally get it.
A car without the convenience features saves lives and doesn't kill.
I basically agree with everything you said about saving some people and murdering others.
But "saving less people" is a totally different situation. If I donate some money to charity, and I'm asked to donate more money, it's okay for me to say no.
So the question is whether these features fall under "saving some people and killing others" or if they fall under "saving less people".
You believe it's the former, and I believe it's the latter.
Remember that even pure safety features are imperfect and they are subject to the butterfly effect. Anything you change about a car design will cause some shift in who dies while driving that car. You have to determine if there are really two separate risk groups, one sacrificed for the other, or if it's all really one risk group.
As a thought experiment, imagine we made a car that was all-around much safer, and then we let 14 year olds drive it. Well that actually happens, with mini-cars that have strict speed limits. (Usually they're legally not cars.) We could only allow normal licensed drivers to use them, and they'd be very safe! But then, purely for convenience, we let younger teenagers use them. Is that a moral abomination? Some of them are going to die. But I think it fits into the normal risk of driving, and as long as a 14 year old in one of those is safer than a normal driver in a normal car, I'm satisfied.
> So the question is whether these features fall under "saving some people and killing others" or if they fall under "saving less people".
There are separate sets of features that you can't just glue together, they're not even offered together, you have to pay separately for the second set.
One set saves people, one set kills people. The set that kills people, doesn't save people.
You think it's ok to sell the set that kills people because they have a set that saves people.
I know it's not ok to sell features that kill people and don't save anyone. It'd be one thing if it could save people, but it doesn't. The other set does that.
> You think it's ok to sell the set that kills people because they have a set that saves people.
Only if all three of the following are true:
A) they both affect the same risk factor (Just because they're "separate" features and sold separately doesn't make this false. And the way the autopilot is so intertwined with the others, I don't think it's really fair to say they are truly separate features. They build upon each other.)
B) it's impossible to run the dangerous one without the safety one
C) running both is safer than running neither
If you want to make cars safer, then you should be insisting that all cars be safer. The car that kills people because it lacks any of these electronics is just as bad.
To elaborate on (A), with these particular features I don't see the car as causing distinct people to die compared to who it's saving. It's not trading between different groups of people in any meaningful way. It's just a single number of how many people die per mile. With other features you could get into that moral dilemma. With these features I don't think it does.
I don't think it's going too far. There are certain features of a vehicle for which the manufacturer could be forgiven for using borderline deceptive marketing to move the product. Yet "Autopilot" that isn't capable of fully replacing a driver, is marketing doublespeak that has contributed to the deaths of Tesla motorists who ostensibly believed that "Autopilot" actually means autopilot.
Tesla & Co have used telemetry for PR to save face for their autopilot features, independent of the courts. If your car goes through a green light and gets T-Boned but the car flags you as being inattentive, it would be an interesting case, because without the data it would be clear that you were not at fault in any way.
> These automatic tagging features also bring to question the bias in algorithms. What does this mean for drivers with disabilities? Is it possible that a safe driver with ADHD or Turrets etc might be flagged as inattentive during an autopilot crash and therefore at fault?
I'm not understanding your point here, and you seem to contradict yourself a little bit.
A driver with a disability that makes them inattentive can't be a "safe driver." Having a disability that makes them inattentive, appearing inattentive, and getting into a crash (autopilot or not), seems to be a "three strikes and you're out" situation, IMO. You'll have a very hard time convincing me the person wasn't at fault in that situation.
Certain disabilities prevent people from driving cars safely. It's unfortunate, but it's just the way it is.
>A driver with a disability that makes them inattentive can't be a "safe driver."
A driver with a disability that makes a computer system classify them as an inattentive driver is not the same thing as a driver with a disability that makes them inattentive.
This could certainly cause the UI to falsely show alerts or refuse to enable the autopilot.
But when it comes to actually reviewing a case in an accident, you would want to review the actual footage, not just the metadata.
Looking at just the metadata is already problematic. For example, the “hands on wheel” metadata in a Tesla is based on angular force detected on the wheel, not actual hands on the wheel. Tesla will report that the driver didn’t touch the wheel in X seconds, when it would be more accurate to say the driver didn’t apply angular force, although their hands might have been resting on the wheel the whole time.
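The torque-versus-hands distinction described above is easy to illustrate. The following sketch is not Tesla's actual algorithm; the threshold value and sample data are assumptions invented for the example, showing only why a torque-based heuristic reports relaxed hands the same way it reports no hands.

```python
# Illustrative sketch of torque-based "hands on wheel" detection: a hand is
# only registered when measured steering torque exceeds a threshold, so hands
# resting on the wheel (near-zero torque) are logged as "hands off".
TORQUE_THRESHOLD_NM = 0.3  # assumed detection threshold, in newton-metres

def hands_detected(torque_samples_nm):
    """Return True if any sample shows torque above the threshold."""
    return any(abs(t) > TORQUE_THRESHOLD_NM for t in torque_samples_nm)

resting_hands = [0.02, 0.05, 0.01, 0.03]  # hands on wheel, no force applied
active_hands  = [0.02, 0.45, 0.10, 0.50]  # occasional deliberate nudges

print(hands_detected(resting_hands))  # False: "hands off" despite hands on
print(hands_detected(active_hands))   # True
```

In other words, the logged metadata measures applied force, not presence, which is exactly the gap between "didn't touch the wheel" and "didn't apply angular force".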
Safety is in the sum of all things, AKA performance. An ADD person may drive a car safely and we might not know why; what we do know is performance.
If an ADD person does then get into an accident, should they endure higher civil and criminal risk, regardless of prior performance and in light of their condition?
What do you mean by snitch? I think it's a perfectly valid reason to "snitch" if the driver was truly at fault.
There's some great reasons for driver-facing cameras- and if one of those trade-offs is that drivers are less likely to "get away" with causing a crash, then that's fine in my books.
Bias feels like a different argument entirely, though I do share your concern about it...
If the data could be used against you, you should also have access to the data so it could be used to support your case.
When there is asymmetric access to data, it's inherently unfair because the side with greater access will cherry-pick the data that supports their side.
For example, there might be some angle, some time window, some sensor, etc. that makes you look attentive, while a different angle/time/sensor might make you look inattentive.
So, turns out a driver is more likely to be snitched on if there's not a piece of tape covering the camera.
So, it's got a bias against people who don't cover the camera.
(Full disclosure: I have not yet covered mine. But one of the many reasons I do not install dash cams in my cars is that I am not interested in having evidence against me when I start shooting up other cars in traffic [1])
>What do you mean by snitch? I think it's a perfectly valid reason to "snitch" if the driver was truly at fault.
The driver is always truly at fault in the current legal environment. A driver can't just blame it on the car to escape liability. The comments here talking about the legal ramifications for these tags are misguided. For these tags to matter, there would need to be legislative changes to shift the liability to the autonomous system and that would likely only happen once we reach at least level 4 autonomy. Plus once we get to that point, whether the driver is paying attention or not shouldn't matter anyway.
> The driver is always truly at fault in the current legal environment.
That's...not entirely and exclusively true.
> A driver can't just blame it on the car to escape liability.
If it was provably due to a manufacturing defect that the driver could not reasonably have known about, I think you are wrong.
If a manufacturing defect is involved, then whether or not the driver is also liable, the manufacturer and every party in the chain of commerce (both upstream—such as suppliers of the defective component—and downstream from the vehicle manufacturer, such as dealers, etc., which are mostly irrelevant for Tesla) is liable.
You are right about the exception of a defective product. I should have included that disclaimer in my original comment. However defective is an important qualifier. Autopilot is not designed to be the primary driver of the car. A Tesla on Autopilot rear-ending another driver is no more defective than any other car on cruise control rear-ending someone. Autopilot would likely need to overrule human action to be considered defective. I was assuming that is an unlikely enough case to avoid mentioning it. Even if that did occur, that would be evident in other logs the car keeps so this attention monitor wouldn't be relevant.
Before you go all conspiracy theory, try reading the actual article. It's opt-in, and the data is neither recorded nor uploaded to Tesla without that opt-in control.
We're like 300 yards down the slippery slope and gaining speed fast. Today's "It's opt-in and isn't being recorded" is tomorrow's "If you don't agree to allow us to use this video in any manner we deem fit, or if you leave 4G+ coverage for an excessive period of time, your car may not start".
Hah, if the main barrier to autopilot for cars is liability and insurance, passive biometric monitoring of the driver is certainly one way to shift liability around.
The real feature should just be a button that shifts insurance liability to Tesla for the period of the autopilot engagement and charges you a floating premium for it based on traffic conditions per mile/km while the autopilot is on.
The real problem is "free" auto pilot means people will use it often enough to make a catastrophe an actuarial inevitability, whereas if they have to pay to text while driving on autopilot, they're going to do it less, and live long enough to be killed by something else.
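The "floating premium" idea above can be sketched in a few lines. This is a toy model, not any real insurance product: the base rate, the traffic tiers, and the multipliers are all invented for illustration.

```python
# Toy sketch of a per-mile autopilot premium: liability shifts to the
# manufacturer while autopilot is engaged, and the rate scales with the
# assumed actuarial risk of current traffic conditions.
BASE_RATE_PER_MILE = 0.05  # assumed base premium, in dollars

TRAFFIC_MULTIPLIER = {
    "light": 1.0,
    "moderate": 1.5,
    "heavy": 3.0,  # congestion means more edge cases, so a higher rate
}

def autopilot_premium(miles, traffic):
    """Premium owed for an autopilot session of `miles` in given traffic."""
    return round(miles * BASE_RATE_PER_MILE * TRAFFIC_MULTIPLIER[traffic], 2)

print(autopilot_premium(20, "light"))  # 1.0
print(autopilot_premium(20, "heavy"))  # 3.0
```

The design point is the metering itself: a nonzero marginal cost per autopilot mile is what would discourage the "free" overuse the comment below describes.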
Hm... are you suggesting that Tesla's motivation here might be to develop a logging system to defend itself against liability claims by demonstrating that the driver was not paying attention, as opposed to a system to take measures at the time to prevent the crash?
I would regard doing the former, while not the latter, to be deeply unethical, as, at least in my book, motive matters.
I am not saying that I expect Tesla to do so, but it is possible that the incentives could be to do just that: for example, if it turns out to be impractical to produce an effective warning and intervention system that is not regarded as too intrusive by its customers. Don't take that route, Tesla. Nothing but either true (not necessarily perfect) self-driving, or effective monitoring and intervention, is acceptable.
Not really, the system primarily exists for two reasons:
1) in case future regulation requires "monitoring if the driver is paying attention to the road"
and
2) for their robo-taxi service that might or might not come out within the next 5 years as Elon says (hint: it will not)
I own a Tesla and would love if this internal camera were to be available to Sentry mode/saved with dashcam footage (for insurance reasons, internal dashcams are great for strengthening the driver's case), but that's not possible currently.
Why? Did they suddenly engineer themselves into a new type of corporation that doesn't drop its ethics at the first opportunity?
I have nothing specifically against Tesla, but I fully expect them to simply do whatever if they are incentivised by the right profit structure. The "best" corporations have disappointed me in this manner, no shame on Tesla for existing, but that is what it will do, given the chance.
"Tesla" is not your human friend that you can talk to or appease in the hope that it won't do what is its nature. Talking into the void won't tame it.
> The real feature should just be a button that shifts insurance liability to Tesla for the period of the autopilot engagement and charges you a floating premium for it based on traffic conditions per mile/km while the autopilot is on.
well this seems inevitable now - the amount you pay out of pocket for body damage repairs depending on the machinations of high-speed-trading algorithms acting on the sensor data streaming out of your car. not just the price of autopilot fluctuates but every non-essential function distracting enough to influence the actuarial tables. want to listen to music? in this traffic? too expensive!
You mean that you'd have the choice between using autopilot as is (with you having the liability, not allowed to text) or switch to Tesla having the liability and be free to text?
That would require the system to be safe enough to text while using it.
If you mean that the feature should always be pay-to-use, that seems ridiculous. Why try to minimize the use of a driver assist feature?
A car you can drive drunk is the only meaningful definition of a self-driving car, imo.
I mean both, and specifically that driver assist should be pay to use to offset usage because it is likely higher net risk for a collision than a human. Free driver-assist has a "tragedy of the commons" problem, or a moral hazard, where there is no cost to over-using it and courting a collision event, so this overall risk can be reduced by metering its use.
Also, yes, the system should be safe enough to text while using it. That's what "self driving car" should mean. Another criteria would be that a person should be able to use it with a blood alcohol level higher than is legal today as well, and that it absolves people of an "impaired driving" charge.
There is a division between driver assist and full autonomous self driving, but these are features that could be pay per use based on relative actuarial risk.
To people who say you can't put a price on human life, clearly you do not have auto insurance?
> A car you can drive drunk is the only meaningful definition of a self-driving car, imo.
That's kind of a messy definition. A drunk person might start messing with the controls, and a car that is perfectly able to drive itself might not be able to handle that. You could fix that with a breathalyzer but it would be ridiculous if the difference between self-driving and not-self-driving is the presence of a breathalyzer.
So I would say something more like "drive from the passenger seat" or "drive while asleep".
Technically I would agree with you, but economically, why would anyone drive from the passenger seat, and almost nobody will look forward to sleeping on the road - but my drunk definition applies to everyone in the world who drinks wine, beer, or spirits.
A car you can drive drunk is the problem the tech solves, which is what makes it both viable, and provides success criteria instead of making it just about unreasonable hypothetical standards of perfect safety.
The entire ML and machine vision endeavour should be called what it is, a moonshot to create a drunk driving car.