- cross-posted to:
- [email protected]
Unless you are a person of color or a child.
Tesla doesn’t have any driverless cars.
It’s not just Tesla, though. Research has shown that none of the current detection tech is good at detecting them.
AFAIK it’s more “isn’t as good at detecting them as it is at detecting the ‘control group’, which is mostly white and adult”. That’s an important distinction, because it’s probably much, much better than a human driver at detecting people in general; it’s just somewhat worse with certain groups that aren’t well represented (but should be) in the training data.
For example, an article that claims in the headline that they’re “blind to dark-skinned people” actually explains “In fact, the system struggled to recognize individuals with darker skin tones almost eight per cent more frequently”. 8% isn’t nothing, and should be improved, but definitely doesn’t equate to blindness.
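For a sense of what a relative gap like that means in practice, here’s a minimal sketch of how a per-group miss-rate comparison is typically computed. The counts are invented for illustration and are not from the study; I’ve just picked numbers that produce an 8% relative gap.

```python
# Toy per-group detection comparison. All counts are made up for
# illustration; they are NOT from the study in the article.

detections = {
    # group: (pedestrians present in test images, pedestrians the model found)
    "lighter_skin": (1000, 950),  # 5.0% miss rate
    "darker_skin": (1000, 946),   # 5.4% miss rate
}

miss_rate = {
    group: (present - found) / present
    for group, (present, found) in detections.items()
}

for group, rate in miss_rate.items():
    print(f"{group}: missed {rate:.1%} of pedestrians")

# Relative gap: how much more often the model missed darker-skinned pedestrians.
gap = miss_rate["darker_skin"] / miss_rate["lighter_skin"] - 1
print(f"relative gap: {gap:.0%}")  # -> 8%
```

On numbers like these, “fails 8% more frequently” means missing 5.4% of pedestrians instead of 5.0%, which is exactly why it’s a real disparity but nothing like “blindness”.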
Source: “trust me, bro”
I assumed you had the skills to look it up yourself, but here are just a few articles I found in 30 seconds of searching.
https://www.businessinsider.com/self-driving-cars-less-likely-detect-kids-people-of-color-2023-8?amp
https://www.thetimes.co.uk/article/racist-self-driving-cars-may-not-spot-dark-faces-nsg6qtgv8
And which autonomous vehicles use those deficient systems?
For fuck’s sake, dude! Read some of the fucking articles I posted. The second one points out that Cruise in San Francisco is already using the software. I’m all done doing your fucking research for you.
How is it “my” research when you’re making the claims?
The fact that you can’t answer that question from your Gish gallop of identical articles speaks volumes. Yes, it’s already well known that many open computer-vision models with poor training sets have poor performance. No, this has not been demonstrated with the autonomous vehicles coming to market.
What’s interesting is the “trolley problem” of driving safely and following the law vs. being predictable to other drivers.
Human drivers are bad, but they’re often bad in predictable ways: they frequently break laws, but they break them in ways other drivers expect. So should AI-driven cars follow the law to the letter, even when that surprises everyone around them? Or should they break the same laws human drivers break, in the same ways, so that the human drivers aren’t surprised?
You should actually break the law to follow the flow of traffic.
Speed limits that are set too low, for example, are known to increase traffic accidents. Driving at the speed of surrounding traffic (minimizing the speed delta between your car and the others) reduces crash risk; see the sketch after this comment.
Laws that don’t respect real human behavior can be dangerous, and in the case of vehicles that danger manifests as destruction and death often enough to matter.
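As a toy illustration of the speed-delta point: the risk function below is invented (a generic U-shaped curve in the spirit of the Solomon curve), not fitted to any real crash data.

```python
# Hypothetical U-shaped risk curve: crash risk is lowest when you match
# the speed of surrounding traffic and grows with the speed delta.
# The coefficient is arbitrary; this is an illustration, not crash data.

def relative_crash_risk(own_mph: float, flow_mph: float) -> float:
    """Risk relative to a driver matching the traffic flow exactly."""
    delta = own_mph - flow_mph
    return 1.0 + 0.02 * delta**2

flow = 70  # surrounding traffic is doing 70 in a 55 zone
for own in (55, 65, 70, 80):
    print(f"{own} mph -> {relative_crash_risk(own, flow):.1f}x baseline risk")
```

On this made-up curve, obeying the posted 55 limit while traffic flows at 70 comes out riskier than speeding along with the flow, which is the whole argument.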