Quote:
Originally Posted by #-3
My understanding is that Tesla's safety issues will continually return to their insistence on not using radar because of cost. If they continue to use only cameras to control their vehicles, regardless of the quality of the technology, they will continue to have accidents with optical illusions, lens flares, dirt/debris...
Good safety systems have redundancy: radar, lidar, and cameras that will slow or shut the system down when there is conflicting information.
For what it's worth, on my newer Hyundai I drive a lot with my hands off the wheel for 20-30 minutes at a time on the highway, and the car seems fine with it, but when it sees a sharp turn or something other than a vehicle driving straight in front of me, it asks me to touch the wheel. For lane changes, it will do it itself, but I have to have my hand on the wheel the whole time, so I don't really see the point. The road safety features all feel pretty good, and pretty safe, but still often demand driver interaction. And the system does not navigate, which in itself would put it behind Tesla in the level of self-driving advancement.
The safety issues aren't just down to the sensors. End-to-end "AI" has proven to be a failure, they just haven't admitted it yet. We know generative pre-trained transformers (ChatGPT and the like) are essentially probability engines: they make an educated guess, based on their training data, at what the next word in a sentence or the next pixel in an image is most likely to be. We know they can be exceptionally good at this, but also extremely confidently wrong, particularly when the training data is thin, incorrect, or outdated (turns out Transavia does not, in fact, fly from Terminal 1 in Berlin anymore; thanks, Gemini...). The point is, it can never be perfect because reality changes, data is sparse, the world is not binary, and they can hallucinate.
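To make the "probability engine" point concrete, here's a toy sketch. The candidate words and scores are made up for illustration, not output from any real model; it just shows that the answer is whatever scores highest, whether or not it still matches reality:

[CODE]
import numpy as np

# Toy next-token prediction: hypothetical candidate answers and raw scores,
# purely illustrative -- not taken from any real model.
vocab = ["Terminal 1", "Terminal 2", "is closed", "unknown"]
logits = np.array([2.1, 1.9, 0.3, -1.0])

# Softmax turns raw scores into a probability distribution over candidates.
probs = np.exp(logits) / np.exp(logits).sum()

for word, p in zip(vocab, probs):
    print(f"{word:12s} {p:.2f}")

# The model confidently answers with the most probable token, even if the
# real world has changed since the training data was collected.
print("answer:", vocab[int(np.argmax(probs))])
[/CODE]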
What does this mean for FSD? It means it will always have a probability of being wrong. That probability can be reduced, but the FSD data shows they are stubbornly stuck at around 97% success per drive. Now, that's pretty good, but it's nowhere near safe: consider that 3 out of every 100 times you get in your vehicle, you may have a safety-critical issue. It's not good enough. This is what typical progression looks like for this technology:
https://medium.com/@h.chegini/is-gpt...y-6ab94d422fa6
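To put rough numbers on that 97% figure, here's a back-of-the-envelope sketch. It assumes each drive is an independent trial and uses illustrative rates, not anything from Tesla's own data:

[CODE]
# Rough back-of-the-envelope math: treat each drive as an independent trial.
# Rates are illustrative, not pulled from any official dataset.
def at_least_one_failure(success_rate, drives):
    return 1 - success_rate ** drives

for rate in (0.97, 0.9999, 0.9999999):
    print(f"success per drive: {rate}")
    for n in (1, 100, 365):
        risk = at_least_one_failure(rate, n)
        print(f"  over {n:4d} drives: {risk:.5%} chance of at least one bad drive")
[/CODE]

At 97% per drive, a year of daily driving is all but guaranteed to include at least one bad drive; you need several more nines before that stops being true.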
Reaching 100%, or 99.99999%, is not a place these systems will get to. Fine for a chatbot, not so fine for a self-driving car. Elon has said every car has had the hardware for self driving since hardware version 2, but then it didn't, because they switched to AI. Then he said it again, and it didn't, because they had to add bigger models and more data, which moved them up the curve. Then he said this time for sure. And now even HW4 is suspected of not being able to handle the newest models that will come from the new, more powerful training systems.
Without even getting into the silliness of training a car on the driving patterns of every Tom, Dick, and dumbass on the road (junk in, junk out), you have issues like different laws in different jurisdictions, road signs it can't yet read and interpret (playground zone hours?), and random edge cases that can never be trained for. You can see the direction is not one guaranteed to succeed.
So I'll leave you with this fun video of a Cybertruck driving itself.
Bonus short!