Quote:
Originally Posted by Fuzz
Tesla tracks two types of interventions. One is non-critical: say you get to the parking lot and decide to turn left instead of right, or the car is in the left lane and you know the right exit gets backed up, so you take over. The other is critical safety interventions, and yours would count as one even if the other driver mitigated it. So yes, the actual number of collisions would be lower, but I was using it as an example of the percentage of drives that need to be intervention-free. You need that number higher than 99.9%. Waymo recently reported that they go 17k miles per disengagement, and most of their driving is urban. Tesla's data mixes in highway Autopilot, which is basically an advanced driver-assistance tool, so you can't compare their official number: it isn't just FSD, and it includes a huge amount of highway miles. They say it's 7 million miles per collision, but I think that's fairly meaningless to compare to other companies.
We will finally get public data now that they've launched their driver-supervised ride-hailing service in California, and that license requires them to publicly report interventions. For an idea of where Tesla may be in its progress: Waymo started this in 2015 and removed the driver in 2020. I suspect Tesla won't wait as long to try it. In Austin they have different rules.
Oh for sure - I don't disagree with your thesis, just the math there.
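To put rough numbers on that math: if interventions happen on average once every D miles, the share of intervention-free trips is roughly exp(-L/D) for an average trip of L miles. Here's a quick back-of-envelope sketch; the 10-mile average trip length is purely an assumption for illustration, and only the 17k miles-per-disengagement figure comes from the quote above.

import math

# Back-of-envelope: convert "miles per disengagement" into the share of trips
# completed without an intervention. The trip length is an assumption for
# illustration; only the 17,000 figure is from the quote above.
avg_trip_miles = 10                 # assumed average ride length
miles_per_disengagement = 17_000    # Waymo figure quoted above

# Treat interventions as a Poisson process along the route:
# P(no intervention on one trip) = exp(-trip_length / miles_per_intervention)
p_clean_trip = math.exp(-avg_trip_miles / miles_per_disengagement)
print(f"Intervention-free trips: {p_clean_trip:.4%}")   # ~99.94%

# Miles per intervention needed to clear a 99.9% intervention-free target
target = 0.999
required_miles = -avg_trip_miles / math.log(target)
print(f"Miles per intervention for {target:.1%} clean trips: {required_miles:,.0f}")  # ~10,000

On those (assumed) numbers, 17k miles per disengagement already clears a 99.9% intervention-free-trip bar; the comparison only falls apart when, as the quote notes, the denominator mixes in easy highway miles.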
I think Tesla's path to solving this problem is fundamentally flawed.
I was blown away by Waymo in SF - the decisions it was able to make (which gaps to take, etc.) always seemed reasonable and not even overly conservative. If I were writing self-driving software, one way to make it not have accidents would be to never make left turns, wait for huge gaps, and just generally drive very conservatively. The Waymos never sped or broke the law, and were less aggressive than the average Uber driver, but they still kept a reasonable level of "we need to get there in a timely fashion".