Yeah, a self-driving car is going to have to be safer than a human for us to accept it. I think the difference between an individual human making a mistake and an AI making one is that a human is a single instance, while the AI system is deployed many times over. If you took 1000 human drivers, almost all of them would have a few things they just suck at, but those things would mostly differ from driver to driver, and you might have a few who are near perfect in their actions and choices. It's easier to accept one human making one mistake than to accept an AI making the same mistake over and over across the whole fleet.
So I think it necessarily needs to be held to a higher standard: that near-perfect human, or an aggregate of all the good decisions those 1000 humans make. Essentially, the AI can make the mistake one human makes once, and we'd be OK with that. But if it carries the same mistakes as all 1000 humans combined, then even if it's as good or better on the positive side, it's still a complete failure. Which I think is fair, because statistically you'll rarely run into the same failing human twice, but if the AI has a failure mode you'll be hitting it constantly, given its reach.
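Here's a toy simulation of that correlated-failure point, just to make it concrete. All the numbers are made up (1000 drivers, 100 tricky scenarios, each "mind" bad at 3 of them); it's a sketch of the idea, not a real risk model:

```python
from collections import Counter
import random

N_DRIVERS, N_SCENARIOS, N_WEAK, N_RIDES = 1000, 100, 3, 100_000

# Humans: each driver fails on their own small random set of scenarios.
human_weak = [set(random.sample(range(N_SCENARIOS), N_WEAK))
              for _ in range(N_DRIVERS)]

# AI: one shared model, so the same weaknesses ride in every car.
ai_weak = set(random.sample(range(N_SCENARIOS), N_WEAK))

human_fail, ai_fail = Counter(), Counter()
for _ in range(N_RIDES):
    scenario = random.randrange(N_SCENARIOS)
    driver = random.randrange(N_DRIVERS)
    if scenario in human_weak[driver]:
        human_fail[scenario] += 1
    if scenario in ai_weak:
        ai_fail[scenario] += 1

# Both fleets fail ~3% of rides, but human failures are smeared across
# ~100 different scenarios while AI failures pile onto the same 3.
print("human:", sum(human_fail.values()) / N_RIDES,
      "rate over", len(human_fail), "distinct failure modes")
print("AI:   ", sum(ai_fail.values()) / N_RIDES,
      "rate over", len(ai_fail), "distinct failure modes")
```

Both fleets end up with roughly the same overall failure rate, but the human failures are spread across basically every scenario while every single AI failure comes from the same 3 weaknesses. That concentration is exactly what makes the AI's mistakes feel worse even at equal statistics.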