Reminds me of a test I saw a few years ago about the decisions a self-driving car's AI would make if someone was in front of it. Basically, in a bunch of different scenarios you had to decide between mowing down a group of pedestrians or crashing the car into a brick wall and killing the occupants, so really just a variant of the trolley dilemma. Scenarios varied based on the number and composition (age, race, social class) of pedestrians, the number of people in the car, and whether the car kept going straight or swerved.
One of the criteria I used was that I would not sacrifice the occupants of the vehicle to protect people who were breaking the law crossing the street (i.e. jaywalking or crossing when the car had a green light). The automated results the test generated didn't even seem to contemplate that as a criterion you might use, so maybe I'm in the minority with that view. Instead it interpreted my answers as favouring the protection of high-income people, which didn't come into my decisions at all and was just a spurious result of the limited number of scenarios in the test.