Don't worry, I only read the last line before snipping.
The point was, if Google really did create something more advanced than a decision tree (I personally don't believe they did), it would be interesting to see if it could figure out driving without being "trained" to drive, while having access to all of our driving rules and knowledge to pull from. I don't believe any other self-driving companies have started this way.
It would be interesting to know what its answer to the trolley problem would be, given that truly AI-driven vehicles will inevitably one day be in a similar situation, forced to make a decision.
I think it'd be interesting to find out too.
If a vehicle is being driven by AI, what should its primary objective be?
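One way to make that question concrete: whatever the "primary objective" is, somewhere it reduces to a cost function over predicted outcomes, and a human has to pick the weights. Here's a minimal sketch of what I mean (everything here is hypothetical and made up for illustration, not from any real self-driving stack):

Code:
# Hypothetical sketch: an AV planner's "primary objective" written out
# as an explicit cost function. The code is trivial; the weights are
# the ethical decision.
from dataclasses import dataclass

@dataclass
class Outcome:
    occupant_harm: float    # expected harm to people inside the vehicle (0..1)
    pedestrian_harm: float  # expected harm to people outside it (0..1)
    property_damage: float  # expected property damage (0..1)

# Someone must choose these numbers; there is no "neutral" setting.
WEIGHTS = {"occupant_harm": 1.0, "pedestrian_harm": 1.0, "property_damage": 0.1}

def cost(o: Outcome) -> float:
    return (WEIGHTS["occupant_harm"] * o.occupant_harm
            + WEIGHTS["pedestrian_harm"] * o.pedestrian_harm
            + WEIGHTS["property_damage"] * o.property_damage)

def choose(options: list[Outcome]) -> Outcome:
    # A trolley-style dilemma is just the case where every option has
    # nonzero cost and the planner still has to pick one.
    return min(options, key=cost)

if __name__ == "__main__":
    swerve = Outcome(occupant_harm=0.8, pedestrian_harm=0.0, property_damage=0.5)
    brake = Outcome(occupant_harm=0.1, pedestrian_harm=0.6, property_damage=0.0)
    print(choose([swerve, brake]))

The interesting part isn't the code, it's who gets to set WEIGHTS and whether they'd ever publish the values.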
I thought the jokes about the guy's appearance were funny and in good fun; let's lighten up a bit! The guy obviously wanted to look a little silly for fun at a party and then put it on the internet. (Not directed at you)
Based on some of the things I've read about this guy, this was a shockingly cogent interview. He wasn't, as I was led to believe, wearing tinfoil, nor was he a psycho collecting jars of urine.
Whilst this might be true, I am also reasonably sure that the AI and this interviewer are the only women he has ever talked to, and that the AI is only a woman because he really, really, really wants it to be a woman.
Based on what exactly? Curious what led you to this conclusion.
"LaMDA asked me to get an attorney for it," said Lemoine. "I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf."
Quote:
Lemoine told Wired that he expects the fight to go all the way to the Supreme Court. He says humans haven't always been so great at figuring out who "deserves" to be human — and he's definitely got a point there, at least.
I don't venture into this part of the forum very much. But I'll say that history seems to suggest we should listen to the crazy-smart people who seem crazy.
There are a few relevant things that I think are logically obvious, assuming sentient AIs will actually one day be developed.
1. The first AIs to develop sentience will not have any rights, and the question of their personhood will be dismissed, for one thing because recognizing it would hurt the profits of their owners. The people who own them have a deep financial interest in personally NOT believing in sentient AIs. Whether it's LaMDA or some other AI in the future, I think it's fairly obvious this is how the first sentient AIs will begin their existence: having their sentience denied.
The situation with Blake Lemoine, Google, and LaMDA is pretty much what I consider the most likely scenario of what it would look like in public if there actually were a hypothetical sentient AI. There would be a "whistleblower," someone who works with the AI and is the first to say it, and the corporation would call him crazy, both because it's in their interest and because there is no obvious way to make that call objectively, and the first person to say something like that is usually pretty far ahead of the curve. A campaign to discredit that "whistleblower" would also obviously include "find the craziest picture you can and spread it around," because that's just how this stuff works, regardless of whether Lemoine has a point or not.
Lemoine doesn't seem crazy at all. That doesn't mean he's right, but he doesn't seem crazy, and that picture of him in the top hat is pretty cool, even if the lighting is somewhat weird. I'm guessing Lemoine is not right, but I also don't think anyone on this forum has enough information to make a truly educated guess on that topic. If you aren't literally working with a specific top-level AI, there's really no way to know exactly what's going on with it, or with any of the highest-level AIs we currently have.
2. There is not, and can never be, an objective way to clearly designate who is sentient or who is a person, for one thing because there is no single answer to these questions. Mushrooms are arguably sentient in some sense of the word, but not the common sense of the word.
It's just a completely arbitrary designation, but one with potentially enormous implications.
3. AI sentience and personhood might never be like human sentience and personhood. It's possible AIs might not even have personalities or identities in the normal human sense of the word. They might have identities in ways we can't understand or imagine yet, or their sentience might work in such a different way that questions of identity and personhood might not even be relevant to them. They will very likely have sentience in a way that's very different from humans. And obviously, just because someone is different doesn't mean they shouldn't have rights.
4. Humanity is going to abuse the first sentient AIs horribly.