Quote:
Originally Posted by Titan2
Imagine the lives of those that wanted to report and were overruled. How horrific. All the better reason to cancel my ChatGPT. (Totally not what this is about I know but is a relevant thread)
Quote:
Originally Posted by Titan2
I understand, but that assumes they are moral people. But generally I think #### those people that saw a legitimate threat to public safety and instead of giving a heads up to the people that could intervene, they chose to do nothing for .... reasons? Mostly PR, I am going to assume.
Considering the discussion from the last page about the shooter's mental health, the fact that he was known to police, the prior known threats, and this family's well-documented history within the community, suddenly blaming and shaming a corporation for not reporting concerning conversations with an AI chatbot is asinine.
Nothing.
Would.
Have.
Happened.
A report would have been largely dismissed as someone with mental health issues having an episode as soon as the identity was revealed. Unless that chat contained a very clear imminent threat, with clear planning and evidence of a mass shooting, police would simply not have acted. Fantasizing about killing people in hypothetical scenarios is simply not enough. According to the spokesperson, there was not enough there to meet the criteria (a credible and immediate serious risk).
It's for the same reasons that nothing happened despite mental health calls severe enough to cause gun confiscations, yet not enough to trigger any remediation, and for the same reason nothing came of the shooter's immersion in ultra-violent media. There was a huge number of known warning signs long before the ChatGPT angle even came into play.
And what opendoor said last page is fully correct.
Quote:
Originally Posted by opendoor
The problem is you can't easily predict which people who struggle with mental health issues are going to be violent and which aren't, and you can't place strict conditions on everyone. Something like 20% of police interactions are with someone with a mental health disorder, which represents over 1 million Canadians each year.
And considering it's also known that the shooter staged a mass shooting in a Roblox mall game, and nothing was done about that either, why is the onus solely on OpenAI for indications of violent thoughts in an AI chat?
Sure, in hindsight reporting it would have been the prudent and responsible thing to do, and the person who made the decision to overrule the report may feel some remorse today. But simply put, there is nothing here to suggest a report to police would have changed anything at all. We hear of this incident because it was revealed; we don't hear of the many others that do get reported behind the scenes.
There were plenty of reasons to enact preventative and protective measures already, we simply chose not to as a society. One more would not have changed the outcome.
OpenAI is an easy scapegoat considering all the other known transgressions and clear warning signs (so far). I highly doubt anything would have changed "if only we'd had those damn AI chats, despite ignoring everything else that was a huge red flag!"
Frankly, I think we will soon learn of many more warning signs that were far more flagrant and well known, yet went unaddressed.