Remember the last time Musk tried to get Grok to say the things he wanted it to say, and it ended up being basically all the bot could talk about? Yeah, I guess he hasn't learned the lesson there.
Quote:
The self-styled “maverick” chatbot made headlines in mid-May after it began to issue unprompted responses on “white genocide” in South Africa. Addressing the controversy, xAI posted on X that the responses were caused by an “unauthorized modification” that “violated xAI’s internal policies and core values.”
|
https://tech.co/news/list-ai-failures-mistakes-errors
Presumably those core values amount to "be less obvious" when it comes to deception.