Sure, I'm just saying that when it's sold as AI, that is vastly overselling it. I wish we could have kept the term AI for actual AI and used something else for LLMs.
I wonder how an LLM optimized for chess would work. Chess involves thinking a few moves ahead, but from what I understand, LLMs are more short-range (next-item) predictors. We also know they have very little spatial reasoning ability, which seems to make a chess board a challenge. But given their ability to handle millions of tokens, perhaps one could hold all the possible game states, choose the right one for each situation, and fundamentally "solve" chess.
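To make that lookahead distinction concrete, here's a minimal sketch (using the third-party python-chess library; the piece values and search depth are just illustrative assumptions) contrasting a greedy one-ply "next-item predictor" with a shallow minimax that actually thinks a few moves ahead:

    import chess  # third-party: pip install python-chess

    # Crude material values; real engines use far richer evaluations.
    VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def material(board):
        """Material balance from White's perspective."""
        total = 0
        for piece in board.piece_map().values():
            v = VALUES[piece.piece_type]
            total += v if piece.color == chess.WHITE else -v
        return total

    def greedy_move(board):
        """'Next-item predictor' style: best-looking move one ply out."""
        sign = 1 if board.turn == chess.WHITE else -1
        def one_ply(move):
            board.push(move)
            score = sign * material(board)
            board.pop()
            return score
        return max(board.legal_moves, key=one_ply)

    def minimax_move(board, depth=3):
        """Lookahead: search `depth` plies before committing to a move."""
        def search(d):
            if d == 0 or board.is_game_over():
                return material(board)
            results = []
            for move in board.legal_moves:
                board.push(move)
                results.append(search(d - 1))
                board.pop()
            return max(results) if board.turn == chess.WHITE else min(results)
        sign = 1 if board.turn == chess.WHITE else -1
        def score(move):
            board.push(move)
            s = sign * search(depth - 1)
            board.pop()
            return s
        return max(board.legal_moves, key=score)

The greedy picker is roughly what pure next-move prediction looks like; the minimax version does the explicit multi-ply search that LLMs don't natively perform.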
I was actually more interested in the emotional responses it had, though. Do we want "AI" that makes excuses for its failures, even after the excuse is proven to be BS? That seems to reduce trust.