Old 12-16-2024, 12:54 PM   #766
DoubleF

Quote:
Originally Posted by PepsiFree
I don’t disagree with what you’re saying here, but the thing people kind of ignore in all this is that people aren’t without mistakes either. People are prone to outdated thinking, simple mistakes, misguided advice, or narrow-mindedness. Many AIs, right now, are at a human level. Call it a junior position: knows enough to help, but makes some mistakes. You wouldn’t hire a junior whatever and just let them run off with zero oversight and never check their work. And, just like those people, the more you train the AI on what’s right and what’s wrong, the more you improve it.

Someone mentioned AI not being able to give advice that’s more about “intuition” than basic knowledge but, you know, it will. ChatGPT, right now, can learn a lot about you and how you think and make decisions, and functionally provide therapeutic or “life advice” based on that knowledge. This isn’t just analyzing financial algorithms and charting the best course based on that, it’s pulling from history, determining what constituted good advice and what constituted bad advice, and charting the most advantageous path based on your goals.

A lot of the things we think are uniquely human are pretty easy to learn. And a lot of jobs people think are special or take some unique human characteristic to perform well really aren’t.

And that’s just talking about the best of the best. Let’s be honest, people who pretend they’re some experts at their job might just be average in the grand scheme of things. And average is not going to be difficult to beat when AI is training on the best.
Yeah, but your post's heavily emphasized point was that these products can basically be deployed at this stage with barely any oversight required. I don't disagree with the rest of your post, but I heavily disagreed on this specific point. As of a few months ago, these programs were still not ready. They're close, but I don't know how close they are to being considered primarily reliable rather than merely secondarily reliable.

The biggest difference is that the AI will just spit out an answer without knowing the question was incorrectly posed. As much as there is a ton of human incompetence out there, either it's easy to identify the human incompetence, or the humans themselves will declare their incompetence and refuse to do the work. AI will not do this. It will always act like it is spitting out a really damn good answer, even when in reality it should say, "Not enough information to conclude" instead.
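To make that abstention point concrete, here's a toy sketch (entirely hypothetical, not how any of these products actually work internally): a wrapper that only returns the model's top answer when its confidence clears a threshold, and otherwise explicitly abstains. The function name, the threshold value, and the probability inputs are all made up for illustration.

```python
def answer_with_abstention(probs, threshold=0.8):
    """Return the most probable answer only if its probability clears
    the threshold; otherwise abstain instead of guessing confidently.
    `probs` maps candidate answers to probabilities (a toy stand-in for
    whatever confidence signal a real system might expose)."""
    best = max(probs, key=probs.get)
    if probs[best] < threshold:
        return "Not enough information to conclude"
    return best

# A confident distribution yields an answer...
print(answer_with_abstention({"yes": 0.95, "no": 0.05}))
# ...while a near-uniform one triggers the abstention path.
print(answer_with_abstention({"yes": 0.55, "no": 0.45}))
```

The complaint in the post is essentially that today's chatbots behave as if the threshold were zero: every query gets a fluent answer, including ill-posed ones where the honest output would be the abstention branch.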