12-14-2024, 01:36 PM   #735
curves2000
First Line Centre
 
Join Date: Dec 2013
Location: Calgary, Canada

Quote:
Originally Posted by PepsiFree
It’s also amusing because people say “I used AI” as though every AI is the same and if you’ve used one you’ve used them all. Who is saying it’s perfect?

As I said, right now it depends a lot on the training, oversight and inputs. But you’re kidding yourself if you don’t think there’s an AI out there that can either do your job better than you can or is being trained to do exactly that, and will reach a point within very short order where the quality of inputs doesn’t matter. The next step in your career is managing the AI that does your job, until it doesn’t need you to manage it. It’s a sad reality, but it is one. You’re living with your head in the sand if you’re laughing at your Roomba thinking you can’t easily be replaced.

For the AI systems we’re talking about (without any idea of what you used), the more they’re used the more they learn. Everything they do right teaches them what’s right, everything they do wrong teaches them what’s wrong.

Anyways, that’s far enough down the AI rabbit hole. The point is that “handle this yourself” is not exceedingly terrible advice, it’s the opposite. Within five years at current progress people will be laughing at people who still use financial advisors. Your only hope is government intervention.

I am skeptical about how far AI will progress from a regulatory perspective, and whether regulation will keep up with the changes. We know a lot of the tech companies have really moved the needle with AI on their own platforms, but that is a business with virtually no guardrails. It is pretty much a free-for-all for Silicon Valley in that regard.

How does the AI explosion move into protected and regulated industries and day-to-day life? The technology already exists for jumbo jets to fly and land themselves, but we still have pilots for obvious reasons. The 737 MAX 8 crashes resulted from flight-control software acting on faulty sensor data. The fact that the two crashes happened in poorer countries mitigated the fallout, but if two Boeing planes had come down in downtown NYC and LA the result would have been very different.

How does the financial community deal with the obvious tech, coding and algorithm issues that will come up? Would the firms be 100% responsible for billions in client losses from poorly executed trades caused by a coding issue at some California-based company?

Although not really AI related, I remember having debates with people about how Bitcoin was going to be a hedge against equity markets, inflation, bank fees, losses, taxes, government interference/trackability and more. That really hasn't happened after nearly 15 years; Bitcoin effectively trades like a stock.

Certain technologies and regulations do need to move with the ever-changing landscape, and that may apply to AI here very soon, but in some businesses it will not happen as fast as people think. Ten years ago everybody was convinced we would all be in driverless cars by 2025. Email was supposed to eliminate paper. Instant messaging was supposed to eliminate email. During Covid the thought of going back into a physical office to type on a screen and answer emails seemed outrageous, yet hundreds of millions of people are being forced to do exactly that again two years later.

Everybody is jumping onto wind and solar tech as the way of the future; odds are we will end up with mini nuclear reactors powering our cities instead.

Everybody thinks of Tesla as some big AI company, and that may be true, but the truth is Tesla cars are also of very poor build quality. They are moving piles of high-tech, problematic headaches.

We have moved forward a lot as a society in the last 50-100 years but not to the degree that our ancestors did during similar time frames.

Off-topic post, so apologies to all for that.