11-13-2025, 11:07 AM
#21
First Line Centre
Join Date: Feb 2002
Location: Normally, my desk
I use it every day. It should be a worry for people. It saves me a lot of time, and the better I get, the more time I save.
I think where the caution lies is that I've been doing my job for 25 years, and I find mistakes in the outputs all the time. Someone just learning the ropes might be killer at getting AI to generate a solid-looking outcome, but they don't know what they don't know, and those mistakes will filter through.
Just wanted to add. I'm as far from an IT nerd as you can get.
Last edited by Leeman4Gilmour; 11-13-2025 at 11:09 AM.
The Following 4 Users Say Thank You to Leeman4Gilmour For This Useful Post:
11-13-2025, 11:08 AM
#22
Participant
Quote:
Originally Posted by AFireInside
I do love that people who use AI say things like "learn how to prompt really quickly" as if that's going to save their job. That's not a difficult skill to learn, and if that's what you're being paid for, you're going to take a massive pay cut or be replaced.
Overall AI is going to be very bad for humanity, there will be some positives for sure, but I don't see how it will be a good thing in the grand scheme of things.
|
I’ll preface this by saying yes, it’s a bad thing and is going to do a lot of damage. But given that individually there’s not a whole lot we can do to change that, the question becomes how do you leverage it to increase your own value?
Like it or not, learning how to prompt and leverage AI for those purposes gives me an advantage over you if we're doing the same job and you aren't doing those things, because I'm no longer trying to save my job; I'm changing my job while the job we were both doing is being replaced.
If you're at the executive level, or you're mid-to-high level with 10+ years of experience, you're more likely to benefit from AI in the near term than juniors and people just getting into industries that AI works well in. Experienced copywriters and designers are going to become editors, creative directors, and AI managers. Inexperienced ones are going to struggle to find work. Teams are going to get smaller, and outputs are going to be expected to get better, more numerous, and faster. Yeah, if you expect you can produce the same and do less work, you're just going to get paid less. So don't do that.
It’s not just about learning how to prompt. It’s about understanding and knowing what makes high value outputs.
So the question for people anxious about it is: how do you position yourself at the top instead of the bottom?
The Following 3 Users Say Thank You to PepsiFree For This Useful Post:
11-13-2025, 11:37 AM
#23
Franchise Player
Join Date: Feb 2010
Location: Park Hyatt Tokyo
Quote:
Originally Posted by bluejays
Sure, and I'm sure you're right. But it's getting better at such a fast pace. There will be hiccups along the way but it's here.
How it starts and how it progresses is interesting. Let's use the example of a doctor's office with 10 doctors.
Today - let's use it for listening to the patient/doctor conversation and let it spit out a summary for charting the conversation and next steps. Saves an hour per day per doctor (10 hours saved per day, 50 hours saved per week). With that hour saved, we don't need 10 doctors; instead we just need 9. No need to backfill when a doctor leaves.
2 years from now - let's introduce automated triage for patients waiting in the waiting room to describe their symptoms, and have the AI give an initial diagnosis/next-steps recommendation (to the doctor only). The doctor sees the patient with information in hand and a potential diagnosis already lined up. Saves 30 minutes more per day.
5 years from now - let's combine both the triaging and the in-room conversation to the point where the doctor signs off on the AI diagnosis/next steps. Saves 4 hours a day per doctor. We now need only 4 doctors, and no student to take the initial information.
Anyway, that's just my rough take on examples of how these things progress. To some it's great in the sense that you'll get your diagnosis faster and more efficiently, but for society as a whole there are so many jobs no longer needed that the disparity climbs between the haves and have-nots. That's my problem with this. A lot of people will eventually be out of jobs.
|
5 years from now is probably 2.5 years from now. 5 years from now, you'll do all of that through an app and the AI will provide diagnosis (with outsourced MD "verification"). Blood work and similar in person physical tests will be all you'll see a person for, that is until you go to a faceless clinic, scan a QR code and stick your arm in a robot.
The Following User Says Thank You to topfiverecords For This Useful Post:
11-13-2025, 11:44 AM
#24
Franchise Player
Join Date: Feb 2006
Location: Calgary
What I can't wrap my head around is how this will actually work to the benefit of society as a whole in the future.
Money, to me, is ultimately turning human productivity into something tangible that can be exchanged for other things that human productivity is producing or servicing. Once you wipe out human productivity, how do big businesses generate wealth from the masses? If you have massive unemployment in the future, and only a select few with jobs that are mostly maintaining AI, how do you squeeze money out of the commonfolk when they aren't making any money? UBI still has to come from somewhere, and needs some sort of taxation to work.
Once you have a massive number of people not working, I feel like that's when society is going to collapse. People need a purpose in life to motivate them. Even if it's something simple like data entry, it at least creates tangible interaction with other people and gives the person a reason to wake up and do things. Once people have a ton of free time on their hands, do they consume even more slop and conspiracy stuff that pushes them over the edge? I see lots and lots of mass shootings and killings in our future if that's the case, and the mental illness of society in general will be on a level we've never seen before.
At some point, the government needs to start stepping in and limiting this, or society as a whole is destined to fail IMO.
Last edited by The Yen Man; 11-13-2025 at 11:48 AM.
The Following 2 Users Say Thank You to The Yen Man For This Useful Post:
11-13-2025, 12:00 PM
#25
Powerplay Quarterback
Join Date: Jan 2021
Location: On the cusp
My kid consults, and all of her projects are now AI-related. She is on one where the client wants to improve their HR process. Currently, a human looks at EVERY. SINGLE. APPLICATION. This is mind-numbingly stupid. In the user group meetings where requirements are set, people go very quiet, very fast when they realize most of their day would be replaced by the tech. I won't say what industry, but it is probably the last one you would guess. Meaning it has encroached into spaces people may think it never goes.
I seriously dispute the 40% error rate. In my experience, it has been closer to 5 to maybe 10%. And it doesn't matter anyway, because it is not going to make the same mistake, and the iterations are instantaneous. Its improvement over a day will be better than a person's in a year (making up numbers to make the point). Looking at bluejays' scenario, I think he is bang on, but put it at months, not years. The logjam will be the implementation and acceptance of it. Intake, triage, history taking (accompanied by the patient's entire medical record), diagnosis, and some treatment can all be handled by a machine, once the error rate drops to an acceptable level. And the acceptable level will be achievable because the machine can run tens of millions of training runs to identify areas of mistake and find fixes. I know in my field there can be problems with hallucinations. It can make up entire case law. Well, that is simple enough to cross-check with public records to eliminate. It just hasn't been done yet, but I bet/assume the legal-focused apps will have a feature like that, plus the ability to retrieve the case, summarize the salient points with specific page references, explain how it applies to the matter at hand, and prepare a table of authorities. Instantly. I can't imagine how scared solicitors could/should be. Or they can escape the drudgery of drafting and focus on face-to-face negotiations, which are more interesting and more challenging for AI.
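That cross-check is simple enough to sketch. Everything below is made up for illustration; the registry is a plain set standing in for a query against a public database like CanLII:

```python
# Hypothetical sketch of the cross-check idea: flag any cited case that can't
# be verified against a public registry. The registry here is a plain set
# standing in for a real database lookup.

def unverified_citations(citations, registry):
    """Return cited cases that do not appear in the public registry."""
    return [c for c in citations if c not in registry]

# Made-up registry and draft citations for the example.
registry = {
    "R v Smith, 2019 ABCA 12",
    "Jones v Doe, 2021 ONSC 455",
}
draft = ["R v Smith, 2019 ABCA 12", "Acme v Widget, 2020 SCC 99"]

flagged = unverified_citations(draft, registry)
print(flagged)  # the invented Acme case is the only one flagged
```

A real version would query the database by citation string; the filtering logic is the same.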
Anecdotally, my kid is looking for a car. Facebook Marketplace's search function sucks super big balls. So I scripted a Python thingy to scrape Marketplace for the parameters I am looking for and put the results in a spreadsheet so I can search the sheet for the interesting cars. It also suggested a weighted ranking based on year, kms, and price.
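That kind of weighted ranking looks something like this sketch; the weights, field names, and normalization caps are all made up for the example:

```python
# Illustrative weighted ranking: newer year, fewer kms, and lower price each
# push the score up. All weights and caps here are invented.

def score(listing, w_year=0.5, w_kms=0.3, w_price=0.2):
    """Weighted 0-1 score; higher means a more interesting listing."""
    year_s = (listing["year"] - 2000) / 25                 # newer is better
    kms_s = 1 - min(listing["kms"], 300_000) / 300_000     # fewer kms is better
    price_s = 1 - min(listing["price"], 50_000) / 50_000   # cheaper is better
    return w_year * year_s + w_kms * kms_s + w_price * price_s

cars = [
    {"name": "A", "year": 2018, "kms": 120_000, "price": 15_000},
    {"name": "B", "year": 2012, "kms": 220_000, "price": 8_000},
]
ranked = sorted(cars, key=score, reverse=True)
print([c["name"] for c in ranked])  # the newer, lower-km car ranks first
```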
I also was pissed that I seemed to be getting lower internet speeds when I ran a speed check. I looked for a program to run a check every x minutes and log the results, but couldn't really find what I was looking for. So I asked ChatGPT. 30 minutes later I had a script to run a speed check on my ISP every 30 minutes, track the results in a spreadsheet, and create a graph of the results. I have opened my terminal window 3 times in 56 years and now I am 'coding'? WTF. You can't tell me this isn't world-changing.
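The skeleton of that kind of logger looks roughly like this. The actual speed measurement is stubbed out with a fake value (a real script might call the speedtest-cli library); the point is the run-on-an-interval, log-to-CSV structure:

```python
# Sketch of an interval logger. The measurement is a stub here; swap in a real
# speed test in practice.
import csv
import io
from datetime import datetime

def log_samples(measure, writer, n_samples):
    """Call `measure` n_samples times, appending timestamped rows to a CSV."""
    for _ in range(n_samples):
        writer.writerow([datetime.now().isoformat(), measure()])
        # a real script would time.sleep(30 * 60) here between samples

buf = io.StringIO()          # stands in for a .csv file on disk
w = csv.writer(buf)
w.writerow(["timestamp", "mbps"])
log_samples(lambda: 142.5, w, 3)   # fake measurement for the demo
print(buf.getvalue())
```

Graphing the resulting CSV is then a one-liner in a spreadsheet app.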
Change is constant. Think of when Henry Ford figured out how to get a car in every 'garage'. I forget the numbers, but once the first car rolled out there were 100,000 horses and 1 car. Within 5 years there were 100 horses and 100,000 cars. This is what we are on the verge of. Think of all the horse-related industries that essentially disappeared. Also, think of the need for mechanics. And the need for parts for those mechanics. Then the tools for DIYers to fix their cars. Gas stations, tire manufacturers, distribution, sales, installation, and repair. It gets quite mind-bottling quite fast. I think we are on the precipice of this new world. What we don't know is what industries AI will create that we haven't even thought of. Maybe none. Maybe an entirely different way of living. Who knows? I want to be optimistic but totally understand the concern. If half the people in Calgary don't have a job, the value of my house will plummet. My stocks will likely plummet as well. Could get dystopian. I guess we'll see.
__________________
E=NG
Last edited by Titan2; 11-13-2025 at 12:05 PM.
The Following 3 Users Say Thank You to Titan2 For This Useful Post:
11-13-2025, 12:01 PM
#26
First Line Centre
Join Date: Feb 2002
Location: Normally, my desk
Quote:
Originally Posted by topfiverecords
5 years from now is probably 2.5 years from now. 5 years from now, you'll do all of that through an app and the AI will provide diagnosis (with outsourced MD "verification"). Blood work and similar in person physical tests will be all you'll see a person for, that is until you go to a faceless clinic, scan a QR code and stick your arm in a robot.
|
I can see this going by the wayside as well. Real-time monitoring of blood sugar levels for diabetics already exists. The next step is at-home blood tests as often as a person wants. I didn't Google it, but it might already be here.
11-13-2025, 12:02 PM
#27
Franchise Player
Quote:
Originally Posted by The Yen Man
What I can't wrap my head around is how this will actually work to the benefit of society as a whole in the future.
Money to me, is ultimately turning human productivity into something tangible that can be exchanged for other things that human productivity is producing / servicing. Once you wipe out human productivity, how do big businesses generate wealth from the masses? If you have massive unemployment in the future, and only a select few with jobs that are mostly maintaining AI, how do you squeeze money out of the commonfolk when they aren't making any money? UBI still has to come from somewhere, and needs some sort of taxation to work.
Once you have a massive amount of people not working, I feel like that's when society is going to collapse. People need a purpose in life to motivate them. Even if it's something simple like data entry, it at least creates tangible interaction with other people, gives the person a reason to wake up and do things. Once you have a ton of free time on your hands, do people consume even more slop / conspiracy stuff and push them off the edge? I see lots and lots of mass shootings and killings in our future if that's the case, and mental illnesses of society in general will be on a level we've never seen before.
At some point, the government needs to start stepping in and limiting this, or society as a whole is destined to fail IMO.
|
Severance!
The Following User Says Thank You to malcolmk14 For This Useful Post:
11-13-2025, 12:06 PM
#28
In Your MCP
Join Date: Apr 2004
Location: Watching Hot Dog Hans
Quote:
Originally Posted by topfiverecords
5 years from now is probably 2.5 years from now. 5 years from now, you'll do all of that through an app and the AI will provide diagnosis (with outsourced MD "verification"). Blood work and similar in person physical tests will be all you'll see a person for, that is until you go to a faceless clinic, scan a QR code and stick your arm in a robot.
|
My G/F works in healthcare at the management level, and although it's INCREDIBLY useful in some areas, there's a very large confidentiality component they don't quite know how to solve. Sure, it's great for all the things you mentioned (it's incredibly useful for creating business cases based on stats, creating bulletins, etc.), but they're not so keen on entering patient data to assist in diagnosis because of the identity-theft risk.
I work more on the engineering side of projects up in the oilsands, and it's cut my workday in half, if not more. It's scary how accurate (and fast) it is for project costing, quoting, creating work scopes, and projecting timelines. It still needs someone to check its outputs, but that takes a fraction of the time it would take to actually write them. I can review financials in seconds and create budgets and KPIs in a matter of minutes vs. weeks.
Our entire lives are going to be run by AI in the future. It's scary.
11-13-2025, 12:07 PM
#29
Powerplay Quarterback
Join Date: Jan 2021
Location: On the cusp
Quote:
Originally Posted by Leeman4Gilmour
I can see this going by the wayside as well. Real-time monitoring of blood sugar levels for diabetics already exists. The next step is at-home blood tests as often as a person wants. I didn't Google it, but it might already be here.
|
I already do this with my Contour. A little prick on my finger and the machine spits out my blood sugar. Connect that machine to the net and AI could run hundreds of tests, I assume.
__________________
E=NG
11-13-2025, 12:08 PM
#30
The new goggles also do nothing.
Join Date: Oct 2001
Location: Calgary
Real-time monitors aren't super accurate. Generally, what the real-time monitor calculates my A1C to be and what it actually is are rarely the same, or even close.
I mean, if I sleep on my arm with the sensor, it's generally really off for the night.
There have been times when it's telling me I'm about to die from low blood sugar, so I quickly eat something, but it keeps crashing down... I finally took a real reading with a reader and a strip with blood, and nope, the sensor was just reading 3.0 instead of 6.5...
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
11-13-2025, 12:10 PM
#31
Powerplay Quarterback
Join Date: Jan 2021
Location: On the cusp
Quote:
Originally Posted by Tron_fdc
My G/F works in healthcare at the management level, and although it's INCREDIBLY useful in some areas, there's a very large confidentiality component they don't quite know how to solve. Sure, it's great for all the things you mentioned (it's incredibly useful for creating business cases based on stats, creating bulletins, etc.), but they're not so keen on entering patient data to assist in diagnosis because of the identity-theft risk.
I work more on the engineering side of projects up in the oilsands, and it's cut my workday in half, if not more. It's scary how accurate (and fast) it is for project costing, quoting, creating work scopes, and projecting timelines. It still needs someone to check its outputs, but that takes a fraction of the time it would take to actually write them. I can review financials in seconds and create budgets and KPIs in a matter of minutes vs. weeks.
Our entire lives are going to be run by AI in the future. It's scary.
|
I totally agree, but I think the privacy issue is a red herring created by humans as a reason to say no. It would be simple to assign a code to the person, encrypt the records, utilize blockchain for verification, etc. That industry is just scared to do it, perhaps for good reason, but banking and crypto have already solved this problem, realistically.
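The "assign a code to the person" half of that is easy to sketch with a keyed hash from the standard library. This covers only pseudonymization, not encrypting the record itself (that would use a proper crypto library), and the key and IDs below are invented:

```python
# Minimal pseudonymization sketch: a keyed HMAC turns a patient identifier
# into a stable code that can't be reversed without the key. Key and IDs are
# made up for the example.
import hashlib
import hmac

SECRET_KEY = b"clinic-secret-key"  # hypothetical; would live in a vault/HSM

def pseudonym(patient_id: str) -> str:
    """Stable, non-reversible code for a patient identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

a = pseudonym("AB-123456")
b = pseudonym("AB-123456")
c = pseudonym("CD-999999")
print(a == b, a == c)  # same patient -> same code; different patient -> different
```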
__________________
E=NG
The Following 2 Users Say Thank You to Titan2 For This Useful Post:
11-13-2025, 01:17 PM
#32
Franchise Player
Classic hype cycle. People projecting it will do everything (peak-hype), then finding out that it can't, then steady adoption as practical uses are established.
We are definitely at peak hype, but I think the trough will be less pronounced because there are already many good use cases. The steady adoption has already started IMO, taking incremental steps as an assistant/force multiplier/tool. It's not a silver bullet; more like a train on a slope or a snowball, gradually picking up steam until it really can do most things well.
The Following User Says Thank You to edslunch For This Useful Post:
11-13-2025, 01:27 PM
#33
First Line Centre
Quote:
Originally Posted by edslunch
Classic hype cycle. People projecting it will do everything (peak-hype), then finding out that it can't, then steady adoption as practical uses are established.
We are definitely at peak hype, but I think the trough will be less pronounced because there are already many good use cases. The steady adoption has already started IMO, taking incremental steps as an assistant/force multiplier/tool. It's not a silver bullet; more like a train on a slope or a snowball, gradually picking up steam until it really can do most things well.
|
I think similar things were said about mobile phones and the internet and they have become almost a utility. It's possible AI follows that curve but I don't think it's a foregone conclusion.
11-13-2025, 01:29 PM
#34
First Line Centre
Join Date: Feb 2002
Location: Normally, my desk
Quote:
Originally Posted by edslunch
Classic hype cycle. People projecting it will do everything (peak-hype), then finding out that it can't, then steady adoption as practical uses are established.
We are definitely at peak hype, but I think the trough will be less pronounced because there are already many good use cases. The steady adoption has already started IMO, taking incremental steps as an assistant/force multiplier/tool. It's not a silver bullet; more like a train on a slope or a snowball, gradually picking up steam until it really can do most things well.
|
I asked Gemini. It agrees with you, but suggests we are already sliding into the trough of disillusionment. It also suggests the improvements are so rapid that AI is building a ladder out of the trough at the same time as we are entering it, which agrees with your belief that the trough will be less pronounced.
The Following User Says Thank You to Leeman4Gilmour For This Useful Post:
11-13-2025, 01:32 PM
#35
Appealing my suspension
Join Date: Sep 2002
Location: Just outside Enemy Lines
It's not wiping my arse and making my sandwich yet.
I'm in the field where it can completely replace me. That said, it's someone with my experience who will have to make it do that. In fact I'm working on a massive AI project right now...so in a way I'm basically killing myself off.
This all said, we have a massive population bust coming where we will not have the people to provide all the required services. So AI is sort of essential for doing some of the tasks people currently do, so the people we do have can do the more essential stuff.
My worry is that it's going to create massive wealth chasms and there will be no way for governments to get the revenue from it that they need to keep funding things and, god forbid, implement some type of UBI. So it's probably going to cause some type of Leninist revolution where millions of schmucks like me armed with our crappy guns, pitchforks, and knives need to go after Peter Thiel's private security in hopes of stringing him up, taking all his money, and destroying his machines. Should be fun!
__________________
"Some guys like old balls"
Patriots QB Tom Brady
11-13-2025, 01:37 PM
#36
Franchise Player
Quote:
Originally Posted by AFireInside
I do love that people who use AI say things like "learn how to prompt really quickly" as if that's going to save their job. That's not a difficult skill to learn, and if that's what you're being paid for, you're going to take a massive pay cut or be replaced.
Overall AI is going to be very bad for humanity, there will be some positives for sure, but I don't see how it will be a good thing in the grand scheme of things.
|
The single biggest failure point in software (and other) projects from the beginning of time hasn't been coding quality; it's been requirements management. A.K.A. prompt engineering with AI. While I agree that just saying the average person can save their job by learning to prompt better is naive, the ability to understand business needs, turn them into solid requirements (with the help of AI!), and turn those into prompts (with the help of AI) so that AI can generate the right stuff will be a hugely valuable skill for a smallish number of people.
The Following User Says Thank You to edslunch For This Useful Post:
11-13-2025, 01:38 PM
#37
Franchise Player
Quote:
Originally Posted by Leeman4Gilmour
I asked Gemini. It agrees with you, but suggests we are already sliding into the trough of disillusionment. It also suggests the improvements are so rapid that AI is building a ladder out of the trough at the same time as we are entering it, which agrees with your belief that the trough will be less pronounced.
|
It's like all of those phases are happening simultaneously.
11-13-2025, 01:42 PM
#38
Franchise Player
Join Date: Mar 2015
Location: Pickle Jar Lake
Quote:
Originally Posted by bluejays
Sure, and I'm sure you're right. But it's getting better at such a fast pace. There will be hiccups along the way but it's here.
How it starts and how it progresses is interesting. Let's use the example of a doctor's office with 10 doctors.
Today - let's use it for listening to the patient/doctor conversation and let it spit out a summary for charting the conversation and next steps. Saves an hour per day per doctor (10 hours saved per day, 50 hours saved per week). With that hour saved, we don't need 10 doctors; instead we just need 9. No need to backfill when a doctor leaves.
2 years from now - let's introduce automated triage for patients waiting in the waiting room to describe their symptoms, and have the AI give an initial diagnosis/next-steps recommendation (to the doctor only). The doctor sees the patient with information in hand and a potential diagnosis already lined up. Saves 30 minutes more per day.
5 years from now - let's combine both the triaging and the in-room conversation to the point where the doctor signs off on the AI diagnosis/next steps. Saves 4 hours a day per doctor. We now need only 4 doctors, and no student to take the initial information.
Anyway, that's just my rough take on examples of how these things progress. To some it's great in the sense that you'll get your diagnosis faster and more efficiently, but for society as a whole there are so many jobs no longer needed that the disparity climbs between the haves and have-nots. That's my problem with this. A lot of people will eventually be out of jobs.
|
I'm not sure I can agree with those assumptions.
Today - we are short doctors, any extra time should be used to see more patients.
2 years - I don't see that happening. You are essentially turning over diagnosis to an AI with a high error rate. The human temptation here is to just grab it, agree, and move on. The big problem with that is the medical field is full of outliers, and you need to be able to recognize those for what they are. Sometimes it's subtle. It can be revealed through the patient interview process, but doctors will be far more likely to sign off on the obvious provided answer and will miss the real diagnosis. Maybe chatting with an AI can get to the same point, but I'm skeptical.
5 years - this is the trap of assuming rates of improvement keep increasing until perfection. But that's not how it works. It's an S-curve, and it's just about finding where that limit is. From existing progress, and examples like Tesla FSD with vast amounts of money and processing thrown at it, it's becoming more and more obvious that we have flattened out on GPTs.
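The S-curve point can be made concrete with a toy logistic curve (all parameters invented for illustration): near the middle, each step brings big gains; near the ceiling, the same-sized step adds almost nothing, even though nothing "broke".

```python
# Toy logistic (S-curve) illustrating the flattening argument: per-step gains
# shrink as capability approaches the ceiling. All numbers are invented.
import math

def logistic(t, ceiling=100.0, k=1.0, midpoint=0.0):
    """Capability at time t: looks exponential early, flat near the ceiling."""
    return ceiling / (1 + math.exp(-k * (t - midpoint)))

mid_gain = logistic(1) - logistic(0)   # step taken near the steep middle
late_gain = logistic(6) - logistic(5)  # same-sized step near the ceiling
print(round(mid_gain, 1), round(late_gain, 1))
```

Extrapolating from the steep middle is exactly the mistake the post describes.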
11-13-2025, 01:42 PM
#39
Franchise Player
Quote:
Originally Posted by Sylvanfan
It's not wiping my arse and making my sandwich yet.
I'm in the field where it can completely replace me. That said, it's someone with my experience who will have to make it do that. In fact I'm working on a massive AI project right now...so in a way I'm basically killing myself off.
This all said, we have a massive population bust coming where we will not have the people to provide all the required services. So AI is sort of essential for doing some of the tasks people currently do, so the people we do have can do the more essential stuff.
My worry is that it's going to create massive wealth chasms and there will be no way for governments to get the revenue from it that they need to keep funding things and, god forbid, implement some type of UBI. So it's probably going to cause some type of Leninist revolution where millions of schmucks like me armed with our crappy guns, pitchforks, and knives need to go after Peter Thiel's private security in hopes of stringing him up, taking all his money, and destroying his machines. Should be fun!
|
That's my fear. The people most likely to be downsized by AI are professionals and knowledge workers - some of the most highly paid workers. The jobs that are harder to replace generally involve physical activity, human interaction, creativity....which are typically not highly valued or paid in our society or are paid by society via taxes. All these companies adopting AI to reduce their workforces are ultimately killing off their customers.
11-13-2025, 01:52 PM
#40
Franchise Player
Join Date: Mar 2015
Location: Pickle Jar Lake
Quote:
Originally Posted by Titan2
My kid consults, and all of her projects are now AI-related. She is on one where the client wants to improve their HR process. Currently, a human looks at EVERY. SINGLE. APPLICATION. This is mind-numbingly stupid. In the user group meetings where requirements are set, people go very quiet, very fast when they realize most of their day would be replaced by the tech. I won't say what industry, but it is probably the last one you would guess. Meaning it has encroached into spaces people may think it never goes.
I seriously dispute the 40% error rate. In my experience, it has been closer to 5 to maybe 10%. And it doesn't matter anyway, because it is not going to make the same mistake, and the iterations are instantaneous. Its improvement over a day will be better than a person's in a year (making up numbers to make the point). Looking at bluejays' scenario, I think he is bang on, but put it at months, not years. The logjam will be the implementation and acceptance of it. Intake, triage, history taking (accompanied by the patient's entire medical record), diagnosis, and some treatment can all be handled by a machine, once the error rate drops to an acceptable level. And the acceptable level will be achievable because the machine can run tens of millions of training runs to identify areas of mistake and find fixes. I know in my field there can be problems with hallucinations. It can make up entire case law. Well, that is simple enough to cross-check with public records to eliminate. It just hasn't been done yet, but I bet/assume the legal-focused apps will have a feature like that, plus the ability to retrieve the case, summarize the salient points with specific page references, explain how it applies to the matter at hand, and prepare a table of authorities. Instantly. I can't imagine how scared solicitors could/should be. Or they can escape the drudgery of drafting and focus on face-to-face negotiations, which are more interesting and more challenging for AI.
Anecdotally, my kid is looking for a car. Facebook Marketplace's search function sucks super big balls. So I scripted a Python thingy to scrape Marketplace for the parameters I am looking for and put the results in a spreadsheet so I can search the sheet for the interesting cars. It also suggested a weighted ranking based on year, kms, and price.
I also was pissed that I seemed to be getting lower internet speeds when I ran a speed check. I looked for a program to run a check every x minutes and log the results, but couldn't really find what I was looking for. So I asked ChatGPT. 30 minutes later I had a script to run a speed check on my ISP every 30 minutes, track the results in a spreadsheet, and create a graph of the results. I have opened my terminal window 3 times in 56 years and now I am 'coding'? WTF. You can't tell me this isn't world-changing.
Change is constant. Think of when Henry Ford figured out how to get a car in every 'garage'. I forget the numbers, but once the first car rolled out there were 100,000 horses and 1 car. Within 5 years there were 100 horses and 100,000 cars. This is what we are on the verge of. Think of all the horse-related industries that essentially disappeared. Also, think of the need for mechanics. And the need for parts for those mechanics. Then the tools for DIYers to fix their cars. Gas stations, tire manufacturers, distribution, sales, installation, and repair. It gets quite mind-bottling quite fast. I think we are on the precipice of this new world. What we don't know is what industries AI will create that we haven't even thought of. Maybe none. Maybe an entirely different way of living. Who knows? I want to be optimistic but totally understand the concern. If half the people in Calgary don't have a job, the value of my house will plummet. My stocks will likely plummet as well. Could get dystopian. I guess we'll see.
|
Read this then:
https://optimusai.ai/why-40-of-llm-o...liable-llmops/
And yes, it will confidently make the same mistake over and over again until it's retrained. You may find it works well for your use case, and I don't dispute that. But there are a lot of use cases it doesn't work for. Like spatial awareness.
Here's a recent example from Reddit.
Quote:
I asked for a map of the US showing a circle with a radius of 900 miles with Denver at the center. Did they just move Denver to Kansas or did the mountains move to Kansas too?
|
https://www.reddit.com/r/ChatGPT/com...a_circle_with/
The world famous New Brumerd Island.
You want this diagnosing which organ your cancer is in? ChatGPT said your heart was full of cancer and it had to come out. Turns out it was your kidney, so you have about 3 more seconds to live. Sorry, it'll do better next time.
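For contrast, the geometry the model botched is cheap to get right deterministically. A quick haversine sketch (city coordinates approximate) shows which cities actually sit inside a 900-mile radius of Denver:

```python
# Great-circle distance from Denver, to sanity-check the 900-mile circle the
# model drew wrong. Coordinates are approximate.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

denver = (39.7392, -104.9903)
kansas_city = (39.0997, -94.5786)
new_york = (40.7128, -74.0060)

print(haversine_miles(*denver, *kansas_city))  # well inside 900 miles
print(haversine_miles(*denver, *new_york))     # well outside
```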