05-13-2025, 10:52 AM
|
#681
|
Franchise Player
|
Quote:
Originally Posted by TorqueDog
You pretty much got the answer I was going to give you from everyone else, but here's mine: If it makes my work or life easier somehow, then that's worth it to me.
No, I likely won't be selling "ChartBot" to Bloomberg or Morgan Stanley and retiring on a beach in my 40s, but it takes a manual task that I want to accomplish -- a task that would take a human a good amount of time and effort -- and completes it in under a minute. And the work it took to make it a reality took a fraction of the time it would have taken to write and debug manually, too. For most people, the value of AI is going to be taking menial or labour-intensive tasks off their plate so they can do more useful / enjoyable things.
For much bigger projects, well, in theory, that's where something like AutoGen / AutoGen Studio comes in. Blog post is here, it's friggin' sweet.
|
I wanted to create a dashboard using Python to chart some parameters from my Fitbit. Pretty simple app, but why learn the API and spend time debugging and coding? A few round trips to ChatGPT and I have what I want with zero lines of manual code.
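edslunch didn't share the generated code, but the core of a dashboard like this is just pulling a JSON time series and reshaping it for a chart. A minimal sketch below parses a sample payload shaped like Fitbit's heart-rate time-series response (the `activities-heart` key and `restingHeartRate` field follow Fitbit's public Web API as I understand it, but treat the endpoint and field names as assumptions; a real call also needs an OAuth2 token):

```python
import json

# Hypothetical sample of what Fitbit's heart-rate time-series endpoint
# (GET /1/user/-/activities/heart/date/...) returns; a live request
# would use the requests library with an OAuth2 bearer token.
sample = json.loads("""
{"activities-heart": [
  {"dateTime": "2025-05-12", "value": {"restingHeartRate": 62}},
  {"dateTime": "2025-05-13", "value": {"restingHeartRate": 64}}
]}
""")

def resting_hr_series(payload):
    """Extract (date, resting heart rate) pairs ready for charting."""
    return [(day["dateTime"], day["value"]["restingHeartRate"])
            for day in payload["activities-heart"]]

points = resting_hr_series(sample)
# points can go straight into matplotlib: plt.plot(*zip(*points))
print(points)
```

From here, charting is one `plt.plot` call, which is exactly the kind of glue code an LLM round-trip handles well.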
|
|
|
The Following 3 Users Say Thank You to edslunch For This Useful Post:
|
|
05-13-2025, 11:00 AM
|
#682
|
Franchise Player
Join Date: Aug 2005
Location: Memento Mori
|
Quote:
Originally Posted by Russic
Is there a particular hurdle that would keep that exact thing from happening?
|
No, other than that pesky thing called making money.
__________________
If you don't pass this sig to ten of your friends, you will become an Oilers fan.
|
|
|
05-13-2025, 03:35 PM
|
#683
|
Franchise Player
Join Date: Mar 2015
Location: Pickle Jar Lake
|
Quote:
Originally Posted by edslunch
I wanted to create a dashboard using Python to chart some parameters from my Fitbit. Pretty simple app, but why learn the API and spend time debugging and coding? A few round trips to ChatGPT and I have what I want with zero lines of manual code.
|
Ya, I have to dabble in code here and there for work. Usually I search the internet and modify what I find for my use. I couldn't find anything that was working for my data, so I gave GPT a shot. It took a few rounds and some edits of my own, but the "starter pack" I got was easier than Googling. It also created a more efficient process that decimated the processing time, which is kinda nice when working with GB-scale text files.
|
|
|
05-13-2025, 04:36 PM
|
#684
|
It's not easy being green!
Join Date: Oct 2001
Location: In the tubes to Vancouver Island
|
Anthropic's lawyer is being accused of making up source information in a legal filing in their own copyright case:
https://www.reuters.com/legal/litiga...source=bluesky
Quote:
A lawyer representing Universal Music Group (UMG.AS), opens new tab, Concord and ABKCO in a lawsuit over Anthropic's alleged misuse of their lyrics to train its chatbot Claude told U.S. Magistrate Judge Susan van Keulen at a hearing that an Anthropic data scientist cited a nonexistent academic article to bolster the company's argument in a dispute over evidence.
Van Keulen rejected the music companies' request to immediately question the expert but said the allegation presented "a very serious and grave issue," and that there was "a world of difference between a missed citation and a hallucination generated by AI."
|
__________________
Who is in charge of this product and why haven't they been fired yet?
|
|
|
05-13-2025, 06:43 PM
|
#685
|
#1 Goaltender
|
Wrong thread*
Last edited by Firebot; 05-13-2025 at 06:45 PM.
|
|
|
05-14-2025, 06:46 AM
|
#686
|
Franchise Player
Join Date: Feb 2013
Location: Boca Raton, FL
|
LOL, as a professor I'm experiencing a lot of schadenfreude reading this article, both at the students' anger and at the ineptitude of professors who use AI while saying how great it is. Jesus Christ. Everyone seems to think about these things at the most superficial level. It's maddening.
https://www.nytimes.com/2025/05/14/t...rofessors.html
__________________
"You know, that's kinda why I came here, to show that I don't suck that much" ~ Devin Cooley, Professional Goaltender
|
|
|
05-14-2025, 10:45 AM
|
#687
|
Dances with Wolves
Join Date: Jun 2006
Location: Section 304
|
Quote:
Originally Posted by Cali Panthers Fan
LOL, as a professor I'm experiencing a lot of schadenfreude reading this article, both at the students' anger and at the ineptitude of professors who use AI while saying how great it is. Jesus Christ. Everyone seems to think about these things at the most superficial level. It's maddening.
https://www.nytimes.com/2025/05/14/t...rofessors.html
|
I fully expect the creative spaces that rage against AI to have the same issues the professors are having. We can be mad and fearful all we want, but the second it boils a multi-hour task down to seconds, things get complicated in a different sort of way.
|
|
|
05-14-2025, 10:53 AM
|
#688
|
It's not easy being green!
Join Date: Oct 2001
Location: In the tubes to Vancouver Island
|
Quote:
Originally Posted by Russic
I fully expect the creative spaces that rage against AI to have the same issues the professors are having. We can be mad and fearful all we want, but the second it boils a multi-hour task down to seconds, things get complicated in a different sort of way.
|
I think one of my main issues with this is that there is so much learning and understanding that comes from the struggle. We're going to end up with a generation of people who are completely unable to think critically or understand WHY some things are the way they are.
Using LLMs to automate tasks with known boundaries is one thing, but using them in the context of learning is just foolish in my mind.
__________________
Who is in charge of this product and why haven't they been fired yet?
|
|
|
The Following 5 Users Say Thank You to kermitology For This Useful Post:
|
|
05-14-2025, 03:35 PM
|
#689
|
It's not easy being green!
Join Date: Oct 2001
Location: In the tubes to Vancouver Island
|
https://www.wired.com/story/grok-whi...ide-elon-musk/
Quote:
A chatbot developed by Elon Musk’s multibillion-dollar artificial intelligence startup xAI appeared to be suffering from a glitch Wednesday when it repeatedly brought up white genocide in South Africa in response to user queries about unrelated topics on X. Grok, which competes with other chatbots like OpenAI’s ChatGPT, is directly integrated into the social media platform that Musk also owns.
Numerous examples of the phenomenon could be found by searching the official Grok profile for posts containing the term “boer,” a word used to refer to people from South Africa of “Dutch, German, or Huguenot descent.” It is sometimes used by Black South Africans as a pejorative against white Afrikaners, or people associated with the apartheid regime. In response to topics ranging from streaming platform HBO Max’s name change to Medicaid cuts proposed by US lawmakers, the chatbot often seemed to initially stay on topic before veering back to white genocide in South Africa, completely unprompted.
|
The accuracy issues aside, this is going to be common in the future. We have no idea what pressure the AI provider is applying to the algorithm.
__________________
Who is in charge of this product and why haven't they been fired yet?
|
|
|
05-15-2025, 09:15 AM
|
#690
|
Dances with Wolves
Join Date: Jun 2006
Location: Section 304
|
Quote:
Originally Posted by kermitology
I think one of my main issues with this is that there is so much learning and understanding that comes from the struggle. We're going to end up with a generation of people who are completely unable to think critically or understand WHY some things are the way they are.
Using LLMs to automate tasks with known boundaries is one thing, but using them in the context of learning is just foolish in my mind.
|
I think this is a great point. I went down a rabbit hole a few years ago around learning, and the general consensus that I was missing is that tension is a critical part of the process. It's fine to say "I'm a visual/auditory/tactile learner" but what you're usually saying is how you prefer to learn. That's not always the best (and was a common trap I was falling into).
On the other side of the coin, there are studies out of Africa that are showing traditionally left-behind groups (girls, in this case) experienced rapid academic catch-up when paired with an AI tutor.
So it's hard to say "traditional learning is better" when traditional learning fails so many. For someone who doesn't fit the mould of school, this could be a Godsend for them, not to mention teachers who are having to balance class sizes that no government seems able to bend back.
So far, the only prediction that's made sense to me took a more balanced approach: certain types of people are going to be completely abandoned by a world that moves too fast for them, while other types will become almost superhuman in their ability to execute.
|
|
|
05-15-2025, 10:38 AM
|
#691
|
It's not easy being green!
Join Date: Oct 2001
Location: In the tubes to Vancouver Island
|
So, I'm going to reiterate: my issue here is that AI tutors shouldn't be trusted, because there is no guarantee that the information the model provides is correct. It's way too easy to generate incorrect information, but we're seeing more and more evidence of people implicitly trusting AI when they ask questions, search for information, and request summaries. It's a shortcut that skips the struggle required to properly consume information.
I loved a post I saw on Bluesky framing this in Excel terms: if you have formulas in a spreadsheet that are wrong 15% of the time, but you don't know which ones, why would you use it?
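To put rough numbers on the Bluesky analogy: if each of N formula cells is independently wrong 15% of the time, the chance the whole sheet is correct collapses quickly. A two-line sketch (the 15% figure is the post's hypothetical, and independence is an assumption):

```python
def p_all_correct(n_cells, p_wrong=0.15):
    """Probability that every one of n_cells independent formulas is right,
    given each is wrong with probability p_wrong."""
    return (1 - p_wrong) ** n_cells

print(round(p_all_correct(1), 3))   # 0.85
print(round(p_all_correct(10), 3))  # ~0.197
print(round(p_all_correct(20), 3))  # ~0.039
```

At twenty cells, a sheet that's "only" 15% wrong per formula is almost never entirely right, which is the post's point about not knowing which ones to check.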
In regards to learning, I think about this quote from Richard Wagamese's book of meditations: Embers.
Quote:
ME: You always repeat things three times.
OLD WOMAN: Just the important things.
ME: Why? I hear you the first time.
OLD WOMAN: No. You listen the first time. You hear the second time. And you feel the third time.
ME: I don’t get it.
OLD WOMAN: When you listen, you become aware. That’s for your head. When you hear, you awaken. That’s for your heart. When you feel, it becomes a part of you. That’s for your spirit. Three times. It’s so you learn to listen with your whole being. That’s how you learn.
|
__________________
Who is in charge of this product and why haven't they been fired yet?
Last edited by kermitology; 05-15-2025 at 10:42 AM.
|
|
|
The Following 2 Users Say Thank You to kermitology For This Useful Post:
|
|
05-15-2025, 11:44 AM
|
#692
|
#1 Goaltender
|
Quote:
Originally Posted by kermitology
So, I'm going to reiterate: my issue here is that AI tutors shouldn't be trusted, because there is no guarantee that the information the model provides is correct. It's way too easy to generate incorrect information, but we're seeing more and more evidence of people implicitly trusting AI when they ask questions, search for information, and request summaries. It's a shortcut that skips the struggle required to properly consume information.
|
Complete falsehood. AI is reliable.
Here is an example that chatgpt foretold an affair based off coffee grounds and saved this poor woman.
https://www.techspot.com/news/107925...e-grounds.html
If you can't trust an AI language model interpreting tasseography what can you trust?
|
|
|
The Following User Says Thank You to Firebot For This Useful Post:
|
|
05-15-2025, 02:47 PM
|
#693
|
Dances with Wolves
Join Date: Jun 2006
Location: Section 304
|
Quote:
Originally Posted by kermitology
So, I'm going to reiterate: my issue here is that AI tutors shouldn't be trusted, because there is no guarantee that the information the model provides is correct. It's way too easy to generate incorrect information, but we're seeing more and more evidence of people implicitly trusting AI when they ask questions, search for information, and request summaries. It's a shortcut that skips the struggle required to properly consume information.
I loved a post I saw on Bluesky framing this in Excel terms: if you have formulas in a spreadsheet that are wrong 15% of the time, but you don't know which ones, why would you use it?
|
I suppose my issue with the argument is threefold:
1. It doesn't hold up over time
Maybe this was true of models a year ago (even then... seems high), but the models available to us today cut this number way down. Aided by a competent person (like a teacher alongside the helper AI), issues drop well below problematic levels.
2. Humans are likely far more error prone (and they too rarely accept or recognize when they're wrong)
I run absolutely every single thing I do for a client through an LLM, and the number of things it catches is wild. Now, you could say "it sounds like you're bad at what you do," and maybe! But also, most people suck dreadfully at what they do.
The stuff it's caught on my behalf is nothing short of amazing. Contract issues that were present for almost a decade, misinterpreted client instruction, autocorrected words that were undetectable by spellcheckers...
3. People are already using it to obvious benefit
My friends who are using it properly (very important) are absolutely lapping their co-workers. My doctor friend is correctly adjusting diagnoses of her patients based on history she missed. My kids' test scores have seen an uptick since they started using it to create complementary study guides.
We can debate the theoretical problems (of which, several keep me up at night), but we should probably also acknowledge that those who are using these tools well are beginning to pull away.
|
|
|
05-16-2025, 08:30 AM
|
#695
|
Dances with Wolves
Join Date: Jun 2006
Location: Section 304
|
Quote:
Originally Posted by Qwerty
I was at the doctor yesterday because when I was in the states I came in contact with a bunch of Poison Oak. Needless to say my legs are pretty infected so I wanted to see if there was anything they could give me to help.
What does this have to do with AI? Just wait.
The first doctor comes in and looks at it and has no clue what to do, I told him I used dish soap as the internet told me to stop it from spreading. I got the WTF look...
He goes and gets a second doctor, who comes in and is like "What is Poison Oak?" Same story to him, and he tells me "Let's see what AI says." He pulls out his phone and types in Poison Oak Treatment, literally showing me on his phone, and reads it out loud to the 3 of us. He laughed and was like, apparently dish soap does work... Then prescribed some steroid cream that the internet recommended because he had no clue what this was.
The fact he felt comfortable admitting he had no clue how to treat me and used AI (aka the internet) right in front of me was hilarious. Needless to say, my legs are doing much better today.
Now, what if I could have just taken a picture and entered some diagnosis details into an AI doctor, and it could have spit out the same information for an over-the-counter medication and saved me a trip to the doctor. Maybe then it gets delivered by drone to my house in 5 minutes... To think what kind of world our kids will live in.
|
This is kinda what I mean when I say we overhype humans. Anybody over the age of 40 that's visited a doctor for any type of ailment that's even 2% strange can probably get on board with me here... they aren't that great.
Drone aside, you could absolutely do all of the above today. The last 3 visits to our doctor (1 kid, 1 me, and 1 for my wife), I've used ChatGPT going in just for fun, and in 2 instances the doctor said exactly what the LLM did, and in the final one, they were far worse and I thought my wife was going to flip a cop car.
I'm certainly not saying don't go to the doctor, but I will predict that AI integrated into the health system is going to be much better than what we're rolling with today.
|
|
|
05-17-2025, 07:49 AM
|
#696
|
#1 Goaltender
|
Quote:
Originally Posted by kermitology
I think one of my main issues with this is that there is so much learning and understanding that comes from the struggle. We're going to end up with a generation of people who are completely unable to think critically or understand WHY some things are the way they are.
Using LLMs to automate tasks with known boundaries is one thing, but using them in the context of learning is just foolish in my mind.
|
Well, that's going to be the new end game. Information will go through huge inflation, and "knowing" things will mean very little in a few years. Understanding will be the new king.
Nothing wrong with a Google on steroids, people just have to adapt.
|
|
|
05-22-2025, 10:17 AM
|
#697
|
Franchise Player
|
People are weird. AI will be great for menial things, amplifying work and bringing up the floor. It'll leave people free to focus on higher-level stuff. However, I think it'll hollow out the lower-tier stuff, so training on lower-tier tasks to reach the higher-level tasks is going to be a challenge... but I'm sure we'll figure it out. I think the biggest change will be teaching people to ask AI "Why?" and understand each step vs being happy with just the answer. I also think this phrase will be more important as time goes on: "A question well posed is a question half answered."
Robo AI for simple stuff like basic doctor's visits and basic prescriptions, hell, even as a basic double-check in general, will be good. Robo nannies will also be great. Robo chefs as well. I view it no differently than tech to enhance the lives of older people: everything from lights on/off up to more enhanced and accurate request fulfilment, etc. Think something like SwiftKey, which learns your swipe style to customize a keyboard for you... but for daily routines etc. That's the basics of a future where everything we deal with is AI-enhanced. Some will still hate it, most will like it... the majority will be like now, addicted to consuming media, but given higher-quality media.
I don't think people realize that, technologically speaking, we aren't being given the highest tier of product to consume. We're essentially still being handed pixelated games for phones capable of playing movies.
This morning I was thinking... what if there was a mashup where the Joker and the thief intro was used as the intro to Muse's Uprising... One day it'll be easily accessible if I want it to be.
A future where AI turns us into the WALL-E or Matrix timeline could happen, but only if we let it, and honestly speaking, I'm sure it'd take a while to reach that point. There's still too much resource scarcity for what AI would need to take over in terms of power, physical barriers, etc.
What I like the most about AI is that it more appropriately identifies the question. It's good at surfacing all the necessary details in situations people keep insisting are simple, but aren't. Honestly speaking, many people's definition of simple vs complex is purely based on their willingness to put in effort, not their actual situation. Lately, I've been settling certain arguments with the wife and kids by telling them to stop asking me, ask AI, and then ask AI why it is correct.
"What is the answer to this elementary school math question?"
"Cake is X number, cookies is Y number."
"Too fast. How do you know you're right?"
"Sigh... I don't have time or energy for this. Just ask ChatGPT to explain step by step how to solve..."
"Pfft. Computers can't do that."
"Will you just try before making those claims?"
|
|
|
05-22-2025, 10:38 AM
|
#698
|
Franchise Player
Join Date: Mar 2015
Location: Pickle Jar Lake
|
Unfortunately, currently the "AI" will confidently explain it, but be completely wrong. So your kid might get convinced of something and end up looking a bit silly. It took me about 5 rounds before I realized it didn't understand that Victoria, BC is west of Calgary, and was doing pretzel-bending logic to try to make its solution work with that fact wrong. Once it drew an ASCII diagram of what it was doing, I finally understood why it couldn't get it right. And ya, I was using the airport code, so it wasn't the "wrong" Victoria.
I think confidently sending people off to ChatGPT is not always going to go well.
|
|
|
05-22-2025, 11:00 AM
|
#699
|
Franchise Player
|
Quote:
Originally Posted by Fuzz
Unfortunately, currently the "AI" will confidently explain it, but be completely wrong. So your kid might get convinced of something and end up looking a bit silly. It took me about 5 rounds before I realized it didn't understand that Victoria, BC is west of Calgary, and was doing pretzel-bending logic to try to make its solution work with that fact wrong. Once it drew an ASCII diagram of what it was doing, I finally understood why it couldn't get it right. And ya, I was using the airport code, so it wasn't the "wrong" Victoria.
I think confidently sending people off to ChatGPT is not always going to go well.
|
I don't confidently send people off to ChatGPT. I just tell them to go there first to get an explanation/waste time, and let me know if the explanation seems weird to them, whether they want to keep going or suddenly decide to drop it. I never say these things are infallible. I deal with work where things are always prone to error because of GIGO: I could get the perfect answer to a question, but the original question/details were wrong, so the answer is correct for a different situation and completely wrong for the real one.
Something like algebra, I'm not going to explain the 6-8 steps on my day off. Especially if it's a dial tone in the asker's head (ie: "You answered so fast. How do you know it's right?"). If the person has some semblance of context and is filling in the gaps in their own head, I can work with it. But if their brain is not processing at all and they just want to argue... then they can argue with something else and not me (at least at first). This goes especially for when someone asks me to be the tie-breaker in some other argument they're having with someone else. Don't drag me into this ####, go to ChatGPT and fight in that arena. That's what I mean.
It's honestly been a great tool for dealing with people who want to do 2-vs-1 arguments, especially parents/in-laws. "Don't ask me. Ask ChatGPT and let's look at the answer it gives together." And: "If that was obviously the incorrect answer (because both of you agreed the answer sucked), did you try making sure the question being asked is more accurate? Treat ChatGPT like it's dumb." I'm less likely to get sucked into a specific side of the most stupid conflicts.
|
|
|
05-23-2025, 08:16 AM
|
#700
|
Franchise Player
Join Date: May 2004
Location: Helsinki, Finland
|
AI will likely pretty quickly make a lot of people completely dependent on products that are designed primarily to keep them as customers.
Even more broadly, AI is like most automation. It's great for doing things you already know how to do, but when you start using it to do things routinely, you will quite quickly forget how to actually do the thing yourself, or you will never learn in the first place.
Much like autotune has absolutely decimated the number of people who can hit a note unassisted, LLMs will very rapidly destroy the number of people who can learn things from primary sources, draw conclusions based on real-world observation, etc. It will decimate the number of people who can easily do lateral thinking and take alternative views on things, because they will grow up with machines that always provide a quick, easy, commonly believed/accepted answer to any question.
Basically, it will destroy the number of people able to think for themselves, and it will very likely decimate the number of people who have the patience to try and learn something genuinely new because that's slow and hard and there just isn't anyone you can ask about it.
Most likely it will further damage the idea that it's extremely important to make sure something is exactly right and not just roughly right.
|
|
|