Quote:
Originally Posted by Russic
There is quite a bit of chatter online about why anything should be built or created since it won't become a big success. My philosophy on creating anything (apps, music, writing, etc) is that doing it for success is a trap. As cliche as it may sound, the only thing that matters is the journey of creation, and to me this is pretty much the perfect outcome.
He built a cool thing that he otherwise couldn't have, it took him about as much time as it took most of my former clients to write 8 pages of web copy, and he had fun doing it. That's enormously fulfilling, and I think it ties into somewhere deep within humans to create. I remain unconvinced there's much of a point to all of this beyond that.
As for the doom and gloom, I oscillate between incredibly excited and totally petrified. The general tone in the community does appear to be shifting towards a very fast takeoff. There are some people at the helm of the big models that have been fairly conservative and are shifting towards an attitude of concern over the speed this is all happening. Sora 2 was built in a couple weeks, Claude Cowork the same, the people making the models aren't even coding the upgrades (which are coming at an alarming rate). To me the argument that there's nothing here is getting weaker.
Perhaps it's all marketing fluff and these people are just riling up interest? That's my hope because I don't trust government to move quickly enough to adapt, but my concern is almost everything we hear from these people (love it or hate it) is coming true.
Robots are one-shotting tasks that they have no business being able to complete without years of training. Waymos are rolling out to multiple cities at once. Agents have gone from working for a couple of minutes 18 months ago to multiple hours per day.
Indes should not have been able to build that site in 6 months. He's not a developer! It just blows my mind that at every turn there's so much evidence of where it seems we're headed yet so many people won't acknowledge it.
Well, my question was more out of curiosity, along the lines of: what motivations allowed him to keep pushing forward? It's easy to say in hindsight, which is 20/20, but foresight is totally different. Basically, I'm asking on behalf of my children, to get insight into the mindset of inventing/innovating via AI. Sitting down with a 1000-piece puzzle with a specific goal is different than sitting down to make something with a potentially non-predetermined goal or outcome, or even a shifting goal/outcome. The other aspect is the worry that whatever you plug in could end up being "owned" by someone else if you're not careful.
I have a specific earworm remix of a song that I want to see manifested. I'm almost at the point where I think maybe I don't care that I gift that idea away, but I'm also not sure.
Personally, I'm so used to working on projects with a tangible result and a rough expected time frame that I can't wrap my head around putting months into an idea without knowing its true implications. I don't have issues with focusing on the journey, but I'm struggling with how to convince others why they should go on the journey in the first place, and that the struggles and setbacks are worth it. I really want to know this so that I can figure out how to coach my kids into that mindset... and they have less propensity for sticking with inventing/innovating than I do.
IMO, surviving in an AI world means innovating and inventing: pushing the minimum floor standard forward while also raising the sky's limit beyond our current understanding of it. I am not scared of AI at all, but that doesn't mean I believe its daily application will require basically zero proactive effort in understanding what each person wants to do with it. I tell myself that infinity is just the limit of human understanding. Someone looking back at history will find that what was considered infinity in those times might be considered multitudes of finite in ours. This is easily noted in how many digits of pi we could calculate on a computer 30 years ago vs. today.
Maybe it's a philosophical question about human interaction with AI that can be mirrored in what AI might run into later. Why should someone invent if no one cares about what they make? And if humans don't care about making something that AI can make, why should AI make it or spend resources doing it?
I keep hearing we can do so many things with AI and that AI will help us... but my other question is: what happens if we don't bother with AI? Does AI deal with the modern day-to-day so that future generations can go back to living in worlds like those of the 1980s-90s, where kids would go out without parental worry/judgement to play, explore, imagine, interact, and go home once it got dark? Or is VR > AR/XR a foregone conclusion?