I reconnected with an old friend and we talked about AI. He's fully on board: he told me all about his use of agents, how he's basically a programming superhero now with this tool, and how he's doing things he never thought possible. Later in the call, after we had some space from it, I asked how his business was going. "Not good."
I think there's a real, chasm-like gap between how "neat" AIs can be and how useful they actually are. So many people want to believe with all their hearts that we're just a couple of years from AGI, when it seems like we're closer to LLMs getting worse than to achieving anything approaching AGI. We're already seeing expectations shift, and people reframing what AGI even means.
Understanding context isn't something you can program. There's a real squishy part of our meat brains that's wired to handle this stuff.