Quote:
Originally Posted by opendoor
I don't know, everything I've seen from it still looks pretty hokey. Even the best examples are still pretty 2D looking, and they rely on lots of zooms and pans to dress it up and hide the flaws.
And I don't think it's even clear that the current path of using LLM-based generative AI to produce video will ever be successful in commercial-level production, except maybe for some stock footage where the details don't matter too much.
If they can figure out a way to take properly done 3D models and then animate them based on text input, then we'd definitely have something that could be used for high-level production. But my understanding is that's estimated to be pretty difficult to achieve at this point.
This is a very common line of thinking, but it feels like many aren't extrapolating where this is going or how fast it's getting there. What gets a shoulder-shrug today would have been written off as pure witchcraft a year and a half ago. Will it level out? Maybe, but I'm not sure we're seeing evidence of that yet (I could be wrong).
I don't think what you're saying about commercial applications sounds off at all, but I do think that the history of tech is positively swimming in examples of times we said "I don't think <thing> will ever get powerful enough to..." followed shortly by it happening. There's always some unexpected chaos that gets added to the mix that people don't see coming.
I've played around with Luma, and while it's not going to fool everybody, it's spooky good, especially for a company that seemingly just sorta showed up.