Man that's funny. An AI company that basically stole millions of copyrighted works to train its model is now complaining that a competitor breached their terms of service in creating their own model.
And if creating a vastly more efficient model is just a matter of distilling OpenAI's outputs, then why didn't they ever bother to do that instead of wasting resources?
Using ChatGPT to train your model isn't a new concept either. There was a model out of Stanford a couple years ago (I can't find the name now) that did all the fine-tuning to be a chatbot by interacting with ChatGPT. They took it offline after legal threats. Obviously, those legal threats are less intimidating for Chinese companies.
It's called IP theft. It's easier to steal than create.
China has far more resources than any tech company in the States if need be.....
Anyone believing this was created for 6 million needs to give their head a shake.
They have all but admitted they have 50,000 illegal Nvidia chips. These chips are $30-40K each, AND China would be buying on the black market.
The chips alone are $2.5 billion if bought at $50K each (black market prices?).
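A quick back-of-the-envelope check on that estimate (the 50,000-chip count and $50K unit price are the claims from the post above, not verified figures):

```python
# Back-of-the-envelope cost of the alleged chip stockpile.
# Both numbers below are the post's claims, not confirmed data.
chips = 50_000            # alleged Nvidia GPUs
price_per_chip = 50_000   # assumed black-market price in USD

total = chips * price_per_chip
print(f"${total / 1e9:.1f} billion")  # → $2.5 billion
```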
I mean nobody said stealing wasn't an available strategy.
So if they lied about the $6 million in an attempt to wage financial warfare on the west by tanking tech stocks and shorting Nvidia... this would have to be a decent time to buy Nvidia stock.
The end result might still be a giant threat to the number of chips needed.
Just how they got there is mostly BS / leaving out important parts, like the multiple billions of dollars of Nvidia chips used.
DeepSeek (CHINESE CCP) lying? Say it isn't so! The fact that anyone here thought otherwise is the more laughable part.
I'm not sure what you mean by lying, they open sourced everything.
Now, I sure wouldn't be logging into their server and using it that way. Run it locally or on a web service you might trust, depending what you are doing. If the model was feeding data to the CCP or anyone else, it would have come out by now, because, you know, anyone can look, and I have zero doubt many people have.
Just curious, do you have any clue how anything works before you post?
Sorry, what's the new scoop about DeepSeek? Is it the openly accessible database that got leaked? OpenAI had a similar issue a year ago.
There's a reason why AI LLM tools should be completely off-limits at reputable companies, or restricted to protected versions of the tools (and even those can be a vulnerability).
DeepSeek and ChatGPT should absolutely be blocked to begin with. Assume everything and anything written or uploaded can be accessed and used.
Over 70 nations, including China, the EU and India, signed on to an agreement in Paris centered on responsible AI use and development, with the US and the UK refusing to sign, which isn't a surprise.
Quote:
PARIS — U.S. Vice President JD Vance on Tuesday warned global leaders and tech industry executives that “excessive regulation” could cripple the rapidly growing artificial intelligence industry in a rebuke to European efforts to curb AI’s risks.
The speech underscored a widening, three-way rift over the future of the technology, one that critics warn could either cement human progress for generations or set the stage for its downfall.
Quote:
Chinese Vice Premier Zhang Guoqing, speaking for President Xi Jinping, said Beijing wants to help set global AI rules. At the same time, Chinese officials slammed Western limits on AI access, and China’s DeepSeek chatbot has already triggered security concerns in the U.S. China argues open-source AI will benefit everyone, but critics see it as a way to spread Beijing’s influence.
With China and the U.S. in an AI arms race, Washington is also clashing with Europe.
Vance, a vocal critic of European tech rules, has floated the idea of the U.S. rethinking NATO commitments if Europe cracks down on Elon Musk’s social media platform, X. His Paris visit also included talks on Ukraine, AI’s growing role in global power shifts, and U.S.-China tensions.
No, but only because we are already CCP owned, so what's the point. I AM working on how to use proprietary archival data to augment model training so they can use it to help with engineering tasks and business process optimization.
It's actually sad and hilarious how often we have to shut down data breach attempts coming from internal departments or from Chinese IP addresses running generic attacks.
The CCP and the Chinese economy in general seem like a million little mafia arrangements hiding in a trenchcoat, posing as a unified front. Everyone is in on a scam and a take. The takes go all the way up.
Every once in a while you see in-fighting between groups, and then you see wholesale leadership changes due to "corruption reviews", but all you're really doing is swapping out one scam bandit crew for another.
Entire offices full of people doing nothing, just so they can be employed. "Jobs" are more of a motivator than profitable operations. Team meetings are so weird, there is zero expectation for consistent work. Many times there are "no updates". When something needs to be done, though, it usually gets done pretty well until you run into someone who has status in the party and start having to deal with abuse, lies, and games.
Paradoxically, there are forced retirements at specific ages, with men being granted longer working windows than women, and managerial class being granted longer working windows than non-managers.
If you've been educated and trained in a western, profit motivated mindset, it is nearly impossible to reconcile and remain motivated. Working for Chinese SOE companies is the blurst. I must really suck at life because I keep ending up at places that sell to such entities, and it's been the same every time. No wonder this country is f'ed.
Built and deployed my first app yesterday. It's not useful to anybody but me and I'm definitely not a programmer, but there is certainly something happening and it's happening faster than I anticipated.
Just a few months ago you could build an app, but for a total rookie like myself setting up environments was a pain, debugging was unpleasant, and deploying was (to me) a non-starter.
All of a sudden we aren't even halfway through February and the Replit agent spat me back a usable thing and deployed it within 30 minutes. I think I may have run into 3 issues, to which the solution was to kindly ask it to fix them.
Again, this is not me saying programmers are doomed, I'm amazing, or anything of the sort. But what I am saying is things are accelerating at a rate I didn't expect, and I'm insufferably optimistic. The number of "ya, but" issues is dropping like flies. The agentic behaviour + reasoning models is a wild 1-2 punch.
The point I have trouble getting across to people is how terrible I am at this stuff generally. I can't code to save my life, I don't even have a ton of agency... if I can do this, it means the barrier is probably more curiosity than anything else.
The terrifying part is just how easy it makes these things, which means they will be abused and unleashed on society by criminals and foreign agents at a mass scale, and I really don't think the human brain is capable of handling that kind of potential deception.
I don't think the human brain is capable of handling a lot of the change that arrived 10,000 years ago with agriculture. Then the industrial revolution. Then social media. We've reached the blade of the development hockey stick and we're still grappling with not living on the Savannah.
I used ChatGPT the other day to analyze 1,000+ text comments for sentiment and trends.
Spat out a summary of the key themes in 5 seconds, and you could continue to interact with the data, e.g. "give me examples of comments that referenced (insert trend)". Amazing.
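For anyone wanting to try the same thing programmatically, the workflow boils down to stuffing the comments into one prompt. A minimal sketch is below; the helper name, prompt wording, and model string are my assumptions, not the poster's actual setup, and the real API call needs a key so it's left commented out:

```python
# Batch free-text comments into a single sentiment-analysis prompt.
# Helper name and prompt wording are illustrative only.
def build_sentiment_prompt(comments):
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return (
        "Summarize the key themes and overall sentiment of these comments, "
        "then give example comments for each theme:\n" + numbered
    )

prompt = build_sentiment_prompt(["Love the new UI", "App crashes on login"])

# Sending it with the official openai client would look roughly like this
# (requires an API key, so it's commented out here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)
```

From there you can keep appending follow-up messages ("give me examples of comments that referenced X") to the same conversation.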
Also I had to break company policy (with permission) to do so because we have a blanket ban on using AI.
I've been using Bolt and it sucks. My prompt is now about 50 lines long.
These things suck at novel app creation. I don't need another goddamned "habit tracking" app.
Does Bolt use an agent? That's been a lifesaver for me on Replit. Before when I'd update anything it would break 2 seemingly unrelated things, but now with the agent it does all that in the background without the constant back and forth. Then again I've only made the one thing and it was relatively basic.
There have been a few significant model updates worth mentioning that get us ever so closer to AGI. xAI and Anthropic have definitely been cooking.
Grok 3 is surprisingly good. You can try Grok 3 Think and Deep Research for free. It's currently up there with OpenAI's o1 and DeepSeek R1, and in some ways even better.
I think the biggest new surprise has to be Sonnet 3.7 from Anthropic, which was largely silent in recent months but came out with an absolute coding beast. Sonnet 3.5 was a great coder and held up for nearly a year even as reasoning models became the new big step. Claude models (Sonnet, Haiku, etc.) are also now available in Canada; for a long time they were only available through 3rd parties such as Perplexity, which is how I used them.
But Sonnet 3.7 itself isn't the only game changer. I've gotten release preview access for Claude Code, an agent-based coding tool that you can run directly in your project to update it live. One prompt and you get a fully working app with error handling, tested through the agent, and all files built. Installing Claude Code was an absolute pain as it's not currently native to Windows (Linux/Mac only), but I got it working on my PC and laptop via WSL. There's currently a waitlist.
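For anyone curious, the setup on WSL looked roughly like this for me; the package name matches what Anthropic documents, but treat the exact steps as a sketch rather than official instructions:

```shell
# Inside a WSL (Ubuntu) shell; assumes Node.js 18+ is already installed.
npm install -g @anthropic-ai/claude-code   # install the Claude Code CLI
cd ~/my-project                            # move into your project directory
claude                                     # launch the agent in the project
```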
You can use it to build full projects or review/update existing ones built in the past; there are a number of videos popping up on it. Here's a pretty good video example (this is extremely new and more are coming).
I haven't had a chance to play around with it much yet, but while in bed last night I had it build a browser-based deluxe snake game to test it (and see how much it would cost), and a polished browser-based video editing app that extracts specific frames from videos, which is of significant use to me (I will add it to one of my websites later for public use, because apparently good user-friendly tools for it don't exist). Creating apps via AI coding isn't new, and agents aren't a new concept (Replit, which Russic mentioned, uses agentic coding), but this is the most accurate coding agent I've used; it simply gets it right just about every time. Claude Code can also be linked to GitHub and Vertex AI (Google), with more to come soon.
Biggest hurdle by far is the price. Sonnet 3.7's API is vastly more expensive than everything else outside of deep research models. Building 2 browser-based apps cost me about $4; running more complex apps could cost hundreds of dollars. But the accuracy, speed, and no-touch approach are absolutely worth it. I will attempt bigger projects later.
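To put that price in perspective, API costs are just tokens times rates. The sketch below uses roughly Anthropic's published Sonnet rates (~$3 per million input tokens, ~$15 per million output tokens); the token counts are made-up examples, not my actual usage:

```python
# Rough API cost estimate. Rates approximate Anthropic's published
# Sonnet pricing; the token counts below are illustrative only.
def estimate_cost(input_tokens, output_tokens,
                  in_rate=3.00, out_rate=15.00):
    """Cost in USD, given per-million-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A small app build: say 400K input tokens and 180K output tokens.
print(f"${estimate_cost(400_000, 180_000):.2f}")  # → $3.90
```

A few builds like that lands right around the ~$4 figure above, and it's easy to see how a large, iterated project climbs into the hundreds.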
LLMs are never going to evolve into AGI. They may get better at approximating it, but there is a huge difference between imitation and reality. If/when we do get AGI, it's not going to come from refining LLMs.
Now, you can argue whether it matters or not if the imitation is good enough. But it won't be AGI.