
Even if you eliminate all of OpenAI's other costs besides inference, they're still in the red. And they can't just stop training new models. That's like saying Honda could just quit designing new cars. They technically could, but it would destroy their business.

They have one method of monetization right now, and there is no clear evidence that their costs are going to decrease anytime soon. Despite claims to the contrary, no one has actually shown a pathway to those costs magically cutting in half over the next few years.

The entire industry is being propped up by insane over investment and an obsession with growth at all costs. Investments will dry up sooner or later, and you can't grow forever.



Inference keeps getting cheaper, so "it isn't cheap enough yet" isn't an issue. Even with zero efficiency innovations from here, cost per instruction is the most deflationary commodity of all time.

So how was that ever going to be a problem?

At the beginning of a new tech cycle, the optimal choice is to run in the red on marginal costs, since those will naturally drop on their own. It would be a sign of gross incompetence if they were fine-tuning those costs already.

Training spend is the giant expense. Either training costs are unsustainable and training spend will hit a pause, or they are sustainable and training spend will continue.

So, which is it?

Critical point: the majority of their costs are not required to serve the highest level of capability they have achieved at any given time.

That is unusual, in the sense that it is an exceptionally healthy cost structure. Note that not even open source offers a cost advantage, for training or inference.



