Couldn't agree more. Your analysis really puts OpenAI's valuation into perspective. But how do you see the massive infrastructure costs shaping the market for smaller model developers?
Not sure I get the question. If it's about training small LLMs, or SLMs as they're called, there is virtually no usage so far, because no one wants to use the old thing when everyone wants the new shiny thing.
I think it will always be secondary to performance. Some enterprises are already training their own small models, but when you talk to their employees, they rarely use those models.
In short, the infra spending is not going to benefit in-house model development: there is no service provided for this other than CoreWeave, and to use it you need to hire a lot of researchers, who would rather go work at OpenAI.
Agreed. I think the best way to look at this is through end-user usage.
I am now doing some more data research to compare this to the fiber buildout, where 90% of the infrastructure was dark (unused). The current claim from Big Tech is that GPUs are seeing so much use that they are melting.
However, there are some key differences: 1. GPUs have a short depreciation period, whereas fiber lasts almost forever.
2. Usage should not be measured by whether companies building AI products are consuming GPUs, but by whether end users are actually using those products. These might turn out to be the same thing, but it's worth looking into.