• 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • Yeah. I don’t believe market value is a great indicator in this case. In general, I would say that capital markets are rational at a macro level, but not micro. This is all speculation/gambling.

    I have to concede that point to some degree, since I guess I hold similar views on Tesla’s value vs the rest of the automotive industry. But I still think the basic hierarchy holds true, with Nvidia significantly ahead of the pack.

    My guess is that AMD and Intel are at most 1 year behind Nvidia when it comes to tech stack. “China”, maybe 2 years, probably less.

    Imo you are too optimistic with those estimates, particularly for Intel and China, although I am not an expert in the field.

    As I see it, AMD has a quite decent product on the hardware side with their Instinct cards in the server market, but they wish they had something even close to CUDA and its mindshare, which would take years to replicate. Intel wishes it were only a year behind Nvidia. And I’d like to comment on China, but tbh I have little to no knowledge of the state of their GPU development. If they are “2 years, probably less” behind as you say, then they should have something like the RTX 4090, which was released at the end of 2022. But do they have anything that even rivals the 2000 or 3000 series cards?

    However, if you can make chips with 80% of the performance at 10% of the price, it’s a win. People can keep telling themselves that big tech will always buy the latest and greatest whatever the cost. That does not make it true.

    But the issue is that they all make their chips at the same manufacturer, TSMC, even Intel in the case of their GPUs. So they can’t really differentiate much on manufacturing costs and are also competing for the same limited supply. No one can offer 80% of the performance at 10% of the price, or even close to it. Additionally, everything around the GPU (datacenters, rack space, power usage during operation etc.) also costs money, so the GPU is only part of the overall package cost, and you also want to optimize for your limited space. As I understand it, datacenter construction and power delivery are actually another limiting factor for the hyperscalers right now.
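    The “GPU is only part of the package cost” point can be made concrete with a back-of-the-envelope calculation. All numbers below are made up for illustration, not real prices: the idea is just that a big discount on the chip alone shrinks once you add fixed per-card overhead.

    ```python
    # Hypothetical sketch: how per-card overhead (rack space, power,
    # datacenter build-out) dilutes a discount on the GPU itself.
    # All prices and performance figures are invented for illustration.

    def cost_per_perf(gpu_price, overhead_per_gpu, performance):
        """Total package cost divided by delivered performance."""
        return (gpu_price + overhead_per_gpu) / performance

    OVERHEAD = 20_000  # assumed fixed cost per card, identical for both

    # Leader: price 30_000, performance 100 (arbitrary units).
    # Challenger: 80% of the performance at 10% of the GPU price.
    leader = cost_per_perf(30_000, OVERHEAD, 100)    # 500.0 per perf unit
    challenger = cost_per_perf(3_000, OVERHEAD, 80)  # 287.5 per perf unit

    # Chip-only, the challenger is 8x cheaper per unit of performance;
    # with overhead included, the gap shrinks to well under 2x. And if
    # rack slots themselves are the binding constraint, performance per
    # slot (100 vs 80) favors the leader regardless of chip price.
    ```

    The exact numbers don’t matter; the shape of the argument does: the heavier the fixed overhead per card, the less a cheap-but-slower chip helps.
    
    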

    Google, Meta and Amazon already make their own chips. That’s probably true for DeepSeek as well.

    Google yes, with their TPUs, but the others all use Nvidia or AMD chips to train. Amazon has their Graviton CPUs, which are quite competitive, but I don’t think they have anything on the GPU side. DeepSeek is way too small and new for custom chips; they evolved out of a hedge fund and just use Nvidia GPUs like more or less everyone else.



  • I have to disagree with that, because this solution isn’t free either.

    Asking them to regulate their use requires them to build excess capacity purely for those peaks (so additional machinery), to hold more inventory in stock, and, depending on how labor intensive the process is, to put people on a less reliable schedule. Some processes also simply can’t be regulated up/down fast enough (or at all).

    This problem is simply a function of which is cheaper: a) building excess capacity, or b) building enough capacity to meet demand with steady production and adding battery storage as needed.

    Compared to most manufacturing lines, battery storage is relatively simple, requires little to no human labor, and is still making massive gains in price/performance. So my bet is that it’ll be the cheaper solution.
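    The a-vs-b comparison above boils down to a simple cost function. A minimal sketch, with all numbers made up for illustration:

    ```python
    # Hypothetical comparison of the two options from the comment.
    # All costs are invented, in arbitrary currency units.

    def option_a(peak_demand, cost_per_unit_capacity):
        """a) Build enough machinery to cover the demand peak directly."""
        return peak_demand * cost_per_unit_capacity

    def option_b(avg_demand, cost_per_unit_capacity,
                 peak_demand, cost_per_unit_storage):
        """b) Build for average demand, buffer the peak with batteries."""
        capacity_cost = avg_demand * cost_per_unit_capacity
        storage_cost = (peak_demand - avg_demand) * cost_per_unit_storage
        return capacity_cost + storage_cost

    # Example: peak demand 100, average 60; machinery 1000 per unit of
    # capacity, storage 300 per unit buffered.
    a = option_a(100, 1000)            # 100_000
    b = option_b(60, 1000, 100, 300)   # 60_000 + 12_000 = 72_000

    # Option b wins whenever storage per unit is cheaper than excess
    # machinery per unit -- and batteries keep getting cheaper.
    ```

    This ignores real-world details like round-trip efficiency and storage duration, but it captures why falling battery prices tilt the answer toward b.
    
    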

    That said, it is of course not a binary thing, and there may be some instances where we can optimize energy demand and supply, but I think in industry those will happen naturally through market forces. However, this won’t be enough to smooth out the gap in the timing of supply and demand.


  • It’s a reaction to thinking China has better AI

    I don’t think this is the primary reason behind Nvidia’s drop. As long as they have a massive technological lead, it doesn’t matter much to them who has the best model, as long as those companies use Nvidia GPUs to train it.

    The real change is that the compute resources (which are Nvidia’s product) needed to create a great model suddenly fell off a cliff, whereas until now the name of the game was that more is better and scale is everything.

    China vs the West (or upstarts vs big players) matters to those investing in creating those models. For example Meta, who presumably spend a ton of money on highly paid engineers and data centers, and somehow got upstaged by someone with a fraction of their resources.