• schizo@forum.uncomfortable.business · 4 months ago

    Yes, those are all lovely fancy numbers, but the only ones I really give a shit about are the one that comes after the $ and the one that comes before the W on the power supply requirements.

  • neidu2@feddit.nl · 4 months ago

    Yeah, about clock speeds. Remember why they were front and center 20 years ago when marketing CPUs? Intel started marketing CPUs by clock speed, highlighting it as a selling point over competitors that usually ran at slightly lower clocks.

    But Intel painted themselves into a corner: clock speed alone doesn't determine performance - instruction sets and floating point ops per second do. In the mid-2000s they had to slowly phase out clock speed marketing, as clock speeds had reached levels where further increases would be detrimental to performance, so they had to change their marketing and branding strategy.

    As soon as clock speed marketing had been phased out, Intel CPUs actually ran at lower speeds than the previous generation, while still outperforming them.

    I’m curious to see whether nvidia is about to do the same thing.

    • deegeese@sopuli.xyz · 4 months ago

      GPU code is more amenable to high clock speeds because it doesn’t have the branch prediction and data prefetch problems of general purpose CPU code.

      Intel stopped chasing clock speed because it required them to make their pipelines extremely long and extremely vulnerable to a cache miss.
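
A back-of-the-envelope sketch of that cost (the figures below are illustrative assumptions, not measurements): each mispredicted branch flushes the in-flight work, so the penalty per flush scales with pipeline depth, which is why very long pipelines hurt.

```python
# Rough model of cycles lost to branch mispredicts on a deep pipeline.
# All numbers are assumptions for illustration, not measured values.
pipeline_depth = 31      # stages; late Pentium 4 designs were around this deep
mispredict_rate = 0.05   # assume 5% of branches are mispredicted
branch_fraction = 0.2    # assume ~1 in 5 instructions is a branch

# Average extra cycles lost per instruction to pipeline flushes:
penalty_cpi = branch_fraction * mispredict_rate * pipeline_depth
print(round(penalty_cpi, 2))  # 0.31 extra cycles per instruction
```

With a 10-stage pipeline the same workload would lose only ~0.1 cycles per instruction, which is the tradeoff the comment describes.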

      • Dudewitbow@lemmy.zip · 4 months ago

        also, to offer a rudimentary comparison:

        a CPU is a few very complicated cores; a GPU is thousands of dumb cores.

        it's easier to make something that runs a small, simple instruction stream (GPU) faster than something juggling a ton of complex instructions (CPU), due to, like you mention, branch prediction.

        modern CPU performance gains focus more on parallelism and, in the case of efficiency cores, on scheduling to optimize for performance.

        GPU-wise, it's really as simple as this: GPUs are typically memory bottlenecked. memory bandwidth (memory speed x bus width, with a few caveats where cache hits lower the requirement) is the major indicator of GPU performance. bus width is fixed in a chip's hardware design, so the simplest way to increase general performance is raising clocks.
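
That bandwidth formula is easy to sketch out. The card below is hypothetical; the numbers are chosen only to show the arithmetic, not to describe any real product.

```python
def memory_bandwidth_gbs(transfer_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s:
    effective transfer rate (GT/s) x bus width (bits) / 8 bits per byte."""
    return transfer_rate_gtps * bus_width_bits / 8

# A hypothetical card with 21 GT/s memory on a 384-bit bus:
print(memory_bandwidth_gbs(21, 384))  # 1008.0 GB/s
```

Note the knobs: bus width is baked into the die and board, so once a chip ships, the transfer rate (clocks) is the lever left to pull.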

  • ramble81@lemm.ee · 4 months ago

    Cool cool…. What about the price? That’s all I care about at this point.