Overall, when tested on 40 prompts, DeepSeek's energy efficiency per token was similar to the Meta model's, but DeepSeek tended to generate much longer responses and was therefore found to use 87% more energy.

  • wewbull@feddit.uk · 1 day ago

    This is more about the “reasoning” aspect of the model, where it outputs a bunch of “thinking” before the actual result. In a lot of cases this easily adds 2–3x onto the number of tokens that need to be generated. This isn’t really useful output; it’s the model getting into a state where it can better respond.
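
    The arithmetic behind that point can be sketched roughly as follows. All the numbers here (joules per token, answer length, the 2.5x thinking multiplier) are illustrative assumptions, not measurements from the article:

    ```python
    # Back-of-envelope sketch: how "thinking" tokens inflate energy use.
    # All constants are hypothetical assumptions for illustration only.

    ENERGY_PER_TOKEN_J = 0.3   # assumed joules per generated token (same per-token efficiency)
    ANSWER_TOKENS = 400        # assumed length of the visible answer

    def total_energy(answer_tokens: int, token_multiplier: float) -> float:
        """Energy for a response whose total tokens = answer_tokens * token_multiplier."""
        total_tokens = answer_tokens * token_multiplier
        return total_tokens * ENERGY_PER_TOKEN_J

    plain = total_energy(ANSWER_TOKENS, 1.0)      # model that answers directly
    reasoning = total_energy(ANSWER_TOKENS, 2.5)  # "thinking" adds ~2-3x tokens

    print(f"plain: {plain:.0f} J, reasoning: {reasoning:.0f} J "
          f"(+{(reasoning / plain - 1) * 100:.0f}%)")
    ```

    Under these made-up numbers, identical per-token efficiency still yields a large gap in total energy, purely because more tokens get generated.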