

25 W idle * 1 year = 219 kWh
219 kWh * 0.21 EUR/kWh = 45.99 EUR
I’d say that’s still a significant amount, even if you subtract from that amount the time you use the computer.
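The same math as a quick script, in case anyone wants to plug in their own idle draw and tariff (the numbers are just the ones above, not measurements):

```python
# Yearly cost of 25 W idle draw at 0.21 EUR/kWh.
idle_watts = 25
hours_per_year = 24 * 365                    # 8760 h
kwh = idle_watts * hours_per_year / 1000     # 219 kWh
cost_eur = kwh * 0.21                        # ~46 EUR
print(f"{kwh:.0f} kWh/year -> {cost_eur:.2f} EUR/year")
```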
Include computercraft and you can set up a connection back to the real world!
Dionaea muscipula
I was in a building that was rebuilt after a fighter jet crashed into the one before it…
Or about half a year if we’re only counting the time during which I’ve been alive.
13.787 ± 0.020 billion years
Why does this look like another bot post?
The simulation terminates.
I’m curious, how do you run the 4x3090s? The FE cards alone would be 4x3 = 12 PCIe slots and 4x16 = 64 PCIe lanes… Did you NVLink them? What about transient power spikes? Any clock or even VBIOS mods?
I’m also on p2p 2x3090 with 48GB of VRAM. Honestly it’s a nice experience, but still somewhat limiting…
I’m currently running deepseek-r1-distill-llama-70b-awq with the aphrodite engine, though the same applies to llama-3.3-70b. It works great and is way faster than ollama, for example. But my max context is around 22k tokens. More VRAM would allow me more context, and even more VRAM would allow for speculative decoding, CUDA graphs, …
Maybe I’ll drop down to a 35b model to get more context and a bit more speed, but I don’t think I can justify the possible decrease in answer quality.
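For a rough sense of why VRAM caps the context: the KV cache grows linearly with context length. A minimal sketch, assuming the published Llama-70B architecture numbers (80 layers, 8 KV heads via GQA, head dim 128) and an FP16 cache; engine overhead for activations, CUDA graphs and the scheduler is not counted, so real headroom is smaller:

```python
# Back-of-the-envelope KV-cache sizing for a Llama-70B-class model
# (e.g. llama-3.3-70b or the R1 distill).
LAYERS, KV_HEADS, HEAD_DIM, BYTES_FP16 = 80, 8, 128, 2

def kv_cache_gib(tokens: int) -> float:
    """K and V entries across all layers for `tokens` of context, in GiB."""
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_FP16 * tokens / 2**30

for ctx in (8_000, 22_000, 32_000, 64_000):
    print(f"{ctx:>6} tokens -> {kv_cache_gib(ctx):5.2f} GiB of KV cache")
```

With the 4-bit AWQ weights already taking up most of the 48 GB, the few GiB that are left line up with a context ceiling somewhere in the low tens of thousands of tokens.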
I’m running such a setup!
This is my NixOS config, though feel free to ignore it, since it’s optimized for me and not for others.
How did I achieve the setup you described?
NixOS just sits on your face. All the stuff in front of you is awesome. Though you might suffocate at any moment given the options. Oh and sticking your nose too deep into things might get you a broken nose.
Thanks for the writeup! So far I’ve been using ollama, but I’m always open to trying out alternatives. To be honest, it seems I was oblivious to their existence.
Your post suggests that the same models with the same parameters generate different results when run on different backends?
I can see how the backend would have an influence on handling concurrent API calls, RAM/VRAM efficiency, supported hardware/drivers and general speed.
But going as far as having different context windows and quality-degradation issues is news to me.
Is there an inherent benefit to using NVLink? Should I specifically try out Aphrodite over the other recommendations when I have 2x 3090 with NVLink available?
I’m not sure if I can donate to LixOS yet…