really flashy guns and there is a very intricate damage system that runs at least partially on the GPU.
Short opinion: no, CPUs can do that fine (possibly better), and it’s a tiny corner of game logic.
Long opinion: Intersecting projectile paths with geometry gains nothing from being moved from the CPU to the GPU unless you’re dealing with a ridiculous number of projectiles every single frame. In most games this is less than 1% of CPU time, and moving it to the GPU will probably reduce overall performance due to the latency costs (…but a lot of modern engines already have awful frame latency, so it might fit right in).
You would only do this if you have been told by higher-ups that you have to, OR if you have a really unusual and new game design (thousands of new projectile paths every frame, i.e. hundreds of thousands of bullets per second). Even a detailed multi-layer enemy model with vital components is just a few extra traces (one is sketched below); using a GPU to calculate that would make the job harder for the engine dev for no gain.
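For a sense of scale, here’s roughly what a single trace step looks like on the CPU: a standard Möller–Trumbore ray/triangle test. The names are mine for illustration, not from any particular engine.

```cpp
// Minimal sketch of one hitscan trace step: Möller–Trumbore ray/triangle
// intersection. Per triangle it's a handful of cross and dot products,
// which is why a few extra traces per shot are cheap on a CPU.
#include <optional>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Hit { float t, u, v; };  // distance along the ray + barycentrics of the hit point

// Intersect the ray origin + t*dir with triangle (v0, v1, v2).
std::optional<Hit> traceTriangle(const Vec3& origin, const Vec3& dir,
                                 const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    const float kEps = 1e-7f;
    Vec3 e1 = v1 - v0, e2 = v2 - v0;
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -kEps && det < kEps) return {};  // ray parallel to triangle plane
    float invDet = 1.0f / det;
    Vec3 s = origin - v0;
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return {};       // outside the triangle
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return {};   // outside the triangle
    float t = dot(e2, q) * invDet;
    if (t < kEps) return {};                   // hit is behind the ray origin
    return Hit{t, u, v};
}
```

Each test is a few dozen float ops, so even a few thousand of them per frame (against a BVH-culled candidate set) is noise on a modern core.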
Fun answer: check out CNlohr’s noeuclid. Sadly there’s no Windows build (I tried cross-compiling but ended up in dependency hell), but it still compiles and runs under Linux. Physics are on the GPU and the world geometry is very non-traditional. https://github.com/cnlohr/noeuclid
Ooh, thank you for the link.
It sounds like they’re assigning materials based on the pixels of a texture map, rather than making each mesh in a model a different material, i.e. you paint materials onto a character rather than selecting chunks of the character and assigning them.
I suspect this either won’t be noticeable to players at all, or will be a very minor improvement (at best). It’s not something worth going for in exchange for losing compatibility with other GPUs. It will require a different work pipeline for the 3D modellers (they have to paint materials on now rather than assign them per mesh), but that’s neither here nor there; it might be easier for them or it might be hell-awful depending on the tooling.
This particular sentence upsets me:
Uhuh. You’re not selling me on your game company.
“Before” ray tracing: the technology that has been around for decades, and that you could have used, on a CPU or a GPU, for this very material-sensing task without the players noticing for around 20 years. Interpolate UVs across the colliding triangle and sample a texture.
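A rough sketch of that decades-old approach, assuming a painted material-ID texture; every name here is hypothetical, not any real engine’s API.

```cpp
// Given the hit triangle's vertex UVs and the barycentric coordinates of the
// hit point (which the ray/triangle test already produced), interpolate the
// UV and do a nearest-neighbour fetch from a painted material-ID map.
#include <algorithm>
#include <cstdint>
#include <vector>

struct UV { float u, v; };

struct MaterialMap {           // one material ID per texel, painted by the artist
    int width, height;
    std::vector<uint8_t> ids;  // row-major texel data
};

uint8_t materialAtHit(const MaterialMap& map,
                      UV uv0, UV uv1, UV uv2,  // UVs at the triangle's vertices
                      float bu, float bv) {    // barycentrics of the hit point
    float bw = 1.0f - bu - bv;
    float u = bw * uv0.u + bu * uv1.u + bv * uv2.u;
    float v = bw * uv0.v + bu * uv1.v + bv * uv2.v;
    int x = std::clamp(int(u * map.width),  0, map.width  - 1);
    int y = std::clamp(int(v * map.height), 0, map.height - 1);
    return map.ids[size_t(y) * map.width + x];
}
```

No RT hardware required; it’s one texture fetch on top of the trace you were already doing.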
I suspect the “more immersion” and “direct feedback” are veils over the real reasoning:
No-one sane implements Nvidia- or AMD- (or anyone-else-)exclusive libraries in their games unless they’re paid to do it. A game dev that cares about its players will make their game run well on all brands and flavours of graphics card.
At the end of the day this hurts consumers. If games run competitively on all GPU brands, then you have more choice and card companies have more reason to compete. Whatever Nvidia is paying the game devs to do this must be smaller than what it earns back from consumers buying its product instead of a competitor’s.