I believe the difference in the number of CUDA/Tensor cores isn't what justifies the price difference; you are really paying for the additional 3GB of VRAM. In my case, I was using 2x 1070 and sometimes 8GB wasn't enough to fit my entire scene, so I picked up a 2080 Ti.
With Daz 4.11 we still don't have the extra RTX power working, only in the 4.12 beta. Users are reporting around a 40% improvement in render time between 4.11 and 4.12 thanks to that change. I'm at a critical point in my project right now, so I haven't been able to test the beta yet to confirm it.
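(To put that number in perspective, and assuming those reports hold: a render that takes 60 minutes in 4.11 would drop to roughly 36 minutes in 4.12.)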
And it's true, the heat from the 2080 Ti really bothers me. I have a Galax dual-fan model. There are 7 fans in my case, 3 cheap ones and 4 Riotoro, and I set the GPU fans to 92% before rendering, yet the card still sits at around 84°C. If I could, I would get a model with a water cooler.
On temperatures, the brand and assembly of the card can also make a difference.
In one of my computers I have two 1080 Ti cards. The MSI Armor model reaches 84°C within seconds at full load and can hit 90°C (to be fair, it's the card sitting on top of the other, so there's little clearance from the card below, which may also play a part). The other card, a Palit GameRock, stays comfortably at 64°C even after hours at full load. The difference is that the Palit is about twice as thick as the MSI because it has a huge heatsink.
(Note: it would be impossible to fit two of those Palit cards in my computer connected directly; they're so thick that if I put one in the first PCI-E slot, the second would be blocked. For those who want to run several GPUs well cooled and with space between them, there are solutions for that.)
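If anyone wants to keep an eye on temperatures and fan speeds while a render runs, here is a minimal sketch that just polls nvidia-smi (assuming an NVIDIA card with nvidia-smi on the PATH; the 10-second interval is an arbitrary choice):

```python
# Minimal sketch: poll GPU temperature, fan speed and memory usage
# with nvidia-smi while a render is running.
import subprocess
import time

def gpu_stats():
    # Query temperature (°C), fan speed (%) and memory used (MiB) for every GPU.
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=index,name,temperature.gpu,fan.speed,memory.used",
        "--format=csv,noheader,nounits",
    ], text=True)
    return [line.split(", ") for line in out.strip().splitlines()]

if __name__ == "__main__":
    while True:
        for idx, name, temp, fan, mem in gpu_stats():
            print(f"GPU {idx} ({name}): {temp} C, fan {fan} %, {mem} MiB used")
        time.sleep(10)  # poll every 10 seconds (arbitrary)
```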
The only thing you need to check for Daz is CUDA cores: more cores, faster renders, that's all.
That's right! But VRAM is just as important because if the scene can't be loaded into it, it won't be GPU-rendered, so the more, the better.
Although the models with the most VRAM are the highest-end ones, so they will also be the ones with the most CUDA cores.
But let's imagine for a moment that the new 20xx-series models are optimized enough that a 2080 Super (with its 8GB) renders faster than a 1080 Ti (with its 11GB)... Which would I prefer? Well, I would keep the 1080 Ti, because I wouldn't have to worry about whether the scene fits into its 11GB, and because I render at night and don't care if it takes a little longer.
(Either that or a 2080 Ti... but having had 11GB, after previously owning a 6GB 1060, I'm not going back; nothing under 11GB for me.)
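For anyone wondering whether a scene actually fits in VRAM, one rough way to check from outside Daz is to watch memory headroom while the scene loads and the render starts. A minimal sketch, again assuming nvidia-smi is on the PATH; the 500 MiB threshold is just an illustrative number, not a Daz/Iray rule:

```python
# Minimal sketch: check how close each GPU is to running out of VRAM.
# If the scene doesn't fit, Iray drops the GPU and the render falls back
# to CPU, so low headroom during scene load is an early warning.
import subprocess

def vram_headroom():
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=index,memory.used,memory.total",
        "--format=csv,noheader,nounits",
    ], text=True)
    for line in out.strip().splitlines():
        idx, used, total = (int(x) for x in line.split(", "))
        free = total - used
        # 500 MiB is an arbitrary illustrative threshold.
        status = "OK" if free > 500 else "LOW - scene may not fit, CPU fallback likely"
        print(f"GPU {idx}: {used}/{total} MiB used, {free} MiB free -> {status}")

if __name__ == "__main__":
    vram_headroom()
```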