There's another problem with combining two different cards for Iray rendering: the graphics memory (VRAM) of the two cards is not combined, but gets *capped* at the memory size of the smallest card. So if, say, Notty gets an RTX 2070 with 8GB and adds her old GTX 960 with 2GB to the rig, that effectively limits the size of the scenes she can render on the GPU to only 2GB.
Larger scenes (and it's very easy to get a scene over 2GB: two characters, some props and a bit of complex lighting will do it) will then revert to CPU rendering, which will take days (seriously, days!) to reach the quality Notty currently produces. Daz won't automatically ignore the smaller GPU for larger scenes.
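To make that fallback concrete, here's a toy sketch of the logic in Python. This is just the arithmetic of the behaviour described above, not actual Daz or Iray code, and the scene sizes are made up:

```python
# Toy model of the "capped at the smallest card" behaviour described
# above. Not real Daz/Iray code; just the arithmetic.

def render_device(scene_gb, card_vram_gb):
    """Which device ends up doing the render, given installed GPU VRAM sizes."""
    budget = min(card_vram_gb)  # memory is NOT summed: 8GB + 2GB -> 2GB budget
    return "GPU" if scene_gb <= budget else "CPU (days!)"

print(render_device(1.5, [8, 2]))  # small scene            -> GPU
print(render_device(3.0, [8, 2]))  # two figures plus props -> CPU (days!)
print(render_device(3.0, [8]))     # the 2070 on its own    -> GPU
```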
For scenes smaller than 2GB, she'd maybe gain a 20% increase in render speed, but those scenes already render relatively fast, so the gain would be roughly 17 minutes to render instead of 20. It already takes longer to set up a scene in the first place, so that gain is really marginal; might as well stick with the 20-minute render and use the time to grab a coffee.
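To put a number on that, a quick back-of-the-envelope calculation (the 20% figure is the assumption from above, not a benchmark):

```python
# What a ~20% speed boost buys on a 20-minute render.
base_minutes = 20
speedup = 1.2                  # assumed gain from adding the second card
print(base_minutes / speedup)  # ~16.7 minutes: barely a coffee break saved
```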
A GTX 1060 (either the 6GB or the 3GB version) and an RTX 2070 are a bit closer in memory size, and more scenes would fit even into the 3GB version. The gain in render speed is also a bit bigger, as the 1060 has (depending on the model) roughly 10-25% more CUDA cores than the 960.
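For reference, the core counts behind that comparison, taken from NVIDIA's published specs for these cards:

```python
# CUDA core counts from NVIDIA's published specs.
cores = {"GTX 960": 1024, "GTX 1060 3GB": 1152, "GTX 1060 6GB": 1280}
for card, n in cores.items():
    gain = 100 * (n / cores["GTX 960"] - 1)
    print(f"{card}: {n} cores ({gain:.0f}% over the 960)")
```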
Overall, combining a high-memory card with a low-memory card is like throwing away most of the new card's memory. Especially if the old card was already restricting your work, it's generally not a good idea to combine it with the new card for rendering. What can work out is disabling the old card for rendering and using it purely as your display driver (there is an option in Daz to manually untick a card for rendering, under the Advanced tab of Render Settings if I remember right). That also keeps the desktop and monitors from eating into the new card's memory. It'll require some fiddling with your OS settings and connecting the monitor to the old card instead of the new one, but it's possible.