A GTX 1080 Ti has more CUDA cores (3584) than an RTX 2070 (2304). The Ti versions have more CUDA cores than the non-Ti versions (the 2080 Ti, for example, has 4352). So with a 2070 you may get a better card for gaming, but not for rendering. You can roughly convert CUDA core counts into speed: a 1080 Ti has about 1.5x the cores of a 2070, so expect it to render around 50% faster. So if the card is for rendering only, look at the CUDA cores.
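As a rough illustration, here's the back-of-the-envelope math I mean (this assumes render speed scales linearly with core count, which real benchmarks won't match exactly, since architecture matters too):

```python
# Rough comparison of render speed by CUDA core count.
# Assumes speed scales linearly with cores -- a simplification,
# since per-core performance also differs between architectures.
cards = {
    "GTX 1080 Ti": 3584,
    "RTX 2070": 2304,
    "RTX 2080 Ti": 4352,
}

baseline = cards["RTX 2070"]
for name, cores in cards.items():
    print(f"{name}: {cores} cores, ~{cores / baseline:.2f}x a 2070")
# GTX 1080 Ti comes out around 1.56x a 2070 by this measure.
```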
I'm no expert here and everything I'm saying could be bullshit (there are plenty of devs who know graphics cards better than I do), but this is more or less what I've seen first-hand rendering with 3 different cards over the last year, plus what I've read/heard around.
The CPU helps with render initialization (for example, a better CPU means each frame of an animation initializes faster), but once rendering starts, practically everything runs on the CUDA cores.
You also need a lot of RAM. A scene with a few characters and an environment can take 12-13 GB of RAM, and that roughly doubles during rendering. (I'm rendering a fairly simple scene right now, with only DAZ and Google open, and I'm using 27 GB of RAM.) I usually work on several scenes at the same time, so it's common to have 30-45 GB of RAM in use. If you don't have enough RAM, the overflow goes to virtual memory, which means HD speed, and that's not good, so you can't overlook RAM either.
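If you want a quick sanity check before buying RAM, this is the kind of napkin math I'd do (the doubling-during-render factor is just what I've observed, not a hard rule, and the function and its parameters here are made up for illustration):

```python
# Napkin math for RAM needs, based on the observation that a scene's
# footprint roughly doubles while it renders. Parameter names and the
# OS overhead figure are my own assumptions, not measured values.
def estimate_ram_gb(scene_gb, scenes_open=1, render_multiplier=2.0, os_overhead_gb=4):
    rendering = scene_gb * render_multiplier   # the scene being rendered
    idle = scene_gb * (scenes_open - 1)        # other scenes just sitting in memory
    return rendering + idle + os_overhead_gb

# A 13 GB scene rendering while one more scene of the same size is open:
print(estimate_ram_gb(13, scenes_open=2))  # ~43 GB, in line with the 30-45 GB range above
```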
But in the end it comes down to what kind of rendering you're after. Rendering sprites on a transparent layer, or animations with a transparent layer, needs far lower specs than fullscreen scenarios.