Daz3D hardware question.

Cyan

Member
Jul 25, 2017
126
551
I've been getting a lot of conflicting information when it comes to the best rendering hardware for Daz3D (even on this very site). So if anyone could throw me a bone, I'd appreciate it.

What kind of hardware is going to provide the fastest renders?

High single-threaded performance? 6600K/6700K/7700K/8700K?
High multi-threaded performance? Xeon/7900X/7980XE/Threadripper 1950X?

How much does GPU help in render times?

Nvidia 9/10 series, Quadro, or AMD Vega?

Do multiple video cards affect performance? (Inside/Outside of SLI/Crossfire)

RAM size and bandwidth?

Hard drive bandwidth? Is an SSD or NVMe drive specifically recommended?
 

Bip

Active Member
Donor
May 4, 2017
734
2,093
SLI disabled. SLI can significantly slow down Iray.
ECC disabled. Same as SLI: a GPU's ECC memory can significantly slow down Iray.
Ideally, render with a graphics card that is not used for display: no monitor connected to it and no desktop extended onto it. That avoids the driver's runtime restrictions on display GPUs.

The graphics card's memory must be able to hold the whole scene. If it can't, the GPU will not be used at all, and you do not want to render on the CPU, believe me.
If you have multiple GPUs, each card must be able to fit the whole scene.
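One practical way to check this (not mentioned in the thread, but a common approach) is to watch per-card VRAM while a scene is loaded, using Nvidia's real `nvidia-smi` tool. A minimal parsing sketch, assuming you feed it the output of `nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits` (one "used, total" line in MiB per GPU):

```python
def vram_headroom(csv_text):
    """Parse nvidia-smi CSV output into (used, total, free) MiB per GPU."""
    result = []
    for line in csv_text.strip().splitlines():
        used, total = (int(x) for x in line.split(","))
        result.append((used, total, total - used))
    return result

# Example with captured output from a hypothetical two-card box:
sample = "1234, 11264\n560, 11264"
for i, (used, total, free) in enumerate(vram_headroom(sample)):
    print(f"GPU {i}: {free} MiB free of {total}")
```

If any card's free VRAM is smaller than the loaded scene needs, Iray drops that card (or the whole render) to CPU.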

Multiple GPUs work well (remember, SLI disabled!). You should see a 20-30% gain for each card added.
So I think the best setup (for a single computer) is a decent multi-threaded CPU for working, a decent graphics card for the display, and the best graphics card(s) you can afford for rendering.

And now, the question... 1 Quadro or 2 Titan XPs? 1 Titan XP or 2 GTX 1080 Tis? That, I don't know. Personally I would prefer one better card over two less powerful ones, but that's a personal preference.

One more thing: Iray uses CUDA cores or the CPU. So even the best AMD card will not help; the CPU will do the rendering instead.
 
Last edited:

Morpheus668

Active Member
Jun 22, 2017
873
2,780
Yup, what Bip said.
A good multi-threaded CPU (hopefully by end of year I'll grab an 8700, as I don't have $1,000+ for a CPU).
The more system RAM the better (currently at 16 GB, going to double it on the next build, slowly, thanks to the extortionate memory prices at the moment).
A single uber Nvidia GPU used only for rendering (an AMD GPU will just sit there being ignored, trust me, I know).
 

OhWee

Forum Fanatic
Modder
Game Developer
Jun 17, 2017
5,610
28,219
It all comes down to budget.

Also, it depends on whether you are planning on going the Iray route (preferred by a number of Daz PAs) or 3Delight.

The 1080 Ti is an awesome choice, due to its 11 GB of onboard memory and its speed. You can pay almost twice as much for the Titan XP, but you won't see that much of a performance boost (you'd be better off with dual 1080 Tis). Hopefully the Titan XPs are built to better withstand the abuse of rendering than the regular 1080 Tis, but I haven't come across any real-world evidence for this yet.

If you are running Windows 10, keep in mind that you'll lose about 18% of your VRAM to the OS when using Iray. So the bigger cards have a larger amount of VRAM 'lost' to the OS. People have been complaining about this to Microsoft (a fixed amount would make more sense), but so far Microsoft has been blowing it off...
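The usable-VRAM math above is simple enough to sketch. Note the ~18% reservation is the poster's figure for Windows 10 + Iray, not an official number:

```python
OS_RESERVE = 0.18  # assumed fraction of VRAM lost to the OS (poster's estimate)

def usable_vram_gb(card_vram_gb, reserve=OS_RESERVE):
    """VRAM actually available to Iray after the OS reservation."""
    return card_vram_gb * (1 - reserve)

print(round(usable_vram_gb(11), 2))  # GTX 1080 Ti -> about 9 GB usable
print(round(usable_vram_gb(8), 2))   # GTX 1080    -> about 6.5 GB usable
```

This matches the later remark in the thread about an 8 GB card having "6.4 or so usable".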

As for the CPU, AMD Threadripper is awesome for the value, but if you can afford it, the i9-7980XE is a nice choice. Don't sweat running GPU cards at x8 vs. x16: multiple benchmarks have shown that x16 currently only gains you a couple of percentage points over x8, so the gain from multiple cards easily offsets that minor loss.

Threadripper has 60 PCIe lanes available (so x16/x8/x16/x8), but the i9-7980XE can do just fine with its 44 PCIe lanes (four cards at x8 each, with maybe one at x16).
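The lane budgeting above is just addition; a toy sketch (it ignores lanes a board may reserve for chipset links or NVMe, and the slot widths are assumptions about how a motherboard might split the CPU's lanes):

```python
def fits(cpu_lanes, slot_widths):
    """True if the requested GPU slot widths fit within the CPU's lane budget."""
    return sum(slot_widths) <= cpu_lanes

print(fits(60, [16, 8, 16, 8]))   # Threadripper split from above: 48 <= 60
print(fits(44, [16, 8, 8, 8]))    # i9-7980XE: one x16 plus three x8 = 40 <= 44
print(fits(44, [16, 16, 16, 16])) # four full x16 slots will not fit on 44 lanes
```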

The trick is to find a motherboard that can support four double-width graphics cards, especially on X299. And pay attention to any headers next to a PCIe slot, which may become unreachable if a double-width GPU sits in that slot.

Two GPUs gets you just shy of a 50% reduction in render time. Three cards will drop it to almost a third, and four cards to about a quarter, AS LONG AS the scene fits inside the GPU memory of EACH card (the memory doesn't 'stack'; the scene has to fit into every card involved). So it's diminishing returns after the second GPU.
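The scaling described above is essentially time divided by card count; a quick sketch (the 2-minute baseline is a made-up example, and real scaling falls a bit short of this ideal):

```python
def render_time(baseline_minutes, n_gpus):
    """Ideal render time when work splits evenly across n GPUs."""
    return baseline_minutes / n_gpus

base = 2.0  # hypothetical single-GPU render time in minutes
for n in (1, 2, 3, 4):
    saved = 100 * (1 - render_time(base, n) / base)
    print(f"{n} GPU(s): {render_time(base, n):.2f} min ({saved:.0f}% time saved)")
```

The "diminishing returns" remark falls out of the math: card 2 saves a full minute here, while card 4 only saves another 10 seconds.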

The high end 'pro' GPU cards are interesting, but of course cost serious bucks...

Having a 'spare' GPU to run the monitor while you are rendering is good advice. The onboard IGP (if your CPU has one) can do this too, as long as your BIOS allows it (most should, but it can't hurt to double-check).

The number of cores in your CPU normally won't really come into play UNLESS the scene can't fit inside of graphics memory, at which point the more cores the better! Or if you are using 3Delight, which relies on CPU cores.

A good SSD or NVMe drive is probably a good choice for the OS disk, but also get a large HDD for storage; 4 TB+ HDDs are pretty cheap these days. 64 GB of memory should be more than enough (some would argue 32 GB is plenty). I'm currently running an 8 GB ramdisk with a few of my Daz cache files in it, which seems to give a small but noticeable performance boost...

Finally, SERIOUSLY look at water cooling options for your CPU and GPU. Rendering is pretty hard on system components, so it's worth the extra protection. Also, overclocking is NOT recommended on rendering systems.

There, hopefully that should get this discussion rolling.
 
Last edited:
  • Like
Reactions: Ōba and penecultor

Morpheus668

Active Member
Jun 22, 2017
873
2,780
Well yeah, I presumed as much, hence the ;)
They don't even make 4 GB HDDs anymore, do they?
Well, they do, they just call them thumbdrives lol
 

Bip

Active Member
Donor
May 4, 2017
734
2,093
...
Two GPUs gets you just shy of a 50% reduction in render time. Three cards will drop it to almost a third, and four cards to about a quarter
...
You're sure of that? I've read that the gain was more about 20-30% per card added.
 

OhWee

Forum Fanatic
Modder
Game Developer
Jun 17, 2017
5,610
28,219
You're sure of that? I've read that the gain was more about 20-30% per card added.
OK, these benchmarks are for Octane Render, but the commentary I've read on the Daz forum also supports these numbers:


The above also talks about x8 vs. x16. The article is slightly dated (April 28th, 2016), but not overly so.
 

Cyan

Member
Jul 25, 2017
126
551
Thank you very much for taking the time out to answer my noob questions.

You're sure of that? I've read that the gain was more about 20-30% per card added.
That's more or less what I was referring to as conflicting information. Daz3D has apparently changed quite a bit over the years. Some of the information I had found showed that adding another GPU gave little to no improvement in render times (diminishing returns), whereas other information showed results similar to this.

I believe I understand the big picture though.

I have two more questions now, if you all will indulge my ignorance even further.

Say, for example, I had a computer with 4x Titan XPs or 4x 1080 Tis. How much improvement in render time would I get from a cheap, old CPU with plenty of PCIe lanes for the GPUs, vs. a high-end, current i9/Xeon CPU?

GPU is clearly king from what's been said here (and typically better CPUs have more PCIe lanes), but how much will the CPU affect render times, all that being said? An insignificant amount compared to the GPU? Is it similar to cryptocurrency mining, where you only need a CPU to facilitate the GPUs?

My other question is about what Bip mentioned regarding Iray. I'm not entirely familiar with it, to be honest, but how does all this information carry over to render engines other than Iray? Is Iray king? Does it mostly hold true for any other engines I might use?
 

OhWee

Forum Fanatic
Modder
Game Developer
Jun 17, 2017
5,610
28,219
Hey Cyan!

So, if you are using Iray in Daz, it has the advantage of being much faster than CPU-only renders (if you have Nvidia cards). That's the main advantage.

Stylistically, some prefer 3Delight to Iray, others prefer the Iray look. That pretty much comes down to taste and what you are looking for in your scenes.

It's been mentioned in one post on the Daz forum that a 1080 Ti is about 3x faster for an Iray render than, say, a 16-core/32-thread Threadripper. Not a controlled benchmark, just a 'gut' test, but it gives you an idea of why Iray is such a big deal, and why so many PAs are making sure their products include Iray-optimized textures.

As to whether you need the 'high end' CPUs: as long as you can keep the rendering on the GPUs, the extra cores on your CPU won't give you much of a bump. It's when you are working on complex scenes that are too much for the GPUs to handle (say, around a dozen or more characters in a diner...) and rendering drops to CPU-only that you'll appreciate having the extra cores.

Of course, finding a 'midrange' CPU with a lot of PCIe lanes is important. On the AMD side, you could buy a cheaper Threadripper (say, the 8-core one), which STILL has quad-channel memory support and 60 available PCIe lanes. I'm less acquainted with the Intel ecosystem, but you could get lower-core-count versions of the i9s too, although as I understand it, as you move down the Intel product tree you'll start seeing lower PCIe lane counts.

Threadripper looks a lot more attractive at the low end of the TR family, because AMD doesn't 'gimp' the platform at lower core counts. So you can buy the cheaper version now and maybe upgrade to the 'full version' later if budget is an issue.
 

Cyan

Member
Jul 25, 2017
126
551
Thank you very much for the information!

My partner and I are making a game that will be released in a few weeks(ish). I hope you'll check it out!
 

OhWee

Forum Fanatic
Modder
Game Developer
Jun 17, 2017
5,610
28,219
Just a note:

The 16-core Threadripper dropped in price in a few places this week (Newegg, Amazon, etc.), to below $900 US. This makes it an even more compelling choice for a workstation than before, if you are on a bit of a budget. It also makes the argument for the 8- and 12-core versions a bit harder, since the 16-core is a bit cheaper now...

The 18-core i9-7980XE is of course awesome, but pricey (over $2,000 US currently). Plus it generates more heat than Threadripper, so keep that in mind when pondering a cooling solution for it. It has fewer PCIe lanes than Threadripper (44), but as I mentioned before, x8 vs. x16 isn't a big deal currently if you are looking at that 4-GPU setup. You'll need to find an ideal mobo for the 4-GPU config, of course...

On the EPYC side, there are some interesting choices at very compelling price points (especially in single-socket configs), but EPYC CPUs are slower than Threadripper, so Threadripper's extra speed partially offsets the extra cores you might get in an EPYC build. Plus there's the challenge of finding a decent multi-GPU motherboard for EPYC, adapting EPYC to a workstation environment, and other considerations. I'd love to hear some real-world experiences from people using EPYC in a render-workstation environment.

There are also the Xeons. Again, I'd be interested to hear about real-life experiences using dual Xeons, etc., in a GPU workstation environment.

If you are working with 3Delight in Daz, you are going to want lots of fast cores. Some products are 3Delight-only (such as some cool gel lights I picked up recently; they don't work the same in Iray, no gobo effect), so there may be times when you need to go the 3Delight route even if you are exclusively Iray otherwise...

BTW, here's the thread on the Daz forums where people have been sharing their Iray render times using the same scene (with a link to the benchmark scene should you be curious). This can give you quite a bit of insight as to what to expect using Iray with Multi GPU, various GPUs, CPU only, when GPU + CPU makes sense, etc.
 
  • Like
Reactions: ChocolateDouchebag

Cyan

Member
Jul 25, 2017
126
551
Well, Catch-22: if I'm going to fork over the cash for 4x high-end GPUs (since that benefits rendering more than a CPU would), it's not much of a stretch to drop another grand on a CPU that far outperforms Threadripper (particularly when overclocked).

Just from looking at some of the results, in pure Iray rendering it doesn't look like there would be any noticeable difference between Threadripper and the 7980XE, for example, assuming I was also using 4x Titans/1080 Tis.

That's an $8-10 thousand system though, and not exactly something I was going to aim for immediately. Maybe an 8700K with dual 1080 Tis would be better, though that removes any possibility of more GPUs (but at less than half the original price point, it's hard to complain).
 

Bip

Active Member
Donor
May 4, 2017
734
2,093
I thought about it a few months ago, just before buying a big set of new sails and boards for windsurfing x'D

Firstly, I'm going to look for Hackintosh-compatible components. OS X is very stable (I have never had a crash in 10 years, whether with a Mac or a Hack) and is much less resource-intensive than Windows. So, for equivalent power, I can buy less powerful components... so cheaper.

Secondly, there is no real need for 18 Titan XP GPUs. With 1 or 2 GTX 1080 Tis, I'll spend more time creating the scene than rendering it. And I can work during the day and use a Daz3D batch render to render the scenes overnight while I sleep.

All that to say, your idea of an 8700K with 2 GTX 1080 Tis is really not ridiculous ;) Just do not skimp on cooling!
 
  • Like
Reactions: OhWee

OhWee

Forum Fanatic
Modder
Game Developer
Jun 17, 2017
5,610
28,219
I tend to agree with Bip. After the first two 1080 Tis, the improvement from adding a third or fourth really isn't all that significant. And the extra CPU cores really only come into play for complex scenes that can't fit in a 1080 Ti, so in most cases for Iray, those extra cores may benefit you in multitasking but not in rendering. A good 8-core CPU with two 1080 Tis is an excellent setup for a lot of situations. It all comes down to budget and bang for buck.

If you go the 3Delight route, the calculus is entirely different - more CPU cores good, faster CPU cores good. External rendering solutions (Octane Render, etc.) may change this calculus a bit, but you get the idea.

If you have an unlimited budget, by all means go hog wild, but if you want to save some of those pennies for other things, yeah things like a 16 core Threadripper for less than $900 with two 1080 Ti's looks VERY attractive...
 

Bip

Active Member
Donor
May 4, 2017
734
2,093
...
And the extra CPU cores really only come into play for complex scenes that can't fit in a 1080Ti,
...
Speaking of that, do you know if there's a way to see how much memory a scene needs?
 

Cyan

Member
Jul 25, 2017
126
551
And the extra CPU cores really only come into play for complex scenes that can't fit in a 1080Ti, so in most cases for Iray anyways, those extra cores may benefit you in multitasking but not in rendering.
Are you saying a Quadro with ~24 GB of VRAM would technically be better for rendering (assuming the same processing power/CUDA cores)? Also assuming you'd have an enormous scene requiring over 11 GB of VRAM.

As long as we're making up systems, might as well go for a server setup with 4x Xeon 8180s and 8x Quadro P6000s. That's only like a hundred-thousand-dollar setup lol
 

OhWee

Forum Fanatic
Modder
Game Developer
Jun 17, 2017
5,610
28,219
There aren't any 24 GB Quadros in the Daz Iray benchmark thread (at least not that I found); I only found the 4 GB K500M ones, so I don't have any suggestions as to how well they might do.

There is a quad-Xeon setup with posted benchmarks there though, on page 8:
My setup:

4 x Xeon E5-4650 (64 cores in total)

128gb DDR3 RAM @ 1600 MHz (ECC)

2 X Quadro K6000

Time: 17 seconds.
17 seconds is quite fast for this test, and beats even 4 1080 Tis... that is impressive...

This is from page 23:
1 x GTX1080TI:

2017-10-15 21:02:33.624 Finished Rendering
2017-10-15 21:02:33.674 Total Rendering Time: 1 minutes 57.85 seconds

1 x GTX 1080TI + 16 Core Threadripper

2017-10-15 21:04:56.331 Finished Rendering
2017-10-15 21:04:56.410 Total Rendering Time: 1 minutes 39.59 seconds

2 x GTX1080TI:

2017-10-15 21:06:25.664 Finished Rendering
2017-10-15 21:06:25.741 Total Rendering Time: 59.96 seconds

2 x GTX 1080TI + 16 Core Threadripper

2017-10-15 21:07:58.279 Finished Rendering
2017-10-15 21:07:58.333 Total Rendering Time: 56.82 seconds

3 x GTX1080TI:

2017-10-15 21:09:03.950 Finished Rendering
2017-10-15 21:09:04.098 Total Rendering Time: 42.14 seconds

3 x GTX 1080TI + 16 Core Threadripper

2017-10-15 21:10:08.587 Finished Rendering
2017-10-15 21:10:08.667 Total Rendering Time: 41.17

4 x GTX1080TI:

2017-10-15 21:11:20.368 Finished Rendering
2017-10-15 21:11:20.465 Total Rendering Time: 33.1 seconds

4 x GTX 1080TI + 16 Core Threadripper

2017-10-15 21:12:11.197 Finished Rendering
2017-10-15 21:12:11.269 Total Rendering Time: 33.8 seconds

4 x GTX 1080TI + 16 Core Threadripper WITHOUT OptiX

2017-10-15 21:13:31.895 Finished Rendering
2017-10-15 21:13:32.004 Total Rendering Time: 52.50 seconds



WOW... the result without OptiX is really bad.

What can I say... it seems drzap is totally right. The more GPUs you use, the less important the CPU becomes. However, I could imagine a scenario in which somebody is building up a new render rig: at the beginning, with only one GPU installed, the Threadripper is a good help. Afterwards it becomes quite useless.
BTW, for comparison, my dual mobile 1080s clock in at around 1.25 on this test.
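Those quoted GPU-only times work out to near-linear scaling, which is worth checking explicitly (1 min 57.85 s = 117.85 s for the single-card run):

```python
# Speedup and scaling efficiency from the GPU-only benchmark times above (seconds).
times = {1: 117.85, 2: 59.96, 3: 42.14, 4: 33.1}
base = times[1]
for n, t in sorted(times.items()):
    print(f"{n}x 1080 Ti: {base / t:.2f}x speedup, "
          f"{100 * base / (n * t):.0f}% scaling efficiency")
```

So card two scales at roughly 98% efficiency and card four still around 89%, noticeably better than the 20-30%-per-card figure mentioned earlier in the thread.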

And as to how a P6000 Quadro stacks up against a 1080 Ti, I don't have any direct comparisons, but here's what videocardbenchmarks has to say re: the Passmark benchmark:


I've exceeded 8 GB (6.4 or so usable) on my 1080s and been kicked to CPU-only a few times in scenes I wouldn't consider all that complex. Each character you add to a scene really bumps the needed rendering memory, so in one case I had to drop a scene from 9 to 7 characters (doing a second pass with the rest of the characters) just to get it to fit in each of my 1080s. And there was another scene that didn't look too complex on the surface, with only a couple of characters, that kept going CPU-only on me too. So yeah, this can be a real issue.

So while exceeding even a Titan XP's memory might be rarer, I can see it happening on some of the more ambitious renders... in which case those P6000 Quadros look a bit more appealing...

Of course, for the price of one P6000, you could probably build a quad-1080 Ti Threadripper system (liquid cooling may push you over the top on that budget)...