3D-Daz Daz3d Art - Show Us Your DazSkill


Dark_Sytze

Newbie
Mar 22, 2021
42
243
1. With more available RAM you will get better performance from DAZ across the board, for example in the scene window where you do most of the work with poses, textures, props, and fitting hair and clothing.

2. OptiX Prime Acceleration in my opinion saves 20-25% of rendering time (on my 16-core Ryzen 3950X with 64 GB RAM). Turn it on in the Render Settings tab:

View attachment 1353150
For an i7 (4 or 8 cores) with 16 GB RAM the effect of OptiX should be less noticeable, but if you don't mind giving DAZ all your available RAM (it will consume all of it, believe me!), you can give it a try.

3. With more available RAM you can render almost any complex scene on the CPU. Yes, it is slower than on the GPU, but you won't have any real limitation on textures or models from the GPU running out of VRAM. Don't forget to enable the denoising features if you render on CPU; you will get very good grain suppression even at 600-800 iterations with only ~20-30% reported convergence:

View attachment 1353197
What kind of rendering times do you get with your CPU? I exclusively render on GPU, despite having a decent CPU (i9 9900K) and plenty of RAM. If it doesn't take ages compared to my 3070 I might consider it for some scenes. I hate how NVIDIA reduced the VRAM on the newer cards; my 2080 Ti had a much nicer amount of VRAM available, and 8 GB is quite easy to fill with a simple scene.
 

AlexStone

Member
Aug 29, 2020
487
2,557
An empty scene with a single HD model and some props takes about 30-40 minutes on the CPU. Example:
AliceSmile.jpg

A complex environment with four HD models and a lot of props takes about 2.5 hours on the CPU. Example:

d20 nightT 8.jpg

Usually I render at 1920x1080 in landscape and 3413x1920 in portrait.
The example images were done with no post-work denoising, only DAZ's in-render denoising.

Since I'm rendering in the background anyway, under Wine on Linux, I'm quite happy with this approach (according to the top utility, DAZ uses about 14-15 of the 16 cores and 50-60% of the RAM).
The computer is a 16-core Ryzen 3950X (no boost), 64 GB RAM.
 

Dark_Sytze

Newbie
Mar 22, 2021
42
243
What settings do you use to render? Do you keep quality convergence on, or do you select a certain number of iterations?
I feel like CPU is not the way to go for me then; with my GPU a single HD model with some props won't take more than 5-10 minutes.
With four models, sadly, the issue becomes whether it fits in VRAM.
 

AlexStone

Member
Aug 29, 2020
487
2,557
I'm under no illusions: 16 cores (32 threads) of even the most advanced CPU will never keep up with the ~2,500 stream processors of an RTX 2070, for example.

The main advantage of the CPU is the 'time free' noise reduction and the ability to fit virtually any scene, however complex, into RAM.

I don't focus on the convergence ratio; when rendering with denoising it doesn't tell you anything anyway. On some scenes even 10,000 iterations only show 40-50% on this parameter.

That's why I focus on the number of iterations itself. If the scene is well lit, 600-800 iterations are usually enough; if the composition has a lot of mirrored surfaces and dark corners, it's better to wait for 2,000-3,000 iterations. Beyond that there is no real gain in quality, and honestly it isn't visible to the eye.
 

Dark_Sytze

Newbie
Mar 22, 2021
42
243
Sorry, what do you mean by time free noise reduction?

I agree on the convergence; it's fairly arbitrary, and in many cases it's not necessary to reach 100%.

I generally just render at a higher resolution (2560x1440) with 2000 iterations. For the renders I use in my games I then downsize to 1920x1080, which further improves the quality.
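For reference, the downscale step itself is just something like this, a minimal sketch assuming Pillow (9.1 or newer) with placeholder file names; Lanczos resampling tends to keep edges a bit crisper than plain bicubic:

[CODE=python]
from PIL import Image

# Downscale a 2560x1440 render to 1920x1080 for the game build.
img = Image.open("scene_render_1440p.png")
img.resize((1920, 1080), Image.Resampling.LANCZOS).save("scene_render_1080p.png")
[/CODE]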
 

Virtual Merc

Member
May 7, 2017
296
6,662


View attachment 1353452

I don't mean to nag, it's just that the eye catches on this effect: areas of skin that have more or less the same lighting look completely different, with the hands looking more like plastic. Normally a character's hands are a bit more reddish in colour, as the capillaries are closer to the surface and there are no fatty layers in those areas. Well, unless the hands are totally frozen. ;)

I'd look specifically at the skin shaders; maybe an overlay or one of the SSS Tint or Translucency parameters got lost somewhere.
Protector2.jpg
Any better?
I tweaked the SSS, but the skin itself has blotches and blemishes that are exaggerated in the harsh light, so there's only so much I can do with this setup without affecting the whole scene.
 

AlexStone

Member
Aug 29, 2020
487
2,557
Sorry, what do you mean by time free noise reduction?
CPU cores can do the noise reduction better and faster since they use the multilevel CPU cache.
If you are using the GPU's stream processors and VRAM, the denoising is performed by the CPU cores anyway, but you lose time exchanging the data between the GPU and the CPU cores.
That's the rough picture, without going into a lot of detail.
 

AlexStone

Member
Aug 29, 2020
487
2,557
I generally just render at a higher resolution (2560x1440) with 2000 iterations. For the renders I use in my games I then downsize to 1920x1080, which further improves the quality.
There are two unpleasant things about this approach, which could be summed up as 'let's leave Photoshop to fight the noise instead of us'.

Firstly, the rendering time grows with the square of the linear size of the image. That is, a 1.5x increase in render dimensions gives a 2.25x increase in time for the same number of iterations.

In the case of going from 1920x1080 to 2560x1440, that factor is about 1.78, which is also painful. That is, in the time it takes to do 2000 iterations at 1920x1080, you can only do about 1130 iterations at 2560x1440.
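If you want to sanity-check that arithmetic, here is a tiny Python sketch of the same pixel-count calculation (nothing DAZ-specific, just the ratio of resolutions):

[CODE=python]
# Iterations you can afford at a new resolution in the same wall-clock time,
# assuming time per iteration scales with the number of pixels.
def iteration_budget(base_w, base_h, new_w, new_h, base_iterations):
    pixel_ratio = (new_w * new_h) / (base_w * base_h)
    return base_iterations / pixel_ratio

# 1920x1080 -> 2560x1440: pixel ratio ~1.78, so 2000 iterations become ~1125.
print(iteration_budget(1920, 1080, 2560, 1440, 2000))
[/CODE]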

Secondly, Photoshop knows nothing about the 3D scene, so conventional bicubic interpolation simply averages out the brightness of neighboring pixels and 'blurs' the image. As a result, the render loses sharp lines even where they could, and should, be preserved.

So when I do this kind of noise reduction, it's by using a Depth canvas (these can be rendered via Canvases in the Advanced Render Settings) as a filter mask. Because it's one thing to 'blur' the background, and quite another to 'blur' the main character in focus.

Another approach, not through Canvases, is to render the main character against a completely white background (backdrop 255-255-255) and without any light sources; that gives you a black and white mask. But here you have to watch the intersections of the character with other surfaces; unlike the Depth canvas, there are differences from the full scene (for example, the character may slightly 'sink' into the ground).

On the other hand, such a mask is ready immediately, unlike the Depth canvas.

If you prepare your work in Photoshop in this way, you can suppress noise there too.
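For anyone doing this step outside Photoshop, a minimal Python/Pillow sketch of the masked-blur idea looks like this; the file names are placeholders, the Gaussian blur is only a stand-in for a proper noise filter, and you may need to invert the Depth canvas depending on how it was exported:

[CODE=python]
from PIL import Image, ImageFilter

beauty = Image.open("render_beauty.png").convert("RGB")    # the full render (placeholder name)
mask = Image.open("render_depth_canvas.png").convert("L")  # Depth canvas exported as grayscale

# Crude 'denoise': a light blur standing in for a real noise filter.
smoothed = beauty.filter(ImageFilter.GaussianBlur(radius=1.5))

# Where the mask is white take the smoothed pixels (background);
# where it is black keep the original sharp pixels (character in focus).
result = Image.composite(smoothed, beauty, mask)
result.save("render_masked_denoise.png")
[/CODE]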

There's a good tutorial, if you haven't seen it yet: .
 

Dark_Sytze

Newbie
Mar 22, 2021
42
243
Interesting, I never realized it would take substantially longer to render at higher resolutions; I knew there was an increase but always assumed it was minimal.
In that case I will just render at 1920x1080 from now on, since I rarely get any noise (unless I light a scene improperly).
 

TacoHoleStory

Member
May 11, 2021
128
270
When deciding on the render settings for my game I did extensive tests, rendering at 1080x1920 and 2160x3840 with iterations at 100 and 400 respectively. The render time was basically the same, and the quality of the images was identical once the 4K image was reduced to match. And I mean identical: I would never be able to tell the difference, and if the 4K image was blurred, it must have been incredibly subtle. Maybe there are some lighting situations or render settings that make one preferable over the other, but in my case rendering at a higher resolution just added extra post-work.
 

Seanthiar

Active Member
Jun 18, 2020
563
753
OptiX Prime Acceleration
Is that an old feature? I only found it in an old SickleYield post from 2015 when I googled for it, because I can't find it in the Advanced tab of the Render Settings in my up-to-date version.
 