The biggest issue with using the in-built denoiser is that the denoised render is the only copy you get. If you render the 'noisy' version instead, you can use an external denoiser and blend/layer-mask both versions to get the best of both, as in the sketch below. With the in-built denoiser, you're getting rid of the noise, but losing the skin/hair/etc. detail with it. That's a pretty huge downside, imo.
There's no misconception to be had; that's the inherent science of denoising. In the most basic terms, denoising blurs nearby pixels together (gaussian blurring, typically) to both remove noise and blend in with the rest of the image. If you're blurring images, what's going to happen? A bit of an extreme example:
[attached image: extreme gaussian blur example]
What happens if you add small amounts of gaussian blur to a render? Gradual detail loss.
[attached image: render showing gradual detail loss from small amounts of blur]
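You can reproduce the effect yourself with a quick Pillow sketch (the radii here are arbitrary):

```python
# Quick demo of gradual detail loss from gaussian blurring.
# Even small radii erase pore and hair detail in a render.
from PIL import Image, ImageFilter

render = Image.open("render.png")
for radius in (0.5, 1.0, 2.0):
    blurred = render.filter(ImageFilter.GaussianBlur(radius=radius))
    blurred.save(f"render_blur_{radius}.png")
```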
That's exactly why it's repeated ad nauseam that you shouldn't use a denoiser (especially Daz's in-built one) if you can avoid it. Seeing as they have a 3090, there's zero actual point in rendering a fully denoised image natively. They can always run an external denoiser later if they need to.
There's a reason many experienced devs will tell you that the only time you should be using the in-built denoiser is for animations.
MissFortune's post is very interesting, but may lead to confusion. Denoising is indeed a low-pass filter (like a gaussian blur), but it is only applied to outlier pixels. An outlier pixel is one that is significantly different from all its surrounding pixels. The first denoisers used a more or less large neighborhood and an empirical threshold to decide whether a pixel is an outlier or not. Recent ones use an AI trained on a large region and are extremely accurate. For instance, they would not consider skin pores as outliers.
But they can still make errors and misclassify a pixel. A toy version of the old threshold approach is sketched below.
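This is not any specific denoiser's actual algorithm, just the neighborhood-plus-threshold idea made concrete; the 3x3 window and the threshold value are the empirical parts:

```python
# Toy outlier denoiser in the spirit of the early, pre-AI filters:
# a pixel is smoothed only if it deviates strongly from its neighborhood.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def denoise_outliers(img, threshold=0.1, sigma=1.0):
    """img: float grayscale array in [0, 1]."""
    neighborhood = median_filter(img, size=3)          # local reference value
    outliers = np.abs(img - neighborhood) > threshold  # empirical threshold
    blurred = gaussian_filter(img, sigma=sigma)        # low-pass replacement
    # Only outlier pixels are replaced; everything else is left untouched.
    return np.where(outliers, blurred, img)
```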
To minimize these errors, here is how I proceed.
1/ I render the image at twice the desired resolution, but with a limited number of iterations (around 300-400 depending on the lighting). I save the image as a png, to avoid lossy compression artifacts (which are basically low-pass filters).
2/ I denoise in post-processing using an external denoiser. Since the image is twice the final size, and only single-pixel outliers are suppressed by the denoiser, even a wrongly denoised pixel has a very weak impact on the final image. Again, I save the image as png.
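A quick numpy check of why the 2x margin helps: when you halve the resolution, one bad pixel is averaged into a 2x2 block, so its error shrinks to about a quarter (a simple box filter is assumed here; smoother resampling filters spread it even further):

```python
# One bad pixel in a 2x render gets averaged into a 2x2 block on downsizing,
# so its error on the final image shrinks to ~1/4 (box filter shown here).
import numpy as np

big = np.full((4, 4), 0.5)      # stand-in for the double-size render
big[1, 1] = 1.0                 # one wrongly denoised pixel (+0.5 error)

# Downsize by 2 with simple 2x2 block averaging.
small = big.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(small[0, 0])              # 0.625 -> residual error is only +0.125
```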
There are two main (free) denoisers: Intel's and NVIDIA's. I prefer Intel's, because I mostly postprocess on my laptop (without a GPU), but both denoisers are very good (and better than Daz's integrated denoiser).
3/ I do the downsizing and generate the final webp image.
All of this is done with command-line tools (the denoiser and ImageMagick), and to limit manual operations, I have a script that applies steps 2/ and 3/ to all images in a directory if the denoised images are non-existent or older than the rendered ones. A rough equivalent is sketched below.
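To give an idea, here is a rough Python equivalent of that script. The denoiser invocation is a placeholder, since Intel's and NVIDIA's tools each have their own command-line syntax; the ImageMagick resize/quality options are standard:

```python
# Rough sketch of the batch script: denoise then downsize every render in a
# directory, skipping images whose outputs are already up to date.
# "my_denoiser" is a placeholder -- substitute your denoiser's actual CLI.
import subprocess
from pathlib import Path

RENDER_DIR = Path("renders")          # hypothetical layout
OUT_DIR = Path("final")
OUT_DIR.mkdir(exist_ok=True)

def outdated(src: Path, dst: Path) -> bool:
    """True if dst is missing or older than src."""
    return not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime

for png in RENDER_DIR.glob("*.png"):
    denoised = RENDER_DIR / f"{png.stem}_dn.png"
    webp = OUT_DIR / f"{png.stem}.webp"

    if outdated(png, denoised):
        # Placeholder denoiser call -- replace with your tool's syntax.
        subprocess.run(["my_denoiser", str(png), str(denoised)], check=True)

    if outdated(denoised, webp):
        # ImageMagick: halve the resolution and write the final webp.
        subprocess.run(
            ["magick", str(denoised), "-resize", "50%",
             "-quality", "90", str(webp)],
            check=True,
        )
```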
Besides its value for denoising, having a double-sized image is very useful. For instance, if you are not completely happy with the framing and need to do some cropping, or want to make a close-up of an image. And of course, if I need to do some post-processing (retoning, darkening, blurring, blending/adding images, etc.), I also do that before downsizing.
For animations, I proceed similarly, but I generally reduce the number of iterations (say 250), and I downsize more aggressively (generally x3). If I want to add video effects (zooming, panning, etc.), it is obviously better to do that on the full-scale image. Ditto if you want to retime your frames (for instance, for slow motion).
The main drawback of this method is that you may have to keep several large png images on your disk.