- Oct 17, 2018
- 172
- 874
Well, sad story: I can only try #1, and I will do so. For the rest, I already do the overscaling: rendering at 3200 and downscaling to about 1200 (since that's the resolution for my project anyway). In the end it's still my hardware (as I mentioned a few days ago :/). Nvidia, haha, that would be nice. I'm running a toaster with onboard graphics. xD Well, that doesn't prevent me from trying and learning, or even scripting. The time will come when I have my hardware in order. But yes, as you said, #1 seems very important. I thought it only renders what is shown in the picture frame while rendering, but it seems it takes a lot longer with stuff behind the scenes that isn't even in the render. (I have a whole apartment there.) *sighs* Alright, I'll have to cut that down then. Does it do the trick if I hide things (click the eye icon to hide props etc.), or do I have to remove everything that's not needed? Because removing everything and adding it again and again when needed would be a pain in the butt.

#1, simplify scenes. Don't use more stuff than you need. It's why I composite so many shots. The backgrounds are largely irrelevant. Reflectivity, translucency? These take time for Iray to calculate, so throw them out if they're not important.
#2, use oversampling and anti-aliasing. Render above your target resolution, then scale down. Rendering is about pushing synthetic light through an aperture at a sufficient sample size to resolve an image. The biggest part of the sample falls within one standard deviation of the mean, and that's the cheapest part for your engine. The tail ends (90-100% convergence, for instance)? Those take a lot more time and are far less efficient on the engine.
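A small sketch of why those tail ends are so expensive. This isn't Iray's actual sampler, just the underlying statistics: treat each pixel as the mean of N noisy light samples and watch how slowly the residual noise shrinks as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_noise(n_samples, trials=2000):
    # Each trial: estimate one pixel's brightness as the mean of
    # n_samples noisy light samples (uniform noise stands in for
    # whatever the path tracer actually returns).
    samples = rng.uniform(0.0, 1.0, size=(trials, n_samples))
    estimates = samples.mean(axis=1)
    return estimates.std()  # spread of the estimates = residual grain

e100 = pixel_noise(100)
e400 = pixel_noise(400)
# Quadrupling the sample count only halves the noise (the 1/sqrt(N)
# law), so squeezing out the last few percent of convergence costs
# disproportionately many samples.
print(e100 / e400)
```

That ratio comes out close to 2: four times the work for half the noise, which is exactly why chasing 100% convergence is the slow part.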
By oversampling and downscaling, you're doing the same work, but more efficiently. The Nvidia guys figured that out (I don't have all the maths). You're working more of the one-standard-deviation range for the image resolution that you actually want, and the downscaling creates extra convergence by taking the data around each pixel and mashing it together.
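The "mashing together" is measurable. A minimal sketch, assuming a flat grey render with per-pixel grain: downscaling by averaging each 2x2 block is four samples per output pixel, which cuts the noise by about sqrt(4) = 2x.

```python
import numpy as np

rng = np.random.default_rng(1)

# A flat grey "render" at 2x the target resolution, with per-pixel grain.
high_res = 0.5 + rng.normal(0.0, 0.1, size=(800, 800))

# Downscale by averaging each 2x2 block of pixels (a simple box filter;
# real resamplers like Lanczos weight the neighborhood more cleverly).
low_res = high_res.reshape(400, 2, 400, 2).mean(axis=(1, 3))

# Four averaged samples per pixel -> roughly half the noise.
print(high_res.std() / low_res.std())
```

Same idea as rendering at 3200 and delivering at 1200, just with the arithmetic laid bare.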
(Before we had software denoising, this is what we did. That was denoising before we had denoising.)
#3, use a software denoiser, like Intel's FOSS denoiser (Open Image Denoise). It uses machine learning to reinterpret noisy pixels (grain) and reconstruct a clean image from them.
I do all three of those to save myself some render time. Simple scenes, oversampled, and denoised as needed.