So is the number of lights what impacts rendering speed in iray the most? Could you explain, or show with an example, how one would go about lighting a scene so that it takes the least amount of time to render?
It's not the number of lights. On the contrary, done wrong, adding lights can actually increase render time in some cases.
Think about what iRay does to figure out what color a pixel is. (This is a dramatic over-simplification, but you'll get the basic idea...) It shoots a ray out from the "camera" through that pixel until it hits an object in the scene. So now it has to figure out how much light is coming from that object, at that spot, back to the camera. One thing it does is run a ray from that point to each of the lights in the scene to see if there's a direct path from the light to that point. If so, it can calculate the contribution of that light to that spot. (Light -> surface -> camera)
But then there's the question of reflected light. So from that point, it shoots random rays out until THEY hit a surface, then it figures out how much light is bouncing from that surface, to the original point, and then to the camera. (Light -> surface -> surface -> camera). Then it does another bounce. And another bounce. All that with random vectors, and all that to calculate one pixel. Obviously, you begin to see how this gets complex in a hurry. GPUs thrive on doing this kind of stuff massively in parallel, however.
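That camera-ray-plus-bounces loop can be sketched in a few lines of Python. Everything here is made up for illustration (one light, a single fixed albedo, a coin-flip standing in for the shadow-ray test) - it shows the shape of the per-pixel work, not iRay's actual algorithm:

```python
import random

# A made-up scene: one light with a fixed intensity, surfaces that all
# reflect the same fraction of light (albedo), and a coin-flip standing
# in for "is there a clear path from this point to the light?"
LIGHT_INTENSITY = 10.0
ALBEDO = 0.5
LIGHT_VISIBLE = 0.8

def shade(depth, max_depth=8):
    """Estimate the light leaving a surface point back toward the viewer."""
    # Direct term: a shadow ray from this point to the light.
    direct = LIGHT_INTENSITY if random.random() < LIGHT_VISIBLE else 0.0
    # Indirect term: one random bounce ray, evaluated recursively.
    indirect = ALBEDO * shade(depth + 1, max_depth) if depth < max_depth else 0.0
    return direct + indirect

def render_pixel(samples=200):
    """One camera ray per sample; average the random paths for this pixel."""
    return sum(shade(depth=0) for _ in range(samples)) / samples

random.seed(1)
print(round(render_pixel(), 2))  # hovers near 16 for these made-up numbers
```

Each extra light would add another shadow-ray test inside shade(), and each extra bounce another level of recursion - which is the whole story of the next few paragraphs.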
So this is why "total number of lights" isn't necessarily the be-all - every time iRay evaluates some spot on a surface somewhere, it's going to try to see if there's a path from each light to there. So every light you add adds a little more work for iRay. The tradeoff is that if those lights make pixels converge quicker, then that reduces iRay's work. But, for example, if you add a spotlight that's inside a box, and thus can't actually illuminate anything in the scene, iRay still keeps trying to reach it bounce after bounce after bounce. Thus, removing things from the scene that you know won't affect the parts of the scene you can see will sometimes help noticeably - iRay understands when rays go off into infinity, and will stop bouncing at that point. So that wall that's behind the camera may be causing iRay a lot of extra work for no effect. (But iRay doesn't know there isn't a mirror there that will reflect back into the scene, so it has to keep chugging away.)
Now, how does iRay know when to stop fiddling with that pixel it's working on? There are a couple of ways.
- There is a render setting that controls the maximum number of bounces that iRay will consider. (Optimization > Max Path Length.) But it's normally set to -1, which means "infinite".
- At the same time, each time there's a bounce, only a fraction of the total light on the bounce surface is reflected back in the direction iRay cares about. Thus, with each bounce the total contribution is smaller and smaller. As iRay progresses, it will finally decide that there's no point in going any further, because any additional bounces will add such a small delta to that pixel that it won't change visually. At this point, iRay considers the pixel to have "converged" and stops working on it.
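You can see why that second rule works with some back-of-the-envelope Python. The albedo and threshold numbers below are assumptions for illustration (1/256 is roughly one step of an 8-bit display), not anything iRay publishes:

```python
# Each bounce passes along only a fraction (the albedo) of the light it
# receives, so the biggest possible contribution of bounce N shrinks
# geometrically. The numbers are assumptions for illustration; 1/256 is
# roughly one step of an 8-bit display, i.e. visually negligible.
THRESHOLD = 1.0 / 256

def bounces_worth_following(albedo, threshold=THRESHOLD):
    """Count bounces until any further bounce is visually negligible."""
    throughput = 1.0  # fraction of the original light still in play
    depth = 0
    while throughput >= threshold:
        throughput *= albedo
        depth += 1
    return depth

print(bounces_worth_following(0.5))  # -> 9: matte-ish surfaces die out fast
print(bounces_worth_following(0.9))  # -> 53: "shiny" scenes need far more
```

This is also a preview of why mirrors and chrome are expensive: the closer the reflected fraction gets to 1, the longer the chain of bounces that still matters.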
Now, consider two surfaces in the scene - one which has a spotlight directly illuminating it, and one that isn't directly illuminated, and so is in some type of shadow. In the first case, it is highly likely that the direct light (spotlight, HDRI, whatever) is going to dominate the final result - the effect of subsequent bounces is going to be significantly smaller than the direct light. So such pixels are going to converge faster, because the direct light dominates. Conversely, a point that is only indirectly lit is (usually) going to be dimmer, and so the bounces are going to be a bigger percentage of the final value, and so iRay is going to need more bounces before it decides to fold up shop and move on. So, those pixels are going to be slower to converge.
So it isn't the total number of lights, but what percent of pixels are directly lit that's going to have the biggest impact.
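A toy Monte Carlo comparison makes the point. All the numbers are invented: one "pixel" gets a strong, constant direct term plus a small noisy bounce term, the other gets only the noisy bounces, and we compare noise relative to brightness:

```python
import random
import statistics

def directly_lit_sample():
    # Strong, steady direct light plus a small, noisy bounce term.
    return 8.0 + random.uniform(0.0, 1.0)

def shadowed_sample():
    # No direct light: the entire value comes from noisy bounces.
    return random.uniform(0.0, 1.0)

def relative_noise(sampler, n=500):
    """Noise relative to brightness after n samples - a stand-in for
    how far from "converged" the pixel still is."""
    values = [sampler() for _ in range(n)]
    return statistics.stdev(values) / statistics.mean(values)

random.seed(7)
print(round(relative_noise(directly_lit_sample), 3))  # small: converges fast
print(round(relative_noise(shadowed_sample), 3))      # much larger: slow
```

Same noise in both, but in the directly lit pixel it's buried under the direct term, which is exactly why those pixels get declared "converged" sooner.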
Also, there are a number of settings in the Render Settings "Progressive Rendering" section that affect this.
- Rendering Quality: This defaults to 1.00. (And I usually leave it there.) This is some kind of magic number for iRay that affects how the algorithm decides whether or not a pixel has converged. If you increase its value, iRay will do more work on each pixel before declaring victory.
- Rendering Converged Ratio: This defaults to 95%, although I frequently dial it up to 98%. This represents the total percentage of all the pixels in the image that have to have converged before iRay will decide that it's finished on the picture. Lowering this value will result in some additional "grain" in the image, but will cause iRay to complete faster.
- Max Time (secs): This is the total time iRay will work on the render. When this limit is reached, iRay stops, even if it hasn't reached the convergence ratio.
- Max Samples: This is the total number of passes through the entire image iRay will perform. Basically, it doesn't work on one pixel till it's done and then move to the next pixel. Instead, it takes passes through the entire image, doing part of the calculation on any pixel that hasn't converged. Then it does it again. You can think of this as "each time through, it calculates using different random bounces." Again, when iRay reaches this number of samples, it stops, even if it hasn't reached the convergence ratio.
So, the render will complete when the first of "Rendering Converged Ratio," "Max Time," and "Max Samples" is reached.
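In pseudocode terms, the stopping rule is just "first limit wins." A tiny sketch, with placeholder values standing in for whatever you've set in Render Settings:

```python
# Placeholder values standing in for your actual Render Settings.
CONVERGED_RATIO = 0.95   # Rendering Converged Ratio
MAX_TIME_SECS = 7200.0   # Max Time (secs)
MAX_SAMPLES = 5000       # Max Samples

def render_is_done(converged_fraction, elapsed_secs, samples_done):
    """The render completes when the FIRST of the three limits is hit."""
    return (converged_fraction >= CONVERGED_RATIO
            or elapsed_secs >= MAX_TIME_SECS
            or samples_done >= MAX_SAMPLES)

print(render_is_done(0.96, 600.0, 1200))   # -> True (converged ratio reached)
print(render_is_done(0.80, 600.0, 1200))   # -> False (no limit reached yet)
print(render_is_done(0.80, 600.0, 5000))   # -> True (ran out of samples)
```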
Thus, these are the kinds of things that give iRay problems:
- Deep shadows. There's very little light there, so each time iRay manages to find light via some deeply bounced path, that light represents a non-trivial fraction of the total it has found so far, and these pixels converge slowly. One way to get around this is to add some light in there (possibly brightening the entire scene) and then use the Tone Mapping settings to darken the image. You've done photography - you'll recognize the settings: ISO Speed, f-Stop and shutter speed. (Except that changing f-Stop doesn't change depth of field the way it does on a real camera - that's a completely different setting.)
- Enclosed rooms. Real rooms have relatively few light sources, and there's a lot of light bouncing going on. Our eyes are REALLY good at adjusting to this, so we don't notice how much dimmer it is than outside. But iRay does. So, again, these scenes tend to converge slowly unless you cheat and get light in there somehow. (Like by removing the ceiling or a wall.)
- Reflective surfaces. These violate the rule that "only a small fraction gets reflected." Put a mirror in the scene, or a chrome bumper, and you've given iRay a lot more work.
But both of the first two really come back to the same thing - direct vs indirect lighting.
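Incidentally, the brighten-the-scene-then-darken-with-Tone-Mapping trick works because tone mapping is applied after the light transport is done. The standard photographic exposure relation shows why the settings cancel out (iRay's exact constants differ; this only shows the proportionality):

```python
def tone_map_scale(iso, f_number, shutter_secs):
    """Relative image brightness for given camera settings: proportional
    to ISO and exposure time, inverse to the f-number squared."""
    return iso * shutter_secs / (f_number ** 2)

# Double the light in the scene (the 2x below), then stop down one full
# stop (f/8 -> f/11) to bring the image back to the same brightness:
base = tone_map_scale(iso=100, f_number=8.0, shutter_secs=1 / 125)
darker = tone_map_scale(iso=100, f_number=11.0, shutter_secs=1 / 125)
print(round(2 * darker / base, 2))  # -> 1.06, i.e. roughly back to 1
```

It isn't exactly 1.00 because the nominal f/11 is slightly less than a true full stop from f/8, but the image brightness is back where it started while the render converged with twice as much light in the scene.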
Now, in some cases, ghost lights are your friend. There are specific products for this, but in essence, a ghost light is a primitive (usually a plane, although it can be shaped differently) that is set to emit light (via the "Emission" properties on its surface) but has had its Cutout Opacity dialed back to something like 0.000001, making it essentially transparent. The primitive doesn't appear in the render (because it's too transparent), but the light from it does.

This is not the same as turning off "Render Emitter" on a spotlight. The latter means that the spotlight itself won't show up in the render, but if you have a mirror in the scene, you WILL see the spotlight in it. So turning "Render Emitter" off essentially hides a spotlight if you're behind the light, but it doesn't really hide the front of the light, if you know what I mean. The flip side is that spotlights can be directional - they have a "cone" - while emissive surfaces emit across a full 180-degree arc.

So the downside of using large ghost lights is that all your shadows go away, and, as a photographer, I'm sure you know that shadows (even mild ones) are what give objects in a scene shape and make the scene look realistic, since shadows abound in the real world, even if we don't tend to notice them.
But a small amount of ghost lighting can do wonders for how quickly a dark corner in a room would converge, for example, even if you then got rid of a lot of the light with the Tone Mapping.
Finally, the 4.11 beta that is coming out has intelligent noise reduction built in. I haven't used it myself, but I'm told this GREATLY reduces how long an image takes to render, since it can (effectively) post-process out the "fireflies" that result from poor pixel convergence by making use of neighboring pixels. I don't really know how it works, and haven't had a chance to play with it yet, but I've seen forum posts saying that people have been able to dial the total number of samples way back when they use it.
So that's a start on answering your question. But the beauty of iRay, for people who understand photography, is that it tries to work exactly like the real world, so setting up well-lit scenes in iRay is more or less exactly like setting up well-lit scenes for a photo. (I'm talking composition here.) The trick then is just to overcome iRay's quirks in the matter.
Experiment, experiment, experiment! LOL