As Hopes says, Daz really doesn't know what's "visible" and what isn't. Case in point - the camera might be facing the house, so the big tree in the front yard is "behind you" and not "visible." Except that the tree is between you and the sun, so it's casting a shadow across part of the house. The effects of items in the scene can be visible even when the items themselves are technically outside the camera's field of view. That's why Daz makes no attempt to prune off-camera items from the render.
Now - if you turn an item "off" in the scene (i.e., click the little eyeball icon), then Daz won't include it in the rendering process. Neither the mesh nor any of its textures gets loaded (assuming the textures aren't re-used anywhere else). So if you're looking at the closed front door, you might be able to "turn off," and thus exclude from the rendering process, everything on the inside of the house.
Note that the memory consumed by the meshes, and their effect on render processing, is usually small. Texture size tends to dominate the equation: the total of all the meshes is probably in the megabyte range, while the totality of textures in the scene will easily run to gigabytes.
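To put rough numbers on that, here's a quick back-of-the-envelope in Python. The per-vertex and per-pixel byte counts are my assumptions for illustration (a vertex with position, normal, and UVs; uncompressed RGB pixels), not anything Daz documents:

```python
# Rough memory comparison: geometry vs. one texture map.
# Byte counts below are illustrative assumptions, not Daz internals.

def mesh_mb(vertices, bytes_per_vertex=32):
    """Position + normal + UV + index data, very roughly."""
    return vertices * bytes_per_vertex / 1024**2

def texture_mb(width, height, bytes_per_pixel=3):
    """Uncompressed RGB once the JPG is decoded into memory."""
    return width * height * bytes_per_pixel / 1024**2

# A fairly dense 200,000-vertex figure vs. a SINGLE 4K diffuse map:
print(round(mesh_mb(200_000), 1))    # ~6.1 MB of geometry
print(round(texture_mb(4096, 4096)))  # 48 MB for one map
```

One 4K map already outweighs the whole mesh by nearly an order of magnitude, and a character carries many such maps.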
The Iray textures on most recent Daz characters and a lot of props are huge - usually 4K by 4K. That's about 16 million pixels, or roughly 48 megabytes once the JPG is expanded into memory. And there are usually a LOT of textures - diffuse maps, normal maps, specular maps, etc., etc., for each and every part of the character. The eyes, for example - 48 megabytes for an eyeball that's going to be how big in the final render? The PAs build the textures that big because, first, somebody might want to do an ultimate close-up of someone's eye rendered at 4K, and, second, because some people believe "texture size is an indication of quality." It doesn't really cost them much to make the textures so big, but it kills us when we try to put 3-4 characters and a house into the scene.
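Multiply that 48 MB out across a character's surfaces and the gigabytes arrive fast. The surface names and map counts below are made up for the sake of the arithmetic; real figures vary by product:

```python
# How one character's texture maps add up.
# Surface list and maps-per-surface are hypothetical examples.

def decoded_mb(width, height, bytes_per_pixel=3):
    """Uncompressed RGB size of one decoded map."""
    return width * height * bytes_per_pixel / 1024**2

# Assumed: each surface has diffuse + normal + specular (eyes a bit fewer).
maps_per_surface = {"face": 3, "torso": 3, "arms": 3, "legs": 3, "eyes": 2}
total_maps = sum(maps_per_surface.values())
total_gb = total_maps * decoded_mb(4096, 4096) / 1024

print(total_maps, round(total_gb, 2))  # 14 maps, ~0.66 GB for ONE character
```

At that rate, 3-4 characters plus a textured house blows well past 2 GB of decoded textures before the renderer does anything else.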
This is why Scene Optimizer helps so much - it lets you sanely decide which textures should stay large and which can be trimmed WAY down, because you know which items are close to the camera, which are far away, which are in focus, and which will be blurred by depth-of-field anyway and so don't need to be high-res.
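That judgment call can be sketched as a toy rule of thumb. This is NOT how Scene Optimizer actually works internally - the thresholds and the function are invented purely to illustrate the distance/focus reasoning above:

```python
# Toy version of the per-item decision Scene Optimizer automates:
# pick a target texture size from camera distance and depth-of-field.
# All thresholds are made-up illustrative values.

def target_size(distance_m, in_focus, original=4096):
    if not in_focus:
        return 512        # DoF will blur it anyway; res is wasted
    if distance_m < 2:
        return original   # hero item near the camera: keep full res
    if distance_m < 10:
        return 2048       # mid-ground: half resolution is plenty
    return 1024           # distant background props

print(target_size(1.0, True))    # 4096 - close-up character
print(target_size(5.0, True))    # 2048 - mid-ground prop
print(target_size(5.0, False))   # 512  - blurred by DoF
```

Even crude rules like these can cut the scene's texture footprint by a factor of four or more without any visible loss in the final render.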