Good evening. Speaking of hair, I would like to know how to do it, because I can't manage it. Is there a way to do it? View attachment 177655

Yes, you need to select the "golden palace" gens that are parented to G3F, then apply the hair figure/L.I.E. map.
Okay, thanks for the input. I think it is only fair to show that I listened to your critiques:
Changed her pose, since her foot was in his.
Changed his view AND his expression.
Added a few extra lights to add a darker shadow.
Added a blue light to deepen the shadow on them.
Took out that random leaf that caught someone's attention.
And added depth of field to bring attention to the characters.
View attachment 177403

Much better.
Please, please, please leave feedback. D:

Only because you said please three times. My firewall hates Zippyshare, so I had to trick it into allowing me to grab that file.

Sorry about that, but Zippy is the only one I have found that will give me free download figures and stats.
This is a massive improvement. DOF makes a pretty big difference, his expression is miles ahead of where it previously was, and the overall posing is also greatly improved, particularly her hand in his hair and his hand on her back. The barely noticeable nose gap made me smile too. Nice job.
Something's wrong.
One hour for a background-free render in PNG is a waste of time.
I have 8 GB of RAM:
total physical memory = 7.39 GB
physical memory available = 1.88 GB
total virtual memory = 17.4 GB
virtual memory available = 8.51 GB
Is one hour for a render normal?
View attachment 177301

I got curious and did a test render. I set the render quality to 5x and the convergence to 99%. It took an hour and a half before I gave up, and it had only reached 55%. Without the body hair it takes ~12 minutes for me, so it's the body hair. Interestingly enough, it wasn't the RAM or the video card; they never went above 5.5 GB of usage or more than 25% utilization. The whole render was CPU-heavy.

Thanks, I was wondering how much of an effect it would have.
A quality render will always take time, which is why I render overnight, while I'm at work, or when I have other stuff to do that doesn't involve using my computer. It's a great excuse to step outside or do some cleaning. And honestly, anything over 15 minutes has already exceeded my patience for staring at the screen doing nothing anyway, so it may as well be 8 hours.
This one took two hours, mostly because I forgot to increase the max render time. It needed another 100 or so iterations to clean up the noise.
As far as how many iterations you need and why you have noise... here's the over-simplified explanation:
In reality, any single point is illuminated by light coming in from an infinite number of angles. That is 100% convergence.
In a CGI render we don't have an infinite amount of time to simulate light from an infinite number of angles, so we guess. When we guess incorrectly, that pixel is too bright, too dark, or the wrong color... a.k.a. noise. The more we guess (iterations), the more likely we are to guess correctly (less noise). The more light sources we have reaching any single point, the more likely we are to guess correctly (less noise). Brighter areas result in more accurate guessing, since the correct value is close to the source value for that pixel (less noise).
So, to get the best guess in the least amount of time, you need bright light from multiple sources. Incidentally, that is exactly what HDRI scenes give us: light from many angles, and usually pretty bright.
The above two-hour render of a complete scene with two HD actors, about 5,000,000 polygons, and roughly 6 GB of materials including multiple reflective surfaces isn't a noisy mess, because the HDRI background provides light from all around the scene and a "ghost" light overhead brightens up the foreground. In fact, most of the noise you'll see in it comes from the image compression when uploading, not from the render itself.
Hope that helps.

You could have just linked the first 30 minutes of the Nvidia RTX launch event video.
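If you want to poke at that "guessing" idea outside of DS, here is a rough Python-only sketch (not Iray's actual code, and the light coverage and brightness numbers are made up): one pixel, a light covering some fraction of the sky, and random guesses averaged together. It shows the two effects described above: noise roughly halves every time you quadruple the iterations, and a big HDRI-like dome is far cleaner at the same iteration count than one small light.

```python
# A rough sketch of the "guessing" described above -- plain Python, nothing
# Iray-specific, and the light coverage/brightness values are made-up numbers.
# One pixel is lit by a light covering some fraction of the sky. Each iteration
# picks a random direction that either hits the light or misses, and the pixel
# value is the average of all guesses. The spread of that average across
# repeated runs stands in for the noise you see in a render.
import random
import statistics

def render_pixel(iterations, light_coverage, brightness, rng):
    """Average of `iterations` random guesses at the light arriving at one point."""
    hits = sum(1 for _ in range(iterations) if rng.random() < light_coverage)
    return hits * brightness / iterations

def relative_noise(iterations, light_coverage, brightness=1.0, trials=300):
    """Noise as a fraction of the correct pixel value (what you actually see)."""
    rng = random.Random(42)
    true_value = light_coverage * brightness
    estimates = [render_pixel(iterations, light_coverage, brightness, rng)
                 for _ in range(trials)]
    return statistics.pstdev(estimates) / true_value

if __name__ == "__main__":
    print("one small light (covers 5% of the sky):")
    for n in (16, 64, 256, 1024):
        print(f"  {n:4d} iterations -> noise {relative_noise(n, 0.05):.3f}")
    print("HDRI-like dome (covers 90% of the sky):")
    for n in (16, 64, 256, 1024):
        print(f"  {n:4d} iterations -> noise {relative_noise(n, 0.90):.3f}")
```

Quadrupling the iterations only halves the noise, which is why the last bit of cleanup takes so long, and why bright light arriving from everywhere (an HDRI dome plus a ghost light) gets you a clean image so much faster.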
The eyes should point better at the viewer (any tips?).

Create a new camera, if you haven't already done so, and use that instead of the perspective view to frame your render.

This is the number one thing you can do to get a better render.
Thanks.

Nice looking render. Just one thing: why is something lighting up the "V" on top of their heads? It's like a bullseye or neon sign drawing your attention right there.

Personal preference.
Select the eyes of the figure one at a time in the Scene pane, go to the Parameters tab, click on "None..." where you see "Point At", and select the camera, as in the screenshot below.
View attachment 177879
If you move the model or the camera after doing this, just save your scene, close DS, reopen the program and reload the scene.
I'm not sure that closing and reopening DS is essential, but I do it anyway.