3D Software Help and Assistance. Ask Away.

5.00 star(s) 1 Vote

Lewd4D

Member
Jun 28, 2019
184
3,965
EDIT: OK, problem solved, and my god am I blind. Thin Walled was activated on the face texture, and it seems like that setting doesn't get copied from other surfaces. Thanks everyone for the suggestions. ;)
______________________________________________________________________________________________



I don't know the exact cause, but there is a shell of color that isn't changing with the shape of the face contours. You can see it sits correctly very close to the eye, but it's sitting outside the surface texture on the rest of the face, including over the edge of the hair. Do you have a specific character's HD morph applied? What are you using to change the shape of the face?

(cute by the way, I like brunettes)
I made a base G8 figure and copied all the surfaces. Well, the picture says it all... :rolleyes: wut?
test.jpg


Going to sound a little weird, but check that you don't have a light source inside the character's head. If you have a lighting plane that clips through the head geometry, you will get this effect. Nice character, btw.
Good idea but no the picture is only lit with HDRI.
 
Last edited:

tgellen

Newbie
Jan 25, 2020
27
19
I'm a total nOOb when it comes to this stuff...

I think I applied an undressing morph to clothing that did not have that morph and ever since, anytime I pose a figure or try to apply body shapes they come out distorted.

My question is how can I tell what I did and is there a way to undo it?

This is happening to all of my female figures; it doesn't happen with the male ones.

And it only shows up when the legs are bent past a certain point.

I also included one of my good renders as a baseline of my skills (pretty new)
 

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,468
4,446
I think I applied an undressing morph to clothing that did not have that morph and ever since, anytime I pose a figure or try to apply body shapes they come out distorted.

My question is how can I tell what I did and is there a way to undo it?

This is happening to all of my female figures; it doesn't happen with the male ones.

And it only shows up when the legs are bent past a certain point.
Daz is a very impressive product given the tiny size of their dev team, but bug-free it is not. It appears that in some cases a hidden morph can get "stuck" even for new base figures.

Sometimes you can find it by turning on "display hidden" in the properties pane, but the problem is not always exposed. Other people who encounter the issue track it down by temporarily moving "morph" add-on content out of the relevant folders, stopping and restarting Daz, and testing repeatedly until they find the one causing the issue. Using a "bisection search" strategy can make this go faster if you have a lot of content.
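The "bisection search" idea is simple enough to sketch in code. This is purely an illustration with made-up folder names, not a Daz tool: the `is_broken` callback stands in for the manual cycle of installing only a subset of your content, restarting Daz, loading a fresh base figure, and checking for the distortion.

```python
def find_bad_item(items, is_broken):
    """Narrow down the single item that makes the test fail.

    `is_broken(subset)` stands in for: keep only `subset` installed,
    restart Daz, load a fresh base figure, and check the distortion.
    """
    candidates = list(items)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if is_broken(half):          # the bad item is in the first half
            candidates = half
        else:                        # otherwise it's in the second half
            candidates = candidates[len(half):]
    return candidates[0]

# Example with hypothetical content folders: pretend "MorphPackC" is the culprit.
folders = ["MorphPackA", "MorphPackB", "MorphPackC", "MorphPackD",
           "MorphPackE", "MorphPackF", "MorphPackG", "MorphPackH"]
print(find_bad_item(folders, lambda subset: "MorphPackC" in subset))
# -> MorphPackC
```

With 8 products this takes 3 restart-and-test cycles instead of up to 8; with 64 products, 6 instead of 64.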

Helpful threads from the Daz forums:
(I found these by searching for "daz3d all female characters now have a strange body distortion". There are lots of related threads, as it is a reasonably common issue.)
 
  • Like
Reactions: tgellen

TheFullPickle

Newbie
Jun 20, 2019
17
68
Does anybody know how to make a circumcised penis with futalicious? Trying to hide the foreskin by putting it all the way up just makes it look weird.
 

tgellen

Newbie
Jan 25, 2020
27
19
It appears that in some cases a hidden morph can get "stuck" even for new base figures.
I solved it by doing two things: I manually deleted all the Big Girl files, and I uninstalled (through the IM) all my G8-specific items and every dForce item. I reinstalled the G8 starter and essentials and everything looks correct. I'll slowly add the rest back and hopefully everything will stay normal. I'm fairly certain it was a dForce morph I applied incorrectly that started it all, in my case. Thanks for all the suggestions and links.
 
  • Like
Reactions: osanaiko

ddeadbeat

Bewbie
Game Developer
Mar 30, 2018
143
1,646
A question partly about Daz, partly about "postprocessing". Up until now I thought that 95% convergence was absolutely enough for a render to be clean/clear/sharp. But I guess I was wrong - I started to see fireflies and noise in dark areas of the image even at 95% convergence. That means I fucked up somewhere with the lighting in the scene, doesn't it? Furthermore, after I did some simple processing in PS, the image became even more noisy.
What do you think is the lesser evil (in the final game): a slightly noisier image with slightly more detail, or vice versa?
What's your approach to editing renders, guys?
Also, I'm using the Nvidia denoiser: when should I use it, before any other editing or after?

And here are 4 images:
1. Plain image w/o denoiser
2. Plain image w/ denoiser
3. Edited image w/o denoiser
4. Edited image w/ denoiser
Which image do you think has the better look/quality?

113_doc_leave1.jpg n_113_doc_leave1.jpg p113_doc_leave1.jpg pn_113_doc_leave1.jpg
 
  • Like
Reactions: osanaiko

Xavster

Well-Known Member
Game Developer
Mar 27, 2018
1,249
7,622
A question partly about Daz, partly about "postprocessing". Up until now I thought that 95% convergence was absolutely enough for a render to be clean/clear/sharp. But I guess I was wrong - I started to see fireflies and noise in dark areas of the image even at 95% convergence. That means I fucked up somewhere with the lighting in the scene, doesn't it? Furthermore, after I did some simple processing in PS, the image became even more noisy.
What do you think is the lesser evil (in the final game): a slightly noisier image with slightly more detail, or vice versa?
What's your approach to editing renders, guys?
Also, I'm using the Nvidia denoiser: when should I use it, before any other editing or after?

And here are 4 images:
1. Plain image w/o denoiser
2. Plain image w/ denoiser
3. Edited image w/o denoiser
4. Edited image w/ denoiser
Which image do you think has the better look/quality?

View attachment 648978 View attachment 648979 View attachment 648980 View attachment 648981
The first thing you need to understand is what convergence means. In a few words, it means that the last iteration did next to nothing to change the current image. To be honest, it is a horrific way to measure how well a render has gone.

If you want to produce good images en masse, just ignore the Daz evaluations. Learn what lighting is required to get a scene to converge and live by those rules. Look at the results of your test renders and tweak to optimise your final results. In my VN, I have lately been rendering images which include both geometric and lighting complexity. That's a result of the character design and not a function of the final result. I have been rendering with highly complex emissive sources and, I have to admit, I prefer very simple HDRI scene lighting. Faster to converge and way more natural.

To leave you with a key tip: understand the contribution of every light source. If you don't understand its contribution to a scene, remove it. There is nothing wrong with a single spotlight / emissive source / HDRI.

EDIT: If you are rendering at final resolution, you need to get away with fewer than 1000 iterations. I have recently acquired more hardware which allows me to get away with sub-optimal scene creation. However, for the next revision I want to render animation frames in fewer than 100 iterations (2 minutes), so rendering convergence is paramount.
 
Last edited:

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,468
4,446
A question partly about Daz, partly about "postprocessing". Up until now I thought that 95% convergence was absolutely enough for a render to be clean/clear/sharp. But I guess I was wrong - I started to see fireflies and noise in dark areas of the image even at 95% convergence. That means I fucked up somewhere with the lighting in the scene, doesn't it? Furthermore, after I did some simple processing in PS, the image became even more noisy.
What do you think is the lesser evil (in the final game): a slightly noisier image with slightly more detail, or vice versa?
What's your approach to editing renders, guys?
Also, I'm using the Nvidia denoiser: when should I use it, before any other editing or after?
Grain: you already know part of the answer - it's largely about the amount of light that Iray has to work with over the number of samples you are allowing.

I've seen several experienced users comment in the Daz official forums to say they don't use the "% convergence" measure anymore - they just pick an arbitrarily high number of samples like 1000 or 1500 and render until it is reached. That is the method I am using now.

If you still have grain especially in the darker areas:

- add more light that shines directly into the grainy areas (use a dark blue color in your light to illuminate "shadow" areas if you need that effect)

- if you can, use an HDRI for lighting (and the Iray Indoor Camera tool you can find on these forums), because this allows light to come from all directions behind and around your camera toward your subject, without the speed, "visible light hotspot", or reflection issues that can occur if you have a lot of point lights/spots or emissive objects (like the Iray ghost lights kit)

- render at 2x dimensions (4x total pixels) and then resize by 0.5 back to your target size. When you resize, each pixel of the new smaller image is the average of 4 pixels in the large render, so grain (which is really the noticeable color difference between neighbouring pixels) gets smoothed away without losing fine detail. Note that this technique often lets you run far fewer samples (like only 600-1000), so the total render-time impact is not so bad.
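The averaging that makes the downscale trick work can be shown with a toy example. This is pure Python on a hypothetical grayscale grid, just to illustrate the 2x2 averaging; a real workflow would use Photoshop, ImageMagick, or similar on the actual render.

```python
def downscale_half(img):
    """Average non-overlapping 2x2 blocks of a 2D grayscale grid."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A flat mid-gray region with one bright "firefly" pixel:
big = [
    [100, 100, 100, 100],
    [100, 255, 100, 100],
    [100, 100, 100, 100],
    [100, 100, 100, 100],
]
print(downscale_half(big))
# -> [[138.75, 100.0], [100.0, 100.0]]
```

The lone 255 firefly gets pulled most of the way back toward its neighbours in one step, which is exactly why the downscaled render looks cleaner at the same sample count.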

If you try all of these things but just cannot get the effect you want, then look into the Batch Render tool and run renders at a high number of samples, taking 1-2 hours each overnight, to make your final renders. You can progress fast with shitty 10%-converged renders while making your scenes and character poses, then come back and do the final renders later once everything is set up correctly. Consider that if you later find things you want to change about the look or clothes or anything in your scene, you would have to re-render anyway. So it can make sense to leave the "final" render until you are happy with all the details.

Finally, I think post-processing and editing in Photoshop etc. can let you add a lot of nuance to the rendered images, and it's how some of the best-looking games get their unique "look". But it is heavy work if you need to do it manually for every render.

Good luck!
 

ddeadbeat

Bewbie
Game Developer
Mar 30, 2018
143
1,646
First thing you need to understand is what convergence means. In a few words it means that the last iteration did stuff all to change the current image. To be honest, it is a horrific way to measure how well a render has gone.
I've read your explanation several times, but I still don't fully understand what convergence is :D
Maybe you could rephrase it some other way?
EDIT: Okay, I think(?) I'm starting to understand something about this word. If I set it to 95%, does that mean that if the pixels in, say, iteration 999 and iteration 1000 are 95% the same, rendering will stop? Something like that? And the overall image quality might still be bad, but the desired convergence is achieved. Eh?
But I got your point that it's a bad way to measure final image quality.
Learn what is required from lighting to get scene converge and live by those rules.
But what is required? Atm I just understand it as "if the scene looks well-lit enough for my eyes and appropriate to the place, it's good to go". But even if it is not, instead of adding more light with, for example, ghost lights, I can just bump up the Exposure Value, or F/stop, or ISO, or shutter speed, etc., in the Tone Mapping tab. I don't quite understand the difference.
If you are rendering at final resolution, you need to get away with less than 1000 iterations.
Okay, so here's my example. I have an indoor scene, daytime. I have HDRI lighting; I use the Iray Indoor camera, and sometimes a regular Daz camera for POVs, since the Iray Indoor camera can't have a focal length below 60mm for some reason (the latter requires some tweaking with lights, removing the ceiling and/or walls, and the mentioned tone mapping). My tone mapping settings are: Exposure Value 12.75, F/stop 8, shutter speed ~100, ISO 100, other sliders untouched, if I remember correctly. I also blindly added Nominal Luminance in the Filtering tab, just because I read about it somewhere here, set to 1000 or 10000 (also, I don't see any difference at all with it). Also I have 1-3 simple sphere ghost lights around the character. They have an emission value of about 4000 to 20000 depending on the scene. I render at the final resolution, which is 1080p 16:9.
I read about rendering in 4k and then downsampling to 1080 to get the same quality or better, faster, but I usually do batch rendering overnight, so I thought - why bother.
Before your reply, I rendered with the convergence-reaching method; 95% convergence was reached after 1500-4000 iterations. If you are saying that I should reach good quality in fewer than 1000 and I don't - what does that mean, and what should I do? The obvious answer for me right now is: add more light! But then the scene would look overexposed. Do I need to add more light to the scene and then reduce stuff such as the Exposure Value, etc.? And that should lead to faster... c-converging... better converging? I really still don't understand the meaning of that word.

- add more light that shines directly into the grainy areas (use a dark blue color in your light to illuminate "shadow" areas if you need that effect)
I actually do that, again with ghost lights, but I thought that's, like, unprofessional? Because it's a game where you click through the images, and if in 50 images in the same room the lighting changes 10 times - it's kinda strange, if not annoying.

Sorry for a wall of text and thank you guys for already quite detailed answers!
 
Last edited:

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,468
4,446
To clarify what is meant by "converge".

The way raytracers work is that they create a bunch of virtual light rays from the camera. These rays are pointed to go through every pixel of the image. They hit object surfaces, and at that point the color (both hue and brightness) is calculated by simulating the light sources in the scene bouncing off that object surface at the point of the pixel and into the camera. There are many factors that can change the hue and brightness: the actual color of the surface material (hue), the color of the light(s) hitting that surface, any other ways the surface changes the color (surface decals, transparent coatings, sub-surface scattered light, etc.), and the angle of the light rays compared to the local surface angle (both the macro angle and the bumps/normals on top of it affect the angle). There are more and more factors that complex render engines add to this, but the details don't matter (until they do, lol).

It would take impossibly long to perfectly calculate the result. So raytracers use some randomisation in the exact origin of the rays (e.g. the virtual camera aperture is not a point, so the precise angle of the ray can vary across the virtual "lens"), and you get a variety of different calculated values for the final pixel color RGB.

Convergence is the average "spread" of variance in the resulting color of the pixels across the whole image.

High convergence means enough random rays have been shot out that confidence in the correct color for the pixels is high.

Lower convergence means there is more "random" variation in the colors.

Another way to look at it is how much the color of neighbouring pixels varies, i.e. shows a speckled "grain", when "realistically" there should be a slight smooth transition in color along a gradient.
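That shrinking spread can be illustrated with a toy Monte Carlo average. This is illustrative Python only, not Iray's actual estimator: each "sample" stands in for one randomized camera ray, and the pixel value is the running average, which settles toward the true value as samples accumulate (standard error shrinks roughly as 1/sqrt(N)).

```python
import random

random.seed(42)

def sample_pixel():
    # Stand-in for one randomized camera ray: the true value is 0.5,
    # plus noise from the randomised ray origin/angle.
    return 0.5 + random.uniform(-0.4, 0.4)

def running_error(n_samples):
    """Distance of the averaged pixel value from the true value 0.5."""
    total = 0.0
    for _ in range(n_samples):
        total += sample_pixel()
    return abs(total / n_samples - 0.5)

for n in (10, 100, 10000):
    print(n, round(running_error(n), 4))
# More samples -> the averaged pixel settles ("converges") near 0.5,
# so the error printed for 10000 samples is far smaller than for 10.
```

The leftover per-pixel error at low sample counts is exactly the speckled grain you see between neighbouring pixels in an under-converged render.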
 

Honey hunters

Member
Jan 23, 2020
118
26
Can anyone help me write simple gallery code that the player should unlock? I've searched many forums but I can't understand them.
 

Darx239

Newbie
Sep 26, 2017
43
71
MoneyShotz assets can't be loaded onto any scene or character. I don't know why. I hope someone knows where to find the missing files or how to fix this. SharedScreenshot.jpg
 

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,468
4,446
Can anyone help me write simple gallery code that the player should unlock? I've searched many forums but I can't understand them.
This post in the Lemmasoft renpy forums is one of the easiest to understand:



Another way would be to find a game that already has a gallery and view/copy that code (you might need to "UnRen" the RPA file to get the original code out of the archive.)

If it's still too hard for you, it's probably time to learn some programming, or find a project partner who can do it for you.
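For what it's worth, the logic behind most galleries boils down to a set of unlock flags. Below is a plain-Python sketch of that pattern (the names `unlock`, `gallery_slots`, and the image filenames are all made up for illustration); in actual Ren'Py you would keep the flags on `persistent` so they survive across playthroughs, and build the grid in a `screen` or with the built-in `Gallery` class.

```python
unlocked = set()   # in Ren'Py this would live on `persistent`

def unlock(image_name):
    """Call this at the point in the story where the image is first shown."""
    unlocked.add(image_name)

def gallery_slots(all_images):
    """For each gallery slot, show the image if unlocked, else a lock icon."""
    return [name if name in unlocked else "locked.png" for name in all_images]

gallery = ["cg_beach.png", "cg_night.png", "cg_ending.png"]
unlock("cg_beach.png")
print(gallery_slots(gallery))
# -> ['cg_beach.png', 'locked.png', 'locked.png']
```

The whole "unlockable gallery" feature is just: set a flag when the scene plays, and branch on that flag when drawing the gallery screen.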
 

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,468
4,446
MoneyShotz assets can't be loaded onto any scene or character. I don't know why. I hope someone knows where to find the missing files or how to fix this.
You haven't installed it correctly.

Look in the archive again and follow the instructions correctly. The folders into which you copy the file matter!
 

SvenVlad

Well-Known Member
Modder
Aug 11, 2017
1,885
8,969
Does anyone know the name of the asset for these shorts?


1589179193788.png 1589179150914.png

Sadly it's not on the page for these images


 

osanaiko

Engaged Member
Modder
Jul 4, 2017
2,468
4,446
I like that, too bad it's Gen 3
The RSSY Clothing Converters from the RiverSoft/SickleYield collaboration generally work pretty well. I've successfully been able to remap lots of clothing from G3 to G8. I'd recommend you give them a try.

The RSSY stuff is some of the best value in the Daz ecosystem, as it lets you reuse content you purchased for older Gens on the latest figures - and this means potentially taking advantage of the big discounts on older Gens' products.

Of course if you are a Salty Sea Dog Buccaneer who doesn't actually support the creators, it's just as useful.
 

Xavster

Well-Known Member
Game Developer
Mar 27, 2018
1,249
7,622
I've read your explanation several times, but I still don't fully understand what convergence is :D
Maybe you could rephrase it some other way?
EDIT: Okay, I think(?) I'm starting to understand something about this word. If I set it to 95%, does that mean that if the pixels in, say, iteration 999 and iteration 1000 are 95% the same, rendering will stop? Something like that? And the overall image quality might still be bad, but the desired convergence is achieved. Eh?
But I got your point that it's a bad way to measure final image quality.

But what is required? Atm I just understand it as "if the scene looks well-lit enough for my eyes and appropriate to the place, it's good to go". But even if it is not, instead of adding more light with, for example, ghost lights, I can just bump up the Exposure Value, or F/stop, or ISO, or shutter speed, etc., in the Tone Mapping tab. I don't quite understand the difference.
Convergence from my understanding just means that the successive iterations aren't changing the resultant image. It doesn't necessarily mean that images with high convergence don't have issues.

The main trick with scene lighting is to make it as simple as possible for the rendering engine to process and arrive at a good result. HDRIs are extremely good for fast convergence, as all of the source light rays traced are directional and low intensity. Hence the final result is primarily a result of light bouncing off the props and straight into the camera. As soon as you add additional lights to the scene, the resulting light at the camera can be from multiple sources. Worst of all are high-intensity lights: when light happens to reach the camera from such a source via a single bounce, it shows up as a very bright pixel. The rendering engine then has to run a large number of iterations to evaluate what the final image should be. Most of the lighting in my VN is from an HDRI plus, typically, a maximum of 1 emissive sphere. This emissive sphere is equivalent to a ghost light; however, I usually make it larger and lower intensity than a ghost light. Creating your own ghost light is incredibly simple and also gives you far better control. Also note that when you are using an HDRI for interior lighting, you will need to rotate the dome when the camera changes direction.

Sometimes this goes against what you are trying to achieve with a specific image, and you need more complex lighting with a higher number of iterations - for example, a night-time scene with lighting coming from street lights / cars etc. Also, if you have a highly reflective surface, you effectively double the light sources, which causes problems for convergence. One of my characters also has emissive skin, which causes real problems rendering. Animation frames in which she is included take about 10 times longer to render, as I have to bump up the iterations and each iteration takes longer.

There are other ways of lighting a scene for rapid convergence, such as 3-point lighting. The trick, however, is to make sure that light sources do not compete with each other. In the 3-point lighting system used for portraits, different portions of the face are lit by different lights. Similarly, for HDRI + emissive, you are better off having the emissive as a prop, such as a light on a table or a wall light. In the local vicinity it will dominate the lighting, while the HDRI takes over everywhere else.

Okay, so here's my example. I have an indoor scene, daytime. I have HDRI lighting; I use the Iray Indoor camera, and sometimes a regular Daz camera for POVs, since the Iray Indoor camera can't have a focal length below 60mm for some reason (the latter requires some tweaking with lights, removing the ceiling and/or walls, and the mentioned tone mapping). My tone mapping settings are: Exposure Value 12.75, F/stop 8, shutter speed ~100, ISO 100, other sliders untouched, if I remember correctly. I also blindly added Nominal Luminance in the Filtering tab, just because I read about it somewhere here, set to 1000 or 10000 (also, I don't see any difference at all with it). Also I have 1-3 simple sphere ghost lights around the character. They have an emission value of about 4000 to 20000 depending on the scene. I render at the final resolution, which is 1080p 16:9.
I read about rendering in 4k and then downsampling to 1080 to get the same quality or better, faster, but I usually do batch rendering overnight, so I thought - why bother.
Before your reply, I rendered with the convergence-reaching method; 95% convergence was reached after 1500-4000 iterations. If you are saying that I should reach good quality in fewer than 1000 and I don't - what does that mean, and what should I do? The obvious answer for me right now is: add more light! But then the scene would look overexposed. Do I need to add more light to the scene and then reduce stuff such as the Exposure Value, etc.? And that should lead to faster... c-converging... better converging? I really still don't understand the meaning of that word.
You can play with the camera properties on the Iray camera just as you do with any other camera; you just need to unlock the parameters. The only thing you need to be wary of with close-ups is that the lighting planes tied to the camera do not clip through objects. That has the effect of internally lighting the object. To correct it, just move the offending plane out.

As for your problems with convergence, they are actually being caused by the ghost lights. By having several competing with each other, you keep getting different results at the camera. If you want the scene to converge rapidly, just use the HDRI and no emissives. If you want a highlight on the character, then use a single 'ghost light', but make it larger with lower intensity. If you double the diameter of the ghost light and drop the intensity by a factor of 4, you get the exact same amount of light, but the scene will converge more rapidly.
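A quick back-of-the-envelope check of that double-the-diameter tip (illustrative Python; `total_output` is just surface area times luminance, not an actual Iray quantity, and the example sizes/intensities are made up):

```python
import math

def total_output(diameter, luminance):
    # A sphere's surface area is pi * d^2, so it scales with diameter squared:
    # doubling the diameter quadruples the area.
    area = math.pi * diameter ** 2
    return area * luminance

small = total_output(0.5, 20000)   # 0.5 m ghost light at 20000
large = total_output(1.0, 5000)    # 1.0 m sphere at a quarter the intensity
print(small, large, math.isclose(small, large))
# -> the two totals are equal
```

Same total light, spread over 4x the area: the larger, dimmer emitter produces fewer blown-out single-bounce "firefly" pixels, which is why it converges faster.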

I actually do that, again with ghost lights, but I thought that's, like, unprofessional? Because it's a game where you click through the images, and if in 50 images in the same room the lighting changes 10 times - it's kinda strange, if not annoying.

Sorry for a wall of text and thank you guys for already quite detailed answers!
For each room you need to set up the lighting to be consistent, even when you change the angle. What I would suggest is that you use a single emissive sphere in the center of the room (about 1m diameter, toward the ceiling) in conjunction with an HDRI and the Iray interior camera. As you change the rotation (side to side) of the camera, you also need to change the HDRI dome rotation. If you want hard shadows, bump up the emissive sphere intensity and lower the environment intensity. Conversely, if you want a softer look, wind up the environment intensity and lower the sphere intensity. Also note that the HDRI you use should be one designed for scene lighting rather than a background. I tend to use one particular HDRI frequently, as it has a combination of brown at the base, blue in the upper portion, and a hard white light as well. Hence when lighting a character it's like having the brown coming from wooden floors, blue from the walls/roof, and white from the main interior light.
 