Ren'Py Layered image leaves visible border: Help needed

TheGrinReaper

Member
Feb 21, 2022
148
2,557
I am learning how to use layered images for my game in Ren'Py. I created one render with the top clothing on and another one without it. I snipped out the topless area and used it in a layered image, but I can see the snip line in the final image (screenshots below):
topless.png full.png

Any ideas from senior devs on how to stop this happening???
 

TheGrinReaper

Member
Feb 21, 2022
148
2,557
Do not include the snip line when you save the image?

I mean, layered images just show what you ask them to show. This means the issue comes from the image itself, whatever the reason for it may be.
I don't think I am saving the snip line. What I did was this:
1. Render one image with the top / shirt on.
2. Render the same pose but with the shirt hidden; every other setting the same.
3. Open the topless image in PS and use the lasso tool to select the topless body/face area.
4. Invert the selection and delete everything else.
5. Save as PNG.
6. Use the PNG as an attribute in the layered image, and use the image with the top as the base image in the Ren'Py code.

Some more info:
Using the topless snip PNG as a layer on top of the clothed base image in PS does not show this border marking. It looks perfect in PS...

Another thing is that the two images are of a higher resolution, and I am resizing them to 1920x1080 using the im.Scale() method in Ren'Py...

Any idea whether any of the above could be the issue?

Many thanks!!
 

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
10,862
15,996
I don't think I am saving the snip line.
And as I said, layered images add nothing. If it's present when you show the image in Ren'Py, then it's present in the image itself, whatever "it" is; and it can be many things.

One thing I noted is that you cut really far from the model itself, which by itself is wrong. Not only should you cut as near as possible, but you should also add shading on the alpha channel around the model so it fits more naturally into the background.

This being said... since you include part of the background in the sprite, why even use sprites and layered images?
If it's just because you have two versions of the image, then something like this:
Python:
default imageState = "clothed"

label whatever:
    show expression "images/mc_idle_{}.jpg".format(imageState)
    "MC removed his shirt"
    $ imageState = "shirtless"
    show expression "images/mc_idle_{}.jpg".format(imageState)
    girl "What the fuck ! Where do you think you are ? Put back your shirt !"
    $ imageState = "clothed"
    show expression "images/mc_idle_{}.jpg".format(imageState)
    MC "Better ?"

    "END"
With two images, named "images/mc_idle_clothed.jpg" and "images/mc_idle_shirtless.jpg", this would do exactly the same thing, without the burden of defining layered images or doing the post-processing.
 
  • Like
Reactions: TheGrinReaper

79flavors

Well-Known Member
Respected User
Jun 14, 2018
1,607
2,254
My usual approach when doing anything with layers is to create a series of (normally hidden) layers of pure color. Red, Green, White and Black.

As you do stuff, occasionally show the most contrasting colored layers as the background - to make sure you aren't leaving artifacts around the edges.

I think what you are saying is that your baseline image is the clothed image.
Then you have a naked torso .PNG file which is sometimes overlaid on top. When the two pictures are shown together, you can see the outline.

That seems to imply that the outline is part of the shirtless .PNG file.
It doesn't look like a misalignment problem - as the white wall would still look the same, even if the image were a pixel or two too far left/right/up/down... plus you'd see that sort of mistake elsewhere within the picture.

Perhaps you could post a reply with both images attached. That way, we could download them and see if we see the same results.

Another thing to try would be to (temporarily) display both images in a different way.

Python:
scene clothed_body
show unclothed_torso

This too would show both images, with the "naked" version on top. If things behave differently, it could be a problem with the layeredimage statements. But if that were the case, I'm sure someone else would have come across it by now.

If you can attach the images, it would make it a lot easier for us to diagnose.
 
  • Like
Reactions: TheGrinReaper

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Donor
Respected User
Jun 10, 2017
10,862
15,996
That seems to imply that the outline is part of the shirtless .PNG file.
It doesn't look like a misalignment problem - as the white wall would still look the same, even if the image were a pixel or two too far left/right/up/down... plus you'd see that sort of mistake elsewhere within the picture.
At first, I thought about a combination of anti-aliasing and resizing.
The anti-aliasing would come from Photoshop, which wouldn't cut abruptly but would instead try to soften the transition between the part kept and the expected future background. It wouldn't be pure anti-aliasing but probably, as I said above, shading on the alpha channel to make the image blend smoothly with its background.
Then, when he resizes the image in Ren'Py, and when Ren'Py resizes it itself, this would compress the anti-aliased part, creating a weird effect.
But I stopped my thoughts there. The pixels are expected to have the same color and only the alpha channel would be mixed, which wouldn't change said color; especially when put on top of a background that also has the same color. When you put white on top of white, you still get white, whatever the alpha channel value.

Then I read what you wrote and wondered: why do we assume that the images are PNG?
Of course, the ones he posted are. But PNG is the best way to ensure that the problem will not disappear due to compression.

Therefore, what if the images he uses in the game are in fact JPEG, and so have no alpha channel?
The anti-aliased part would then be replaced by a shade of white when the image is saved. And later, when Ren'Py does the resizing, the mix of two or more pixels (we also don't know the original resolution) would lead to a greyer line, exactly like the one we see in his screenshots.

So, I took another look at the shirtless image, this time at the right-hand part of it. If it were purely due to the use of layered images, the line would be constant on both sides. Less visible on the right due to its color, but still perfectly constant. And the fact is that it's not constant.
If you follow the process described above, on this side you also get shades of grey (since the base color is grey). But when you mix two or more pixels together, you get a darker grey that, half of the time, blends naturally with the background, and half of the time is a bit darker, which makes the seam visible.

So, yeah, I guess that's the problem.
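To make the hypothesis concrete, here is a numeric sketch in Python (the pixel values and helper names are mine, purely illustrative, not Ren'Py's internals):

```python
# Sketch of the hypothesized failure:
# 1) Photoshop feathers the cut edge by fading the ALPHA channel.
# 2) Saving as JPEG discards alpha, so the fade is flattened onto white,
#    baking slightly-off-white pixels into the image.
# 3) Downscaling then averages neighbouring pixels, smearing those
#    off-white values into a faint grey seam.

def flatten_on_white(value, alpha):
    """Composite a greyscale pixel over a white background (alpha 0-255)."""
    return (value * alpha + 255 * (255 - alpha)) // 255

def downscale_pair(a, b):
    """Naive 2:1 downscale: average two neighbouring pixels."""
    return (a + b) // 2

# A white (255) edge pixel fading out at alpha 128: white over white is
# still white, so alpha alone cannot produce the line.
print(flatten_on_white(255, 128))  # 255

# But if the edge pixel picked up any darker shading (say 200) before the
# alpha was discarded, flattening bakes it in as an off-white pixel:
baked = flatten_on_white(200, 128)
print(baked)                        # 227
print(downscale_pair(baked, 255))   # 241: visibly greyer than the 255 wall
```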
 
  • Like
Reactions: TheGrinReaper

TheGrinReaper

Member
Feb 21, 2022
148
2,557
At first, I thought about a combination of anti-aliasing and resizing.
The anti-aliasing would come from Photoshop, which wouldn't cut abruptly but would instead try to soften the transition between the part kept and the expected future background. It wouldn't be pure anti-aliasing but probably, as I said above, shading on the alpha channel to make the image blend smoothly with its background.
Then, when he resizes the image in Ren'Py, and when Ren'Py resizes it itself, this would compress the anti-aliased part, creating a weird effect.
But I stopped my thoughts there. The pixels are expected to have the same color and only the alpha channel would be mixed, which wouldn't change said color; especially when put on top of a background that also has the same color. When you put white on top of white, you still get white, whatever the alpha channel value.

Then I read what you wrote and wondered: why do we assume that the images are PNG?
Of course, the ones he posted are. But PNG is the best way to ensure that the problem will not disappear due to compression.

Therefore, what if the images he uses in the game are in fact JPEG, and so have no alpha channel?
The anti-aliased part would then be replaced by a shade of white when the image is saved. And later, when Ren'Py does the resizing, the mix of two or more pixels (we also don't know the original resolution) would lead to a greyer line, exactly like the one we see in his screenshots.

So, I took another look at the shirtless image, this time at the right-hand part of it. If it were purely due to the use of layered images, the line would be constant on both sides. Less visible on the right due to its color, but still perfectly constant. And the fact is that it's not constant.
If you follow the process described above, on this side you also get shades of grey (since the base color is grey). But when you mix two or more pixels together, you get a darker grey that, half of the time, blends naturally with the background, and half of the time is a bit darker, which makes the seam visible.

So, yeah, I guess that's the problem.
Thanks a lot for your detailed reply. I did not understand some things, like anti-aliasing and alpha, but I will try to read about them. The system I was rendering on stopped working and has been taken in for repair. Once I get it back (2-3 days), I will try to share the exact files I am using, and my block of code, here.
Again, thanks for your time and advice!!
 

TheGrinReaper

Member
Feb 21, 2022
148
2,557
My usual approach when doing anything with layers is to create a series of (normally hidden) layers of pure color. Red, Green, White and Black.

As you do stuff, occasionally show the most contrasting colored layers as the background - to make sure you aren't leaving artifacts around the edges.

I think what you are saying is that your baseline image is the clothed image.
Then you have a naked torso .PNG file which is sometimes overlaid on top. When the two pictures are shown together, you can see the outline.

That seems to imply that the outline is part of the shirtless .PNG file.
It doesn't look like a misalignment problem - as the white wall would still look the same, even if the image were a pixel or two too far left/right/up/down... plus you'd see that sort of mistake elsewhere within the picture.

Perhaps you could post a reply with both images attached. That way, we could download them and see if we see the same results.

Another thing to try would be to (temporarily) display both images in a different way.

Python:
scene clothed_body
show unclothed_torso

This too would show both images, with the "naked" version on top. If things behave differently, it could be a problem with the layeredimage statements. But if that were the case, I'm sure someone else would have come across it by now.

If you can attach the images, it would make it a lot easier for us to diagnose.
Thank you for the reply... My system has gone in for repair. Once I have it back, I will post the original images and the lines of code I am using.... Thank you!!
 
  • Like
Reactions: 79flavors

79flavors

Well-Known Member
Respected User
Jun 14, 2018
1,607
2,254
I did not understand some things, like anti-aliasing and alpha

Anti-aliasing is where graphics swap the absolutely correct color for a pixel with a slightly different color, so it blends in better with the pixels around it.

Imagine a black and white pair of triangles next to each other along their diagonal.
In absolute terms, each pixel should be either black or white. But that looks jagged.
Anti-aliasing will change some of the pixels along the boundary between the two areas to varying shades of gray.
The end result is that instead of looking like a saw-tooth of either black or white - the boundary looks more like a straight line.
(at least to a human being).

pA7uy[1].png


Alpha is just another way of describing transparency.

In the same way as RGB describes how much Red, Green and Blue an individual pixel should have in it (0-255), Alpha describes how visible that pixel is from 0 (completely clear) to 255 (completely opaque). A pixel of alpha=128 will be 50% transparent. It really only matters when you are overlapping images, where how much of the image below is shown is dependent upon the alpha of the boundary image/pixels above it.

It kind of overlaps with how anti-aliasing works. Imagine instead of black and white... black and green. All those gray pixels would be shades of gray'y green. It can be achieved by varying the alpha levels of those pure black pixels so the underlying color bleeds through (of course, the math is way more complicated than that).
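That blending rule can be written as a tiny sketch (standard "over" compositing on a single channel; this is an illustration, not Ren'Py's or Photoshop's exact math):

```python
def blend(top, bottom, alpha):
    """Alpha-composite one color channel: alpha 0 = fully clear, 255 = opaque."""
    return (top * alpha + bottom * (255 - alpha)) // 255

# A pure black pixel over a pure green background, at roughly 50% alpha:
# the green channel comes out around 127 -- the "gray'y green" described above.
print(blend(0, 255, 128))  # 127
```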

I think Anne was leading toward PS perhaps anti-aliasing the edges of your cut-out image. But instead of varying the alpha channel of the individual pixels, it was leaving varying shades of gray, which might be showing up as that outline. But as a white image against a white background, it's unclear whether it would work that way.

But yeah... the actual images when you get your computer back will remove our need to speculate.
 
Last edited:
  • Like
Reactions: TheGrinReaper

redle

Active Member
Apr 12, 2017
618
1,080
First, rendering isn't a perfect construct. That's why people end up needing tools like denoisers. What this means is that rendering twice (especially when the scene isn't 100% identical) can result in variations even in areas that you expect not to change (like a wall).

Second, that line is showing up nowhere close to where the actual item you are trimming is. If you trim along the actual borders of items, then even if variation occurs it will not be anywhere near as obvious as when you have a very rough, jagged line wiggling through a white wall.

Third, do your trimming in the other direction. You are using the "bigger" version as your base and overlaying something smaller (man without clothes is smaller than man with clothes). This means you must include something in the overlay that shouldn't even be part of the overlay. You must include part of the wall to cover up the clothing edges which are not part of the zone the shirtless man will cover.

Fourth, why even do manual trimming? You can render the full scene with the shirtless man, and then turn off the wall and rerender with a shirt and the render will already be ready for overlay without needing to do any trimming. The render will be made with transparency built-in where there were no objects to show.

Fifth, JPGs are generally much smaller than PNGs in terms of disk usage. The gain from saving "less" of an image using transparency is often lost by needing to save in a format that allows transparency at all. What I mean is, simply using the two original full-scene images you started with often uses less (or the same) disk space and time within the game as using one full scene plus one alpha-masked partial. Be sure you have a solid reason why the extra work of making the overlay is actually beneficial before you commit to it.

Sixth, images stored with transparency need to define the transparent zones, and there is more than one way to store that information. But, as was mentioned before in regard to anti-aliasing, most likely those lines are showing up because of the scaling being performed. An image is made up of distinct pixels. When you scale an image, all the original pixel descriptions basically become invalid: seven pixels wide becomes five pixels (or whatever the scaling factor dictates). The point is, algorithms must be run to calculate a best approximation of every new pixel in the resulting image, which can produce slight variations. Not only is this true of the pixel colors, it is also true of the transparent zones. And depending on the algorithms, formats, and scaling factors used, it is possible to get new pixels with a slight variation in "how much" transparency is needed. This can result in a small percentage of the alpha-mask color / background-defined color being used as part of the pixel color calculation.
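A minimal sketch of this sixth point, assuming a naive resizer that averages RGBA channels without weighting by alpha (the pixel values are hypothetical):

```python
def naive_downscale_pair(p1, p2):
    """Average two RGBA pixels channel by channel, ignoring alpha weighting.
    Resizers that do this let the stored color of transparent pixels bleed in."""
    return tuple((a + b) // 2 for a, b in zip(p1, p2))

opaque_white = (255, 255, 255, 255)  # the wall, fully opaque
clear_black = (0, 0, 0, 0)           # fully transparent, but stores black

# The merged edge pixel comes out half-opaque AND half-darkened by the
# hidden black: a semi-transparent grey, i.e. a visible seam along the cut.
print(naive_downscale_pair(opaque_white, clear_black))  # (127, 127, 127, 127)
```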

Seventh, people don't want to pay the bandwidth, disk space, and loading/calculation costs associated with higher resolution images if what they get to see is a lower resolution image anyway. You may well have reasons why you want to do it, but I'm having a hard time coming up with what they might be. If you are always going to scale it smaller before it is made visible, scale it outside of renpy and distribute the already scaled images.

Eighth, I'll mention that you can use a blend transition zone to clean up hard lines like that if you can't find another way to suit your needs. Basically, don't have any hard border on your overlay. Have your overlay go out as far as the overlay is mandatory and from that point use a gradient to slowly transition out to full transparency rather than doing it all at once.
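The gradient in this eighth point could be sketched as a linear alpha ramp (an illustration only; the 8-pixel border width is an arbitrary choice):

```python
def alpha_ramp(width):
    """Linear alpha falloff for an overlay border: fully opaque where the
    overlay is mandatory, fading to fully transparent over `width` pixels."""
    return [round(255 * (1 - i / (width - 1))) for i in range(width)]

print(alpha_ramp(8))  # [255, 219, 182, 146, 109, 73, 36, 0]
```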
 
  • Like
Reactions: TheGrinReaper

TheGrinReaper

Member
Feb 21, 2022
148
2,557
I got my system back today (so costly, the repair :eek:) and I tried the images again. This time I also tried rendering the topless image (the top layer in the layered image) at 1920x1080, so I wouldn't have to scale it in Ren'Py, and this time it didn't show any outline.

So it was definitely being caused by rescaling. Probably, like redle mentioned, the scaling down was messing with the edge pixels between the transparent and non-transparent parts of the image.

BIG thank you anne O'nymous , 79flavors and redle for the advice.

For others reading this post later... my ISSUE IS SOLVED

It got SOLVED by not rendering the top image at a higher resolution and then using im.Scale() to bring it down. I just rendered at the game's screen resolution and got a perfect result.

1654977320080.png
 
Last edited:

TheGrinReaper

Member
Feb 21, 2022
148
2,557
First, rendering isn't a perfect construct. That's why people end up needing tools like denoisers. What this means is that rendering twice (especially when the scene isn't 100% identical) can result in variations even in areas that you expect not to change (like a wall).

Second, that line is showing up nowhere close to where the actual item you are trimming is. If you trim along the actual borders of items, then even if variation occurs it will not be anywhere near as obvious as when you have a very rough, jagged line wiggling through a white wall.

Third, do your trimming in the other direction. You are using the "bigger" version as your base and overlaying something smaller (man without clothes is smaller than man with clothes). This means you must include something in the overlay that shouldn't even be part of the overlay. You must include part of the wall to cover up the clothing edges which are not part of the zone the shirtless man will cover.

Fourth, why even do manual trimming? You can render the full scene with the shirtless man, and then turn off the wall and rerender with a shirt and the render will already be ready for overlay without needing to do any trimming. The render will be made with transparency built-in where there were no objects to show.

Fifth, JPGs are generally much smaller than PNGs in terms of disk usage. The gain from saving "less" of an image using transparency is often lost by needing to save in a format that allows transparency at all. What I mean is, simply using the two original full-scene images you started with often uses less (or the same) disk space and time within the game as using one full scene plus one alpha-masked partial. Be sure you have a solid reason why the extra work of making the overlay is actually beneficial before you commit to it.

Sixth, images stored with transparency need to define the transparent zones, and there is more than one way to store that information. But, as was mentioned before in regard to anti-aliasing, most likely those lines are showing up because of the scaling being performed. An image is made up of distinct pixels. When you scale an image, all the original pixel descriptions basically become invalid: seven pixels wide becomes five pixels (or whatever the scaling factor dictates). The point is, algorithms must be run to calculate a best approximation of every new pixel in the resulting image, which can produce slight variations. Not only is this true of the pixel colors, it is also true of the transparent zones. And depending on the algorithms, formats, and scaling factors used, it is possible to get new pixels with a slight variation in "how much" transparency is needed. This can result in a small percentage of the alpha-mask color / background-defined color being used as part of the pixel color calculation.

Seventh, people don't want to pay the bandwidth, disk space, and loading/calculation costs associated with higher resolution images if what they get to see is a lower resolution image anyway. You may well have reasons why you want to do it, but I'm having a hard time coming up with what they might be. If you are always going to scale it smaller before it is made visible, scale it outside of renpy and distribute the already scaled images.

Eighth, I'll mention that you can use a blend transition zone to clean up hard lines like that if you can't find another way to suit your needs. Basically, don't have any hard border on your overlay. Have your overlay go out as far as the overlay is mandatory and from that point use a gradient to slowly transition out to full transparency rather than doing it all at once.
I was using a trimmed image because, in a tutorial on how to get different clothes for characters, I saw that they were using trimmed images of the character in different clothes. They also said that the render for the second clothing variant can be quicker if you hide the background things around the character in DAZ and only keep the things that will be visible in the trimmed image.... Is that not the right way to do it? I am very new to rendering and DAZ, so apologies that I do not know the best practices.
Similarly, I read somewhere that using a higher resolution image and downscaling it in Ren'Py removes slight noise in the image if it is present.... that is why I was doing it like this....
Your advice is very helpful.... I would be very grateful if you could tell me where I am going wrong and what you would suggest.
 

redle

Active Member
Apr 12, 2017
618
1,080
I was using a trimmed image because, in a tutorial on how to get different clothes for characters, I saw that they were using trimmed images of the character in different clothes. They also said that the render for the second clothing variant can be quicker if you hide the background things around the character in DAZ and only keep the things that will be visible in the trimmed image.... Is that not the right way to do it? I am very new to rendering and DAZ, so apologies that I do not know the best practices.
Similarly, I read somewhere that using a higher resolution image and downscaling it in Ren'Py removes slight noise in the image if it is present.... that is why I was doing it like this....
Your advice is very helpful.... I would be very grateful if you could tell me where I am going wrong and what you would suggest.
TLDR: It is mostly personal preference and/or the needs of the specific project being created.

It is true that scaling an image can remove some noise, but this is because scaling an image to a smaller size is basically the same thing as blurring it. Think of it this way, you start with a 2x2 grid of pixels and you scale it down to a 1x1 grid. You are scaling it in half. Well, if you have 1 bad pixel colored black (i.e. noise) and 3 good pixels colored blue, the average is a 75% blue pixel and all the black is blended in. You blurred them together. Scaling and/or blurring does reduce noise at the cost of losing some detail. If you rendered the 1x1 pixel image directly, most times you would get 100% blue as your final result. But every now and then you might get that 100% black. Most things are trade-offs and you need to choose which aspects are important to you and where you want to spend your time and effort.
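The 2x2 example above, in numbers (a sketch with made-up pixel values):

```python
def average_block(pixels):
    """Box-filter downscale: average a block of RGB pixels into one pixel."""
    n = len(pixels)
    return tuple(sum(channel) // n for channel in zip(*pixels))

blue = (0, 0, 255)
noise = (0, 0, 0)  # the one bad, black pixel from the render

# Scaling the 2x2 block down to 1x1 blends the noise away, at the cost of
# a slightly darker ("blurred") blue than a direct 1x1 render would give.
print(average_block([blue, blue, blue, noise]))  # (0, 0, 191)
```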

Either way though, do not scale smaller at runtime. You are making your download significantly larger for no benefit. If you decide to render large and scale down, do so with image-editing tools before your release and distribute the correct size image.

Which is better? To use whole images for every display your game shows, or use a background image with overlays to make changes... There is no right or wrong answer here. Both techniques work and have some benefits and negatives. Just find which way works best for you. People who use the overlay technique often will render background (i.e. only the room) images. Then all of the person and clothing images are overlays. You can render the naked man as part of the background, but you limit the usefulness of that background image. You can not then slide that character to the left or change his pose. You can not replace that character with a different character. Whatever your needs you should never need to add an overlay to "restore" the background (i.e. erasing the character by having an overlay that is the wall).

If you delete or hide objects within your scene before rendering, then yes, your rendering time will generally go down. Whether or not shorter render times actually saves you time is a different question. When you render partial scenes at some point they must be combined to make a full scene. Whether you do it at game-time in ren'py or do it in an image editor there is at least a little bit more work needed for you to make the final look that the player sees than if Daz made the final image every time.

Also note that hiding objects changes where shadows occur as well as light reflection. Just because an object is not directly visible in your picture does not mean it is not altering your picture. Once again there is no right or wrong here. Shadows can be added or removed intentionally or accidentally. I simply mention them because it is something to consider when deciding what process works best for you.
 

wooyme

New Member
Oct 6, 2023
1
0
I got my system back today (so costly, the repair :eek:) and I tried the images again. This time I also tried rendering the topless image (the top layer in the layered image) at 1920x1080, so I wouldn't have to scale it in Ren'Py, and this time it didn't show any outline.

So it was definitely being caused by rescaling. Probably, like redle mentioned, the scaling down was messing with the edge pixels between the transparent and non-transparent parts of the image.

BIG thank you anne O'nymous , 79flavors and redle for the advice.

For others reading this post later... my ISSUE IS SOLVED

It got SOLVED by not rendering the top image at a higher resolution and then using im.Scale() to bring it down. I just rendered at the game's screen resolution and got a perfect result.

View attachment 1865159
Hello TheGrinReaper,
I encountered the same problem as you. When I use LayeredImage to combine several images, there are outlines between them that should not appear. I read the above answers, but I still don't quite understand how you solved this problem. Could you please provide some code for my reference?
 

TheGrinReaper

Member
Feb 21, 2022
148
2,557
Hello TheGrinReaper,
I encountered the same problem as you. When I use LayeredImage to combine several images, there are outlines between them that should not appear. I read the above answers, but I still don't quite understand how you solved this problem. Could you please provide some code for my reference?
I was facing the issue because, when I was using different layers, I was rendering at 2K, cutting the layers, and then downscaling them within Ren'Py using im.Scale(). When the downscaling happened, the pixel edges of the layers with transparency were affected by whatever mathematical formula was being used to downscale the group of pixels at the edge, causing a slight difference in colour at the very edge of the layer compared to the background, and making the edges visible.
I solved it by:
1. Downscaling the images in external software before cutting the layers
2. Making 1080p images for both background and foreground (so there is no need to downscale)

This solved my issue.
Does this help you?

-TGR