[Stable Diffusion] Prompt Sharing and Learning Thread

Fuchsschweif

Active Member
Sep 24, 2019
954
1,513

It's what I use to answer these questions whenever I have them.
1280x720 is kinda high to start out.. but I guess I can just start with half these values, if I am correct that 2x upscaling means doubling the resolution..
 

hkennereth

Member
Mar 3, 2019
228
740
1280x720 is kinda high to start out.. but I guess I can just start with half these values, if I am correct that 2x upscaling means doubling the resolution..
When rendering with SD1.5 based models, I always keep one of the dimensions 512px, and just increase the other to get the screen ratio I want.

For SDXL the math is actually slightly different, since the recommendation is to make images with roughly the same total number of pixels as a square 1024 x 1024px image, so if you increase one edge to change the ratio, you should decrease the other as well. To help with that, I have a custom node installed that provides a list of preset resolutions you feed into the Empty Latent Image node. Like so:
1706895838124.png
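If it helps to see the arithmetic behind both rules of thumb, here's a rough Python sketch (just the math, not an actual ComfyUI node; the snapping to multiples of 64 and the example ratio are my own assumptions):

```python
# Rough sketch of both rules of thumb above -- just the arithmetic, not a ComfyUI node.
def snap64(x: float) -> int:
    """Round to the nearest multiple of 64."""
    return max(64, round(x / 64) * 64)

def sd15_size(aspect: float) -> tuple[int, int]:
    """SD1.5: keep the short edge at 512px and grow the other to match the aspect ratio."""
    if aspect >= 1.0:
        return snap64(512 * aspect), 512      # landscape
    return 512, snap64(512 / aspect)          # portrait

def sdxl_size(aspect: float) -> tuple[int, int]:
    """SDXL: keep the total pixel count near 1024 x 1024, whatever the ratio."""
    height = (1024 * 1024 / aspect) ** 0.5
    return snap64(height * aspect), snap64(height)

print(sd15_size(16 / 9))   # (896, 512)
print(sdxl_size(16 / 9))   # (1344, 768), one of the usual SDXL presets
```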
 
  • Like
Reactions: Mr-Fox

Fuchsschweif

Active Member
Sep 24, 2019
954
1,513
When rendering with SD1.5 based models, I always keep one of the dimensions 512px, and just increase the other to get the screen ratio I want.
But YYYYx512 upscaled by two wouldn't make 1080, would it? That would equal 1024. That's why I thought I have to use values that, doubled or tripled up, create the desired target resolution.

Or am I misunderstanding how upscaling works?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
But YYYYx512 upscaled by two wouldn't make 1080, would it? That would equal 1024. That's why I thought I have to use values that, doubled or tripled up, create the desired target resolution.

Or am I misunderstanding how upscaling works?
I believe there might be an option for upscaling to a target resolution rather than using a multiplier. I have no idea how it works in ComfyUI though.. :geek: If not, maybe you can use a factor of about 2.1 (1080 / 512 ≈ 2.11).
 
Last edited:
  • Like
Reactions: Sepheyer

hkennereth

Member
Mar 3, 2019
228
740
But YYYYx512 upscaled by two wouldn't make 1080, would it? That would equal 1024. That's why I thought I have to use values that, doubled or tripled up, create the desired target resolution.

Or am I misunderstanding how upscaling works?
No, you're not wrong. But you can upscale either by a multiplier (like 2x) or to an exact size, if you want to go that route.
1706908213769.png
It's also not the end of the world if you use a slightly different image size to start with. I said I always start with 512px because... that's what works for my needs: I usually don't need to upscale directly to a specific screen size, and sticking to 512px or close avoids issues with character duplication. I'll usually upscale past the point I need, and downscale/crop later if needed.

But you can just start with a 960 x 544 px latent image (the values need to be divisible by 8, so there's no option to start at 540 for a direct 2x upscale to 1080), and crop the image from 1088 down to 1080px later. There's even a node to do that directly inside Comfy; I don't think 8px is worth taking into Photoshop or similar just to choose which way to crop.
1706909004801.png
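If you'd rather see that math written out, here's a rough sketch (Pillow is just for illustration and the file names are hypothetical; inside Comfy you'd use an image crop node instead):

```python
# The numbers from the paragraph above, written out (Pillow only for illustration;
# inside Comfy you'd use an image crop node instead).
from PIL import Image

start_w, start_h = 960, 544              # divisible by 8; half of the 1920x1088 we'll crop from
up_w, up_h = start_w * 2, start_h * 2    # 1920 x 1088 after the 2x upscale

img = Image.open("upscaled.png")         # hypothetical path to the 1920x1088 result
top = (up_h - 1080) // 2                 # 4px trimmed from the top and 4px from the bottom
img.crop((0, top, up_w, top + 1080)).save("final_1080.png")
```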
 

me3

Member
Dec 31, 2016
316
708
So I'm struggling slightly with providing much detail on this; I've included the prompt I started with, but I'm not sure how much help it'll be.
The reason, I guess, can be summed up with "inpainting"...
The general (and very repetitive) workflow is just (re)loading the image, drawing mask(s) and painting in more/new/replacement bits.
Somewhere along the line I've managed to get a white line at the top and bottom, no idea when or why that showed up.
Then when I did the final slight denoising and upscaling to "blend" things, it made it look like she has dried snot coming up and out of her left nostril... classy lady, probably has bigger things on her mind.

inpaint_0001.jpg

 
  • Like
Reactions: VanMortis

hkennereth

Member
Mar 3, 2019
228
740
So I'm struggling slightly with providing much detail on this; I've included the prompt I started with, but I'm not sure how much help it'll be.
The reason, I guess, can be summed up with "inpainting"...
The general (and very repetitive) workflow is just (re)loading the image, drawing mask(s) and painting in more/new/replacement bits.
Somewhere along the line I've managed to get a white line at the top and bottom, no idea when or why that showed up.
Then when I did the final slight denoising and upscaling to "blend" things, it made it look like she has dried snot coming up and out of her left nostril... classy lady, probably has bigger things on her mind.

View attachment 3322590

I believe the technical description of the cause of those lines around the image is "it do be like that sometimes".

It just happens. Sometimes on first generation images, sometimes due to upscaling... but for no particular reason, it's just an SD thing. If it bothers me too much I use something like Content-Aware Fill in Photoshop or Photopea to remove it from the final image. Most times I don't even bother.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
So I'm struggling slightly with providing much detail on this; I've included the prompt I started with, but I'm not sure how much help it'll be.
The reason, I guess, can be summed up with "inpainting"...
The general (and very repetitive) workflow is just (re)loading the image, drawing mask(s) and painting in more/new/replacement bits.
Somewhere along the line I've managed to get a white line at the top and bottom, no idea when or why that showed up.
Then when I did the final slight denoising and upscaling to "blend" things, it made it look like she has dried snot coming up and out of her left nostril... classy lady, probably has bigger things on her mind.

View attachment 3322590

I would suggest the same as hkennereth. Simply fix those things in Photoshop; it's much easier and faster than trying to chase down the issue and then trying to get the same image again. Knowing SD, it's not gonna happen..
For the white edges I would either use Content-Aware Fill or simply crop them off. For the "snot", I would try the clone stamp tool or possibly Content-Aware Fill.
 
  • Like
Reactions: hkennereth

Fuchsschweif

Active Member
Sep 24, 2019
954
1,513
Hey guys, I've got this workflow for upscaling purposes but it crashes my computer.

I'd need a latent output so I can put a tiled VAE encoder between the sampler and the final saved image, in order to prevent these crashes. But there's only an image output from the SD Upscale node.

Any idea what I could put in there in order to route it through a tiled VAE encoder?

1707072421453.png
 

Fuchsschweif

Active Member
Sep 24, 2019
954
1,513
I don't have ControlNet, but everywhere online it says that Ultimate SD Upscale and ControlNet are used together. And I just found this:

"
What does ControlNet Tile do?
At its core, ControlNet Tile operates like an intelligent tiling system. Rather than arbitrarily enlarging your image, it carefully assesses and accurately replicates every tiny section in 512x512 chunks. By working on these manageable pieces, it can pay attention to the intricacies of each segment. When these sections are finally pieced together, the end result is a larger image that hasn't lost any detail. In fact, with the additional power of text-driven guidance, it often introduces new, harmonious details. So, instead of a simple upscale, what you get is an image that not only retains but often surpasses the original's clarity and vibrance.
"

Do I need that too in order to make it work?
 

hkennereth

Member
Mar 3, 2019
228
740
Okay Fuchsschweif... you're going over a ton of different questions that have different unrelated answers... and you have a picture of a workflow that honestly raises more questions than it answers (why are you calculating the upscale factor from the source image there, instead of just inputting something like 2x into the Ultimate Upscale node itself?).

No, you don't NEED ControlNet Tile to use upscale, be it normal upscale or with the Ultimate Upscale node... but it can help. It is not a silver bullet solution to every problem, and it's not guaranteed that you will even get good results from it. It's the kind of thing you should test yourself, and see if it works for your needs because different types of images need different upscaling workflows. But using ControlNet increases processing time and memory usage, so if you're using a limited and old graphics card that will only make things worse.

None of that probably has anything to do with your crashing problem... which you need to be more specific about anyway. What do you mean by "crashes your computer"? Are you getting a blue screen and having to reboot? Is Comfy crashing and closing? Is Comfy just refusing to render that image and showing a red error screen? These are very different situations with different potential workarounds, and if you mean the first case (which is the only one I would actually describe as "crashing your computer"), it probably has very little to do with your specific workflow; that is more likely a hardware issue.

You need to be specific in your questions if you want to get good answers.

Edit: also, I forgot to mention that: you don't need a Tiled VAE Encoder/Decoder there. The Upscale node already outputs a pixel file, so there is no reason or point to encoding the image back to a latent image, just to decode it back to pixels so you can save it. That has zero to do with your crashing issue.
 
Last edited:

Fuchsschweif

Active Member
Sep 24, 2019
954
1,513
Okay @Fuchsschweif... you're going over a ton of different questions that have different unrelated answers... and you have a picture of a workflow that honestly raises more questions than it answers (why are you calculating the upscale factor from the source image there, instead of just inputting something like 2x into the Ultimate Upscale node itself?).
The workflow was shared and explained by one of the developers at OpenAI itself, who works together with ComfyUI.

The upscale factor is calculated from the source image itself, so you can throw in whatever picture you want and upscale it as much as you want. You don't have to do the math anymore: the math operator nodes read the width and height of the original and feed the correct values into the upscaler, so you always get the correct upscale values out of it.
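As far as I understand it, those math nodes boil down to something like this (my own rough sketch, not the actual workflow nodes, and the 1920x1080 target is just the example from this thread):

```python
# My own rough sketch of what I think those math nodes do -- not the actual workflow nodes.
def upscale_factor(src_w: int, src_h: int, target_w: int = 1920, target_h: int = 1080) -> float:
    """Multiplier that scales the source just enough to cover the target resolution."""
    return max(target_w / src_w, target_h / src_h)

print(upscale_factor(512, 512))   # 3.75 -> 1920x1920, crop down to 1080 afterwards
print(upscale_factor(960, 544))   # 2.0  -> 1920x1088, crop down to 1080 afterwards
```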

But using ControlNet increases processing time and memory usage, so if you're using a limited and old graphics card that will only make things worse.
"Controlnet tile" sounded like another tiled encoder/decoder to me, so since those are the ones preventing my system from crashes, I thought they might be a solution.

None of that probably has anything to do with your crashing problem... which you need to be more specific about anyway. What do you mean by "crashes your computer"?
The computer shuts off completely and immediately, no bluescreen, no nothing. I have to cut it off from power for 5-10 seconds in order to turn it on again, which points at a problem with the memory needing to unload. I already explained this on the previous pages.

That's why the tiled VAE encoder and decoder prevent my system from crashing: they work around the issue that my VRAM is probably going over its limit, which shuts everything down.

With the tiled VAE versions I've had zero crashes, while the regular ones often crash my system when they reach 99% and are about to output the final picture.

Edit: also, I forgot to mention that: you don't need a Tiled VAE Encoder/Decoder there. The Upscale node already outputs a pixel file, so there is no reason or point to encoding the image back to a latent image, just to decode it back to pixels so you can save it. That has zero to do with your crashing issue.
I know that, and yet it does. The crash is happening because the VRAM is probably hitting its limits, and a tiled VAE encoder would help me prevent that. So I wondered whether there's a way to incorporate a tiled version of an encoder into this workflow. I tried the switch at the bottom of the upscaler, but that one didn't work.

I'm pretty sure I won't be able to fix these shutdowns. This happened with A1111 and ComfyUI, and it doesn't happen with any other heavy rendering or processing software, so I don't think it's my hardware; otherwise I would expect it to crash whenever things get heavy. And like I said on previous pages, it only happens when using upscalers, and only when they are at 99% and go into the VAE decode stage. So it's not a problem of overheating or a faulty PSU, otherwise it would happen more randomly, not always at the exact same stage.

But if you got any other idea how to troubleshoot the issue, let me know :p
 
Last edited:

hkennereth

Member
Mar 3, 2019
228
740
The upscale factor is calculated from the source image itself, so that you can throw in whatever picture you want and upscale it as many times as you want. You don't have to do the math anymore since these mathematical operators will gather the width and height of the original and feed the correct values into the upscaler, so that you always get the correct upscale values out of it.
Yeah, but that's why it doesn't make sense. That field is already a relative factor; the default value is 2. That means it will take whatever image size you put there and make it two times larger. It doesn't matter what the original image size is. I'm not sure what that setup is trying to accomplish, but it seems mostly useless to me.

"Controlnet tile" sounded like another tiled encoder/decoder to me, so since those are the ones preventing my system from crashes, I thought they might be a solution.
It's not. ControlNet is a way to get information from a source image and use it to help create a new one, by providing information about the general shape of the source. In particular, the Tile processor for ControlNet is meant to be used in conjunction with tiled upscalers, breaking the source image into discrete "slices" that are upscaled one at a time and then merged together to build the larger image. It CAN be used in conjunction with Ultimate SD Upscaler, but it won't do anything to help with your issue; it will just add more VRAM usage, since ControlNet itself requires a considerable amount of VRAM.

None of that has anything to do with the Tiled VAE Encoder/Decoder, which is a way to optimize the conversion between Stable Diffusion's internal latent-space images (basically a type of image encoded in a way the AI can understand) and the pixel formats that can be displayed to the user (i.e. you and me) and saved into JPG/PNG files.

The computer completely shuts off immediately, no bluescreen no nothing. I have to take it from the power for 5-10 seconds in order to turn it on again, which points at a problem with the memory needing to unload. I already explained this in the past pages.

(...)

With the tiled VAE versions I've had 0 crashes, while the regular ones often crash my system when they reach 99% and are about to drop the final picture.
That really sounds like a hardware issue to me, probably related to overheating. There is nothing about Stable Diffusion or ComfyUI that can cause computer crashes like that, and I can assure you there isn't much you can do in Comfy to completely solve this, because computers don't crash when they run out of VRAM; they just fail to execute (like the red error messages I asked whether you were seeing). The fact that you haven't had this happen when using tiled VAE codecs is just a coincidence, because for whatever reason they haven't made the computer run as hot, but that isn't the underlying issue, and focusing on this part is just a band-aid. Open your machine and check that your fans are working properly.

The crash is happening because the VRAM is probably coming to its limits. A tiled VAE encoder would help me to prevent that. So I thought about whether there's an option to incorporate a tiled version of an encoder into this workflow. I tried the switch at the bottom of the upscaler but that one didn't work.
The Ultimate SD Upscaler doesn't need a VAE Encoder/Decoder because it does that internally. Images need to be in latent space for the KSamplers to generate an image, and they need to be in pixel format for image manipulation like cropping. That plugin slices the image into a grid of overlapping tiles, then encodes each tile, runs it through an img2img KSampler, decodes it, and does the same for the next tile. Then it takes all these pixel tiles and composites them back together on that grid, slightly overlapping with faded edges, to help keep the seams between tiles from being visible.

Since all of that happens in pixel format, not latent format, there is no need to decode the final resulting image; it's already decoded, hence why the only output says IMAGE, not LATENT_IMAGE. And because each slice is usually small enough (as defined by the Tile Width and Tile Height properties, both 512px by default), this all happens internally at a size that doesn't require tiled VAE codecs; those are ONLY necessary if the latent image is too large for the standard codecs to decode, which is never the case for 512px tiles. You seem to have a misunderstanding of what the tiled VAE codecs do; there is no magic to them that prevents crashes, so much so that the standard VAE Encoder/Decoder nodes will automatically switch to tiled mode if the image is too large to be converted normally, after the first attempt fails.
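Roughly, in pseudo-Python, that internal loop looks like this (a conceptual sketch only, not the node's real source code; every callable here is a stand-in for machinery the node provides):

```python
# Conceptual sketch of what the Ultimate SD Upscale node does internally -- NOT its real
# source code; every callable passed in here is a stand-in for machinery the node provides.
def ultimate_upscale(image, upscale_px, iter_tiles, vae, ksampler, paste_feathered,
                     upscale_by=2.0, tile=512, padding=32, denoise=0.2):
    big = upscale_px(image, upscale_by)                  # plain pixel upscale with the chosen model
    out = big.copy()
    for x, y, region in iter_tiles(big, tile, padding):  # overlapping ~512px tiles, in pixel space
        latent = vae.encode(region)                      # each tile is small: normal VAE is enough
        refined = ksampler(latent, denoise=denoise)      # img2img pass to put detail back in
        out = paste_feathered(out, vae.decode(refined), x, y, padding)  # blend over the neighbours
    return out                                           # already pixels, nothing left to decode
```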

In summary, you are focusing on the wrong problem. As I said, "VRAM coming to its limits" should NEVER cause computer crashes, it should just stop whatever process from running. The real issue here is your machine probably overheating, or perhaps having other hardware related issues that are happening WHILE using SD, but NOT BECAUSE of SD. Adding unnecessary nodes to that workflow won't solve your issue.
 
  • Like
Reactions: Fuchsschweif

Fuchsschweif

Active Member
Sep 24, 2019
954
1,513
Yeah, but that's why it doesn't make sense. The original field is already a numerical factor, the default value is 2. That means that it will take whatever image size you put there, and make it two times larger. It doesn't matter what the original image size is. I'm not sure what that thing is trying to accomplish there, but it seems mostly useless to me.
Whether you have an original image of 1024x1024 or 1920x1080 matters, because you would otherwise always need to input the correct numbers. But this way it gathers the height and width and then just upscales by two or four times or as much as you wish, so you don't have to change the values manually for each picture you load in.

Here's the video:

That really sounds like a hardware issue to me, probably related to overheating.
This can't be. With the tiled VAE encoder I can produce 50 pictures in a row and really push my machine to give me all its power without any crash. I can easily upscale to 1920x1080. (I never had a single shutdown when using the tiled VAE encoder, across what must be 200-300 generations by now.)

With the normal VAE encoder I already get a shutdown when I only want to upscale a single picture from 512x512 to 1024x1024.

Since creating 50 pictures in a row at a higher resolution would be more likely to overheat my computer, that can't be the issue. Also, my GPU never goes above 60-70°C.


and I can assure you that there isn't much you can do with Comfy to completely solve this for you because computers don't crash when they run out of VRAM
Well, whatever it is, it has to do with something the encoder requires, and as far as I'm concerned that's the VRAM. There seems to be an issue when outputting the final picture (at 99%) that causes my computer to shut off, and the tiled version prevents this. Since the tiled version splits the process into parts so that VRAM usage is lower, that's my guess.

Also, the fact that I have to cut the computer off from power for 5-10 seconds in order to reboot speaks for a VRAM issue, because memory only fully unloads when you take the system completely off power.

That being said, the SD Ultimate Upscaler did crash my system too, even with the tiled option activated. But the final output was smaller than my 1920x1080 generations that work fine with the tiled VAE encoder.. so I don't know how that fits together. All I know is that the only way I seem to be able to upscale is with the tiled VAE encoder.

You seem to have a misunderstanding of what the Tiled VAE Codecs do; there is no magic to them that prevents crashes, so much that using the standard VAE Encoder/Decoder nodes will automatically switch to using Tiled mode if the image is too large to be converted normally after it fails the first attempt.
This seems not to be the case. I have tried both with dozens of generations and the pattern I described above is replicable every time.

If you can put together an idea based on the hints that are available, I'm happily all ears. But overheating won't be the issue, and other heavily demanding software never makes my system crash or shut down, nor do games. In fact, it really only happens when using SD.
 
Last edited:

hkennereth

Member
Mar 3, 2019
228
740
This can't be. With the tiled VAE encoder I can produce 50 pictures in a row and get my machine really to give me all its power without any crash. I can easily upscale to 1920x1080. ( I never had a single shutdown when I use the tiled VAE encoder with probably by now over 200-300 generations)
Look, I don't know what else I could tell you. As I said before, that node doesn't need any VAE Encode or Decode nodes added before or after it, because it already includes that functionality internally, doing it continuously for each tile it processes.

If you don't click the TILED_DECODE option at the bottom of the node, it will first attempt to use the normal codecs, and if those are unable to process the tile size it will automatically switch to the tiled one. Turning that option on will always use the tiled codecs for each step, which for smaller images is slower than the non-tiled version, but it saves the time otherwise wasted testing the non-tiled version when you know the tiled one is necessary. Meaning that if you turn that option on, you ARE ALREADY always using the tiled encoder and decoder. If that is not solving your issue, that... is because that is not your issue.
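In other words, the decode step behaves roughly like this (a conceptual sketch of the behaviour described above, not ComfyUI's actual code):

```python
# Conceptual sketch of the decode behaviour described above -- not ComfyUI's actual code.
def decode(vae, latent, tiled_decode=False):
    if tiled_decode:                      # TILED_DECODE on: always take the tiled path
        return vae.decode_tiled(latent)
    try:
        return vae.decode(latent)         # try the normal codec first
    except MemoryError:                   # stand-in for a GPU out-of-memory error
        return vae.decode_tiled(latent)   # fall back to tiled decoding automatically
```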

Forgive my bluntness, but you are committing the classic error of mistaking correlation for causation. I don't know WHAT is causing your issue because I don't have access to your machine, but it's not the presence or absence of a Tiled Encoder/Decoder, that's the only certain thing here. Excessive VRAM (or normal RAM for that matter) usage DOES NOT cause machines to crash, freeze, or reboot. That's just not how computers work. Anyone using ComfyUI with less than top-of-the-line graphics cards probably sees the error message "Not enough VRAM" all the time when experimenting with new workflows, and that's all it does: it says "can't do it", and you try again. It does not crash one's machine. That's a problem with YOUR machine, not with ComfyUI or with Stable Diffusion.

I don't know why it doesn't happen with other things; there are a million reasons why that could be the case. But I know it's misguided to assume that ONE thing is the absolute cause without any evidence to support it beyond "those two things happen in close proximity", especially after hearing detailed explanations of why that cannot be the case, including the fact that your proposed solution is already in there, just behind the scenes.
 
  • Like
Reactions: theMickey_

Fuchsschweif

Active Member
Sep 24, 2019
954
1,513
Look, I don't know what else I could tell you. As I said before, that node doesn't need any VAE Encode or Decode nodes added before or after it, because it already includes that functionality internally, doing it continuously for each tile it processes.
I got that; my reply was referring to the idea that those shutdowns would be overheating-related.

Forgive my bluntness, but you are committing the classic error of mistaking correlation for causation. I don't know WHAT is causing your issue because I don't have access to your machine, but it's not the presence or absence of a Tiled Encoder/Decoder, that's the only certain thing here.
I am not mistaking correlation for causation. I am telling you that with the tiled encoders I never got a single crash in hundreds of generations, while a standard decoder will crash my system instantly even with way less demanding upscalings.

You said my shutdowns would have nothing to do with whether I use the tiled or normal nodes, but that is in fact how it is.

I can't tell you why that is either, but that's how it is.

It does not crash one's machine. That's a problem with YOUR machine, not with ComfyUI or with Stable Diffusion.
I never claimed that it's SD or ComfyUI alone. I described when it happens, so that this information might lead to ideas about what could be happening in the background.

That it only happens with the standard encoders, and only at 100% of the upscaling process, are two valuable hints for you or someone else who knows more than me about how SD works to maybe come up with a good idea.

And these hints are also enough to figure out that it can't be overheating or a hardware-related issue, because you wouldn't expect those to occur this precisely. If a system is unstable due to an insufficient PSU or overheating, the shutdowns would occur randomly.
 

theMickey_

Engaged Member
Mar 19, 2020
2,115
2,653
You said my shutdowns would have nothing to do with whether I use the tiled or normal nodes, but that is in fact how it is.
I can't tell you why that is either, but that's how it is.
I'm with hkennereth -- it's most probably an overheating issue. I'm more than 95% sure about that.

If you want to prove it, do the following:
(In fact, I do not recommend doing this, because it can actually damage your hardware):
  • Go into your BIOS/UEFI and turn off every feature that will shut down your computer at a certain temperature level
  • Go into your GPU's driver settings and see if it has a similar setting for shutting down your computer when it gets too hot and turn it off
  • If you do have a 3rd party system monitoring tool that can shut down your computer as well at certain degrees, turn that off
Then run ComfyUI again and your computer will not shut down. Your computer or your house might be on fire after this test, but at least it will rule out your "This can't be." response to someone suggesting this might be an overheating problem.


To be fair: I've seen posts from you all over this forum in the past couple of days: I've answered some of your questions in the Virt-a-Mate thread, people have been telling you in the DAZ thread that DAZ3D isn't the right tool for real-time animations, and you're here asking why ComfyUI is crashing your computer. All these questions tell me that you're new to all this 3D, VR and AI stuff (which is absolutely fine, we've all been newbies at some point, and that's what this forum is about: asking things and learning stuff), but if someone then gives you a reasonable answer (overheating), please do not just reply "This can't be". At least consider it as a valid point and act accordingly: check your computer, check the fans, do some tests, check your settings, etc.

As hkennereth said: full VRAM will not cause your computer to shut down. It will cause some ComfyUI actions to fail and print error messages, or you might not be able to start additional GPU-heavy tasks while ComfyUI is running. Full RAM will also not shut down your system; instead, your OS might start killing tasks it no longer considers important. That might lead to programs failing/crashing, but not to a full shutdown of your computer. A crash (without a blue screen, just an instant "turn off") is almost always temperature-related. So please start investigating/checking...
 

hkennereth

Member
Mar 3, 2019
228
740
I am not mistaking correlation for causation. I am telling you that with the tiled encoders I never got a single crash in hundreds of generations, while a standard decoder will crash my system instantly even with way less demanding upscalings.

You said my shutdowns would have nothing to do with whether I use the tiled or normal nodes, but that is in fact how it is.

I can't tell you why that is either, but that's how it is.
Sorry, but you are making that mistake. Yes, those things happened, I believe you, but you are jumping to the conclusion that one is the CAUSE of the other, and I'm exhaustively explaining to you why that cannot be the case. And you keep insisting that you should be able to add that node somewhere in your workflow to solve the problem, despite the explanation of why that node has no function in that workflow, because the functionality it provides is already happening somewhere else. The node does not perform any magic, and as I explained in the previous post, you can get the exact same functionality by using that toggle at the bottom, the one you said didn't work for you.

So that you don't think I'm just claiming to be smarter and that you should trust me just because... here is one more attempt to explain exactly what the Ultimate SD Upscale node does. Well, a very crappy version of it that fails to do what that node is best at, but one that contains the same fundamental functionality.

The workflow below does the same basic things that the Ultimate SD Upscale node does... except the node does them much better (the image contains the actual usable workflow if you load it into ComfyUI):

workflow.png

It takes an image as an input. My crappy workflow will only work for a 1024x1024 image, but the node can handle images of any size, of course.

It crops that image into a bunch of separate slices, still in pixel mode. Again, my limited workflow slices them right at the edge of each other, but the node will crop them with a little bit of overlap, as defined by the TILE_PADDING property, which helps merge the final image more naturally.

It upscales each slice using a pixel upscale model you select, using the multiplication factor you choose in the property UPSCALE_BY. The upscale model is a 4X upscale, so the upscale node is set to 0.5 to get half of that: 2X.

It encodes the image using the VAE into a latent image.

It uses that latent image as the source for img2img in the KSampler, using a low Denoise value to preserve the original image's look.

It decodes the image from latent space into pixels again using the VAE.

It composites each image back together using a mask to help hide the edges between each slice. In my workflow I didn't use any mask and just fit the images side to side without overlap, which will look obvious in the final image.
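And to make the tiling/overlap part of those steps concrete, the crop boxes the real node works with look roughly like this (made-up code for illustration, not the node itself):

```python
# Rough sketch of the tile grid with TILE_PADDING overlap -- made-up helper, not a real node.
def tile_boxes(width, height, tile=512, padding=32):
    """Crop boxes that overlap their neighbours by `padding` px on each shared edge."""
    boxes = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            boxes.append((max(0, x - padding), max(0, y - padding),
                          min(width, x + tile + padding), min(height, y + tile + padding)))
    return boxes

# A 2048x2048 upscaled image becomes a 4x4 grid of 512px tiles, each with up to 32px of
# overlap that later gets pasted back with a faded mask so the seams stay hidden.
print(len(tile_boxes(2048, 2048)))  # 16
```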

------

And that's it. All of that is what is happening inside that one node, except it does some additional things to get a decent quality image from it, unlike this example. You can't add a tiled VAE before or after... because the node needs a decoded pixel image as source for its initial steps, and it will automatically output a decoded pixel image at the end since it needs to merge the images together in pixel space.

For reference, here is the original 1024 x 1024 px image, and the crappy composited 2048 x 2048 px upscaled version (but each tile looks... not bad actually):

1707101195878.png 1707101247284.png
 
  • Like
Reactions: sharlotte