I actually do something similar to that. I don't use a 256x256 px image because I'm okay with the 512x512 px one generated by the first stage of my process, but I do save only that as a PNG, and then I use another node to save my upscaled images as 90%-quality JPGs, so I still have high quality with only minimal loss, but at a fraction of the file size.
What do y'all think about having a subflow dedicated to saving a workflow file using a mere 256x256 render with a watermark?
Hmm. This way you can keep every generated image forever. Pretty brilliant.
This is the image-2-image workflow from a few posts above, except it comes in a ~100kb package instead of the 5mb one.
View attachment 3132891
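For anyone who'd rather script it than wire nodes, here's a rough Pillow sketch of the idea - not the attached subflow itself, just the concept of shrinking the render, stamping a watermark, and tucking the workflow JSON into a PNG text chunk. The file names and the "workflow" key name are my assumptions (that key is where ComfyUI normally keeps the graph), so adjust to your setup:

```python
# Rough sketch: 256x256 thumbnail with a text watermark and the workflow JSON
# embedded as a PNG text chunk, so the tiny file can still be dropped into ComfyUI.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def save_workflow_thumbnail(image_path, workflow_json_path, out_path):
    img = Image.open(image_path).convert("RGB").resize((256, 256))
    ImageDraw.Draw(img).text((8, 240), "workflow", fill=(255, 255, 255))  # crude text watermark

    meta = PngInfo()
    with open(workflow_json_path, "r", encoding="utf-8") as f:
        meta.add_text("workflow", f.read())  # "workflow" chunk name is an assumption

    img.save(out_path, pnginfo=meta)  # tens of KB instead of several MB

save_workflow_thumbnail("render.png", "workflow.json", "workflow_256.png")
```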
Any chance you can post a simple workflow showing which nodes do the JPEG conversion?
Correction: I'm saving the second-stage image after my first upscale to ~1024px, just because my first stage is... well, complete crap and barely resembles the final picture, so it wouldn't be a good reference for what I'd be getting in the final image.
so cool.
When you have bad ideas, but since you've gotten this far you might as well finish.
At the time of writing I've passed generating image 520 in the "series" and I'm still not halfway, and I just noticed that I've been generating 1288x720 instead of 1280x720.
Restarting isn't exactly an option, so that's one more fuckup for the list...
Anyway, the idea was to test making longer clips at a larger image size and a decent framerate, without using the existing animation features. So I'm pushing SD 1.5 to generate larger images with no upscaling or highres.fix, and keeping every finger crossed...
This is a small preview, cut down in size and length and compressed to fit in a GIF for posting. I'll do a more detailed write-up once this thing finishes or breaks completely.
View attachment 3132582
As a side note, you can "easily" queue up over 1000 prompts in ComfyUI. Not sure that's going to be needed all that often, but I guess that's one more limit tested.
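If anyone wants to queue batches like that from a script instead of mashing the button, the sketch below is roughly how you'd do it against ComfyUI's HTTP API. It assumes the default 127.0.0.1:8188 address, a workflow exported in API format as workflow_api.json, and a made-up node id "3" for whichever node holds the seed in your graph:

```python
# Minimal sketch: queue 1000 jobs by POSTing API-format workflow JSON to /prompt,
# bumping the seed each time so every queued prompt renders a new image.
import json
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

for i in range(1000):
    workflow["3"]["inputs"]["seed"] = 1000 + i  # "3" is a placeholder node id
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```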
And this is why I don't use ComfyUI. Loaded the PNG, got a massive amount of errors from missing stuff, tried to install them using the Manager and failed. Spent half an hour trying to fix it, failed. Gave up, loaded A1111 and was making pics within 30 secs. It's great when it works and opaque as fook when it doesn't.
As for what node I use to save as JPEG, it's the Image Save node from the ymc-node-suite-comfyui pack. My workflow is basically made of 5 steps:
1. Generate a ~512px image based on my prompt without using any LoRAs, using a model that gives me a lot of variety of poses and compositions (I like to use Dreamshaper)
2. Run that image through ControlNet to get composition information
3. Use the ControlNet output as the source for another ~512px generation, now using my "high quality" model and the LoRAs I use to get the likeness of certain cosplayers.
4. Upscale that to ~1024px, still using the LoRAs, with standard img2img and upscale models. Nothing too fancy, and no latent upscale (that tends to ruin LoRA likeness IMO).
5. Run it through FaceDetailer to improve faces.
If all of that works well I use a second workflow to upscale the images once again to a final ~2048px resolution.
View attachment 3132932
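The node handles the JPEG save inside the graph, but if you just want the effect, the Pillow equivalent is basically this (not the ymc node's actual code, just the same idea, with made-up file names):

```python
# Two-tier save, roughly: keep the small stage-two PNG (workflow metadata intact)
# and write the big upscaled result as a ~90%-quality JPEG to save disk space.
from PIL import Image

upscaled = Image.open("final_2048.png").convert("RGB")   # JPEG can't store alpha
upscaled.save("final_2048.jpg", quality=90, optimize=True)  # much smaller, minimal visible loss
```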
And here is the result, before and after improvements. You should be able to get the actual workflow from the PNG below, the one with the crappy face quality.
View attachment 3132959 View attachment 3132960
Example of how that step-by-step process works in actual generation:
View attachment 3133264
View attachment 3133267
That's quite misleading, because while A1111 includes generation information in PNG files as well, it only includes that of the last generation, so any image created through multiple steps will be even more opaque than Comfy's as to how it was made, and it provides no way whatsoever to figure out whether a plugin was used. That isn't a "better" solution, any more than a Honda Accord is a better tool for transporting goods because it has fewer moving parts than a 747; it's just a simpler one.
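If you want to see what metadata a given PNG actually carries before arguing about it, Pillow will dump the text chunks. From memory, A1111 writes a single "parameters" entry while Comfy writes "prompt" and "workflow", but treat those key names as assumptions and check your own files:

```python
# Quick look at whatever generation metadata a PNG carries in its text chunks.
from PIL import Image

img = Image.open("some_image.png")
for key, value in img.info.items():
    if isinstance(value, str):
        print(f"{key}: {value[:200]}")  # truncate long workflow JSON for readability
```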
I think a lot of the components the Manager tries to install require the Microsoft Build Tools as a prerequisite. If you give it another go, do start by installing those runtimes.
I honestly forgot about that because I installed it so long ago, but you are 100% correct.
I am on my tenth CUI install because there were issues halfway through each of those.
So the base install kinda needs to look like this to have a decent chance of using CUI's advanced nodes:
View attachment 3133838
Just something I noticed when looking at the image, so I might be misreading things, but it says "enabled: off" in the node.
I can't get face swap to work - wf attached.
Any ideas how to start troubleshooting? It just doesn't work and there are no error messages to look up.
View attachment 3133850
"Enabled: off"? - not the clearest thing ever. How to tell you're working with an open source project!just something i noticed when looking at the image so i might be misreading things, but it says "enabled: off" in the node
Further to this, here is where you combine a file load and file preview into a single node.
Meaning you can add a file preview to any node with an image, immediately addressing some of the clutter.
Yea, that's a brilliant feature.
The next candidates are the face restore modules and all those VAE patches. Then LoRAs and the checkpoint, controlnets, on and on.
View attachment 3134641
You're possibly aware, but there are several "lora stacking" nodes that let you easily have multiple LoRAs in the same node with slight differences in options. There's one in comfyroll and one in efficiency, I believe; can't remember if there were more atm.
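For anyone unsure what a stacker buys you: without one, every LoRA is its own LoraLoader chained off the previous node's model/clip outputs. Here's roughly what that chain looks like in ComfyUI's API-format JSON - the node ids, the checkpoint node "1", and the LoRA file names are all made up:

```python
# Two LoRAs applied the "manual" way: each LoraLoader takes the previous node's
# MODEL (output 0) and CLIP (output 1). A stacker node collapses this into a single
# node with per-LoRA switches and strengths.
chained_loras = {
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "cosplayer_a.safetensors",
                     "strength_model": 0.8, "strength_clip": 0.8}},
    "3": {"class_type": "LoraLoader",
          "inputs": {"model": ["2", 0], "clip": ["2", 1],
                     "lora_name": "detail_tweaker.safetensors",
                     "strength_model": 0.5, "strength_clip": 0.5}},
}
```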