[Stable Diffusion] Prompt Sharing and Learning Thread

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
What do y'all think about having a subflow dedicated to saving a workflow file using a mere 256x256 render with a watermark?

Hmm. This way you can keep every generated image forever. Pretty brilliant.

This is the image-2-image workflow from a few posts above, except it comes in a ~100 KB package instead of the 5 MB one.
_save__00046_.png
 

hkennereth

Member
Mar 3, 2019
239
784
What do y'all think about having a subflow dedicated to saving a workflow file using a mere 256x256 render with a watermark?

Hmm. This way you can keep every generated image forever. Pretty brilliant.

This is the image-2-image workflow from a few posts above, except it comes in a ~100 KB package instead of the 5 MB one.

View attachment 3132891
I actually do something similar. I don't use a 256x256 px image because I'm okay with the 512x512 px one generated by the first stage of my process, but I save only that as a PNG, and then I use another node to save my upscaled images as 90%-quality JPGs. That way I keep high quality with only minimal loss, at a fraction of the file size.
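For anyone curious how the trick works under the hood, here is a minimal PIL sketch of the same idea outside ComfyUI: the workflow JSON lives in a PNG text chunk (ComfyUI stores its graph that way), so a small render is enough to preserve it, while the full-resolution pixels go to a ~90%-quality JPEG. The function and file names here are made up for illustration.

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_pair(image: Image.Image, workflow: dict, stem: str) -> None:
    """Small PNG keeps the workflow; compact JPEG keeps the pixels."""
    # Low-res PNG with the workflow JSON embedded as a text chunk
    # (ComfyUI stores its graph the same way, under a "workflow" key).
    meta = PngInfo()
    meta.add_text("workflow", json.dumps(workflow))
    image.resize((512, 512)).save(f"{stem}_wf.png", pnginfo=meta)

    # Full-resolution output as a ~90%-quality JPEG: visually near-lossless,
    # but a fraction of the PNG's size.
    image.convert("RGB").save(f"{stem}.jpg", quality=90)
```

Dragging the small PNG back into ComfyUI restores the full graph, exactly as with a normal full-size save.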
 
  • Heart
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
I actually do something similar. I don't use a 256x256 px image because I'm okay with the 512x512 px one generated by the first stage of my process, but I save only that as a PNG, and then I use another node to save my upscaled images as 90%-quality JPGs. That way I keep high quality with only minimal loss, at a fraction of the file size.
Any chance you can post the simplest possible workflow showing which nodes do the JPEG conversion?
 

hkennereth

Member
Mar 3, 2019
239
784
Any chance you can post the simplest possible workflow showing which nodes do the JPEG conversion?
Correction: I'm saving the second-stage image, after my first upscale to ~1024px, just because my first stage is... well, complete crap and barely resembles the final picture, so it wouldn't be a good reference for what I'd be getting in the final image.

As for what node I use to save as JPEG, it's the Image Save node from the ymc-node-suite-comfyui pack. My workflow is basically made of 5 steps:

1. Generate a ~512px image based on my prompt without using any LoRAs, using a model that gives me a lot of variety of poses and compositions (I like to use Dreamshaper)
2. Run that image through ControlNet to get composition information
3. Use the ControlNet output as the source for a new ~512px generation, now using my "high quality" model and the LoRAs I use to get the likeness of certain cosplayers.
4. Upscale that to ~1024px, still using the LoRA, and using standard img2img and upscale models. Nothing too fancy, and no latent upscale (that tends to ruin LoRA likeness IMO)
5. Run it through FaceDetailer to improve faces.

If all of that works well, I use a second workflow to upscale the images once again to a final ~2048px resolution.
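Roughly the same pipeline sketched with diffusers rather than ComfyUI nodes, in case the shape of those steps is easier to read as code. This is not the actual graph from the screenshot: the quality checkpoint and LoRA paths are placeholders, DreamShaper and the OpenPose ControlNet are just the obvious public choices, and step 5 is only noted because FaceDetailer has no one-line equivalent.

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionControlNetPipeline,
    StableDiffusionImg2ImgPipeline,
    ControlNetModel,
)
from controlnet_aux import OpenposeDetector

prompt = "photo of a woman standing on a beach"                 # example prompt
QUALITY_MODEL = "path/to/high_quality_checkpoint"               # placeholder
LIKENESS_LORA = "path/to/cosplayer_likeness_lora.safetensors"   # placeholder

# 1. Composition pass: a "variety" model (e.g. DreamShaper), no LoRAs, ~512px.
base = StableDiffusionPipeline.from_pretrained(
    "Lykon/DreamShaper", torch_dtype=torch.float16
).to("cuda")
draft = base(prompt, height=512, width=512).images[0]

# 2. Extract composition info from the draft (an OpenPose map in this sketch).
pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")(draft)

# 3. New ~512px generation with the "quality" model + likeness LoRA, guided by ControlNet.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
quality = StableDiffusionControlNetPipeline.from_pretrained(
    QUALITY_MODEL, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
quality.load_lora_weights(LIKENESS_LORA)
refined = quality(prompt, image=pose, height=512, width=512).images[0]

# 4. Plain img2img upscale to ~1024px (pixel-space resize, no latent upscale).
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    QUALITY_MODEL, torch_dtype=torch.float16
).to("cuda")
img2img.load_lora_weights(LIKENESS_LORA)
upscaled = img2img(prompt, image=refined.resize((1024, 1024)), strength=0.4).images[0]

# 5. Face cleanup: in ComfyUI this is the FaceDetailer node (Impact Pack);
#    there is no single diffusers call for it, so it is omitted here.
```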

1701438303293.png

And here is the result, before and after improvements. You should be able to get the actual workflow with the PNG below, the one with the crappy face quality :)

midres_0030.png fullres_dd_0008.jpg

Example of how that step-by-step process works in actual generation:
1701440832236.png
1701440855571.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
When you have bad ideas but since you've gotten this far you might as well finish.
At the time of writing I've passed image 520 in the "series" and I'm still not halfway, and I just noticed that I've been generating at 1288x720 instead of 1280x720 :(
Restarting isn't exactly an option, so add one more fuckup to the list...
Anyway, the idea was to test making longer clips at a larger image size and a decent framerate, without using the existing animation features. So I'm pushing SD1.5 to generate larger images with no upscale/highres fix, and keeping every finger crossed...
This is a small preview, cut down in size and length and compressed to fit as a GIF for posting. I'll do a more detailed write-up once this thing finishes or breaks completely.
View attachment 3132582

As a side note, you can "easily" queue up over 1000 prompts in ComfyUI. Not sure that will be needed very often, but I guess that's one more limit tested :p
so cool.:) (y)
 
  • Like
Reactions: Sepheyer

Sharinel

Active Member
Dec 23, 2018
611
2,571
Correction: I'm saving the second-stage image, after my first upscale to ~1024px, just because my first stage is... well, complete crap and barely resembles the final picture, so it wouldn't be a good reference for what I'd be getting in the final image.

As for what node I use to save as JPEG, it's the Image Save node from the ymc-node-suite-comfyui pack. My workflow is basically made of 5 steps:

1. Generate a ~512px image based on my prompt without using any LoRAs, using a model that gives me a lot of variety of poses and compositions (I like to use Dreamshaper)
2. Run that image through ControlNet to get composition information
3. Use the ControlNet output as the source for a new ~512px generation, now using my "high quality" model and the LoRAs I use to get the likeness of certain cosplayers.
4. Upscale that to ~1024px, still using the LoRA, and using standard img2img and upscale models. Nothing too fancy, and no latent upscale (that tends to ruin LoRA likeness IMO)
5. Run it through FaceDetailer to improve faces.

If all of that works well, I use a second workflow to upscale the images once again to a final ~2048px resolution.

View attachment 3132932

And here is the result, before and after improvements. You should be able to get the actual workflow with the PNG below, the one with the crappy face quality :)

View attachment 3132959 View attachment 3132960

Example of how that step-by-step process works in actual generation:
View attachment 3133264
View attachment 3133267
And this is why I don't use ComfyUI. Loaded the PNG, got a massive amount of errors about missing stuff, tried to install them using the Manager, and failed. Spent half an hour trying to fix it, failed. Gave up, loaded A1111, and was making pics within 30 secs.

It's great when it works and opaque as fook when it doesn't.
 

hkennereth

Member
Mar 3, 2019
239
784
And this is why I don't use ComfyUI. Loaded the PNG, got a massive amount of errors about missing stuff, tried to install them using the Manager, and failed. Spent half an hour trying to fix it, failed. Gave up, loaded A1111, and was making pics within 30 secs.

It's great when it works and opaque as fook when it doesn't.
That's quite misleading. While A1111 also embeds generation information in PNG files, it only includes that of the last generation, so any image created through multiple steps is even more opaque than Comfy's as to how it was made, and it provides no way at all to figure out whether a plugin was used. That isn't a "better" solution, any more than a Honda Accord is a better tool for transporting goods because it has fewer moving parts than a 747; it's just a simpler one.
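If you want to see the difference for yourself, dumping the PNG text chunks with PIL makes it obvious (the file names below are just examples):

```python
from PIL import Image

# A1111 typically stores one "parameters" string (the last generation only);
# ComfyUI stores "prompt" and "workflow" JSON describing the whole node graph.
for path in ("a1111_output.png", "comfyui_output.png"):
    print(path)
    for key, value in Image.open(path).info.items():
        print(f"  {key}: {str(value)[:80]}...")
```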

If you have questions I'll be happy to help you get the workflow running. In general I don't even recommend using other people's workflows; it's ALWAYS better to build your own from scratch so you understand what each node is doing. Start with something simple and add more as needed. I doubt most people need all the stuff I have in mine.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
And this is why I don't use ComfyUI. Loaded the PNG, got a massive amount of errors about missing stuff, tried to install them using the Manager, and failed. Spent half an hour trying to fix it, failed. Gave up, loaded A1111, and was making pics within 30 secs.

It's great when it works and opaque as fook when it doesn't.
I think a lot of the components the Manager tries to install require the Microsoft Build Tools as a prerequisite. If you give it another go, start by installing those runtimes.

I am on my tenth CUI install because each of the previous ones ran into issues halfway through.

So the base install kinda needs to look like this to have a decent chance with CUI's advanced nodes:
preq.png
 
  • Like
Reactions: hkennereth

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
I can't get face swap to work - workflow attached.

Any ideas on how to start troubleshooting? It just doesn't work, and there are no error messages to look up.
reactor.png
 

hkennereth

Member
Mar 3, 2019
239
784
I think a lot of the components the Manager tries to install require the Microsoft Build Tools as a prerequisite. If you give it another go, start by installing those runtimes.

I am on my tenth CUI install because each of the previous ones ran into issues halfway through.

So the base install kinda needs to look like this to have a decent chance with CUI's advanced nodes:
View attachment 3133838
I honestly forgot about that because I installed it so long ago, but you are 100% correct.
 
  • Like
Reactions: Sepheyer

Jimwalrus

Well-Known Member
Sep 15, 2021
1,054
4,040
Just something I noticed when looking at the image, so I might be misreading things, but it says "enabled: off" in the node.
"Enabled: off"? Not the clearest thing ever. That's how you can tell you're working with an open-source project!

Then again, even the mighty Microsoft once issued the fabulous Help text:
"When the Start Windows Restart When Windows Starts checkbox is checked, Windows Restart will start everytime Windows is started"
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
Further to this, here is where you combine a file load and file preview into a single node.

Meaning you can add a file preview to any node that outputs an image, immediately addressing some of the clutter.

Yea, that's a brilliant feature.

The next candidates are the face-restore modules and all those VAE patches. Then LoRAs and the checkpoint, ControlNets, and so on.

workflow (1).png
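For anyone wondering what a combined load-plus-preview node looks like under the hood, here is a hypothetical minimal custom-node sketch. It is not the node from the screenshot, just something following ComfyUI's usual INPUT_TYPES / RETURN_TYPES conventions, with the preview reported back through the node's UI results.

```python
# custom_nodes/load_with_preview.py -- hypothetical sketch, not a published node
import os
import numpy as np
import torch
from PIL import Image

import folder_paths  # ComfyUI helper for the input/output/temp directories


class LoadImageWithPreview:
    """Load an image from the input folder AND preview it in the same node."""

    @classmethod
    def INPUT_TYPES(cls):
        files = sorted(os.listdir(folder_paths.get_input_directory()))
        return {"required": {"image": (files, {"image_upload": True})}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "load"
    OUTPUT_NODE = True  # lets the node send UI results (the preview) to the front end
    CATEGORY = "image"

    def load(self, image):
        path = os.path.join(folder_paths.get_input_directory(), image)
        pil = Image.open(path).convert("RGB")
        # ComfyUI images are float tensors shaped [batch, height, width, channels].
        tensor = torch.from_numpy(np.array(pil).astype(np.float32) / 255.0)[None,]
        # Same structure the built-in previews return; the front end reads
        # ui.images and draws the thumbnail inside the node.
        preview = [{"filename": image, "subfolder": "", "type": "input"}]
        return {"ui": {"images": preview}, "result": (tensor,)}


NODE_CLASS_MAPPINGS = {"LoadImageWithPreview": LoadImageWithPreview}
```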
 
  • Like
Reactions: me3

me3

Member
Dec 31, 2016
316
708
Further to this, here is where you combine a file load and file preview into a single node.

Meaning you can add a file preview to any node that outputs an image, immediately addressing some of the clutter.

Yea, that's a brilliant feature.

The next candidates are the face-restore modules and all those VAE patches. Then LoRAs and the checkpoint, ControlNets, and so on.

View attachment 3134641
You're possibly aware, but there are several "LoRA stacking" nodes that let you easily have multiple LoRAs in the same node, with slight differences in options. There's one in Comfyroll and one in Efficiency Nodes, I believe; I can't remember if there are more at the moment.
Both of those packs also have some grid/XY nodes; I've never used them, so no idea how they work though.
 

me3

Member
Dec 31, 2016
316
708
Going by the timestamps on the first and last image, it took over 15 hours to generate all the images, so next time I'll try something shorter :p

So the basic idea was to take a video/clip of someone "dancing", split it into frames, and process those to get the poses. In this case that gave >1400 pose images; some had to be cleaned out due to bad capturing or other issues, leaving slightly under 1400.
Then I'd generate a character from those pose images and hope things worked out. Since the pose is all you really care about, you can keep a fixed seed, but there's still quite a lot of variance in the output, so keeping a simple background is a bit of a challenge. In hindsight I probably should have used a LoRA of a character with a completely fixed look, including the outfit, but as this was just intended as a concept/idea test it doesn't matter that much.
I intentionally set things up so that each pose was a separate file and every generated image kept the same numerical order, rather than depending on counters/batch processing, in case things broke or I had to redo a specific image.
Looping through each pose is simple enough anyway, and not loading everything into memory also helps with potential OOM issues (there's a rough sketch of that loop below).
To save time, and to test LCM while I was at it, I used just a normal SD1.5 model with an LCM LoRA, so images were generated in just 5 steps, same for the face restore. In this case that LoRA did a fair job.
After merging all the frames back together and doing some processing, I had a ~42 sec, 60 fps clip of an AI woman roughly moving in the expected way, with some extra arm swinging and head warping due to not fixing enough of the pose images and the prompt.
I can't post the full file on the forum due to size, so I'm adding a downscaled 24 fps version and 2 full-size images. There are odd jumps/cuts and movements where frames had to be cut out (bad poses or bad generations). This wasn't a test of how perfect it could be, but of "will it work", so I didn't bother fixing all those things. And tbh, with 1400 poses and the same number of images, I'd rather not go over all of them multiple times just for this type of test.
There are at least some sections that aren't too bad.
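A rough sketch of the bookkeeping described above: split the clip into numbered pose frames, render one image per pose, and merge the results back into a video. The actual pose-conditioned SD call is left as a placeholder (generate_frame is hypothetical; the real generation happened in ComfyUI).

```python
import glob
import os
import cv2

def extract_frames(video_path: str, out_dir: str) -> None:
    """Dump every frame of the source clip as a numbered PNG."""
    cap = cv2.VideoCapture(video_path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"pose_{i:04d}.png"), frame)
        i += 1
    cap.release()

def render_all(pose_dir: str, out_dir: str, seed: int = 123) -> None:
    # One file per pose, kept in numerical order, so a crash or a bad frame
    # can be redone individually instead of re-running a whole batch.
    for path in sorted(glob.glob(os.path.join(pose_dir, "pose_*.png"))):
        image = generate_frame(pose_image=path, seed=seed)  # placeholder for the SD call
        image.save(os.path.join(out_dir, os.path.basename(path)))

def merge_frames(frame_dir: str, out_path: str, fps: int = 60) -> None:
    """Stitch the rendered frames back into a clip."""
    frames = sorted(glob.glob(os.path.join(frame_dir, "pose_*.png")))
    h, w = cv2.imread(frames[0]).shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        writer.write(cv2.imread(f))
    writer.release()
```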

View attachment scaled24fps.webp

k_0183.jpg
k_0418.jpg

To keep this from becoming even more of a wall of text, I'll put the rest in spoilers should anyone care for the details.

Credit to Mr-Fox for his Kendra LoRA; he usually has a download link in his sig. I can't really say these images do her justice, but any PR is good PR, right...
 

theMickey_

Engaged Member
Mar 19, 2020
2,248
2,938
Hey Sepheyer, sorry to "ping" you, but I've just seen this post from you, and as I'm currently trying to get something like this working, I thought I'd just ask if you're willing to share your workflow for how you'd achieve that?

So here's what I'm trying to achieve: take any DAZ-rendered scene from a game and turn it into a more photo-like style.

I've watched so many ComfyUI tutorials in the past couple of days, and I remember seeing one in which a node was used to generate a "positive prompt" for any loaded image (but of course I can't find that specific tutorial anymore :cautious:). I don't want to enter any prompt manually: I just want to load any rendered (or even drawn?) image, extract a "positive prompt" from it, and then feed it into a sampler using a photorealism model to basically generate a "real" copy of that image. I might need to add a combined/concatenated positive prompt in case I need to add some specifics, but I want to avoid that as much as possible.

I did see a lot of tutorials doing quite the opposite (turning a photo into a drawing or pencil sketch), but I haven't found a single one doing it the other way round. Well, I found one workflow on CivitAI that looked promising, but it was based on text-2-image rather than image-2-image...
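For what it's worth, the "interrogate, then regenerate" idea can be sketched outside ComfyUI roughly like this, assuming a BLIP captioner for the automatic positive prompt and any photorealistic SD checkpoint for the img2img pass. The model IDs are common public choices, not something from this thread, and in ComfyUI the equivalent pieces would be an interrogator/tagger node feeding a KSampler.

```python
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionImg2ImgPipeline

source = Image.open("daz_render.png").convert("RGB").resize((512, 512))

# 1. Auto-generate a "positive prompt" from the image (no manual prompting).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)
inputs = processor(images=source, return_tensors="pt")
caption = processor.decode(
    captioner.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True
)

# 2. img2img with a photorealistic checkpoint; lowish strength keeps the
#    composition of the render while restyling it toward "photo".
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")
result = pipe(
    prompt=caption + ", photo, realistic skin, natural lighting",
    image=source,
    strength=0.45,
).images[0]
result.save("photo_version.png")
```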

If you could share any details on how you would do that, that would be awesome!

Thanks a lot in advance.
 
  • Like
Reactions: Sepheyer

theMickey_

Engaged Member
Mar 19, 2020
2,248
2,938
And here's yet another question: it's quite easy and fun to replace faces in images, and I'm literally blown away by how easy it was. I replaced some faces in memes with friends' faces while streaming to them on Discord, and we had so much fun :)

But the method I've used so far only replaces the actual face, not the hair. Is there any way to replace both the face and the hair in an image, or do I have to use the mask feature to achieve that?

Again, thanks a lot in advance to whoever has some advice. And sorry if these are still noob-ish questions; I've searched through this thread (and other tutorials) but wasn't able to find a working solution...
 
  • Like
Reactions: Sepheyer