[Stable Diffusion] Prompt Sharing and Learning Thread

deadshots2842

Member
Apr 30, 2023
188
290
I have made an A1111-like workflow for CUI (ComfyUI). This is for SD 1.x.
I've made it for anyone who wants to try and understand CUI.
It's actually simple, but almost complete for text2image generation.
It has wildcard support and a LoRA loader.
It also uses the ToMe patch.
Tell me if you're interested and I'll update it by adding other features like an upscaler, img2img, and inpainting.
Load it in your CUI and click the "Manager" button, then "Install missing nodes". View attachment 3590766
I used to have Comfy, then I deleted it and installed A1111 lol
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,956
No idea, don't use it. It's like PC Master Race (auto1111 with 4090) vs console peasants (comfy) :) I don't hang out with the great unwashed!
It's the other way round: auto1111 equals the consoles. Comfy has better performance and also offers way more customization. Even the Stable Diffusion developers at Stability AI work with ComfyUI.
 

felldude

Active Member
Aug 26, 2017
572
1,695
Danbooru tags? What are those? Can you teach me or point me to YouTube videos? And yes, I'm using Pony XL with LoRAs
I have only ever trained one model on Pony, and I don't use it that often.

I would look through Civitai for prompts, as knowing an artist name is key to triggering the training data, along with the image scoring and general prompts

It's the other way round: auto1111 equals the consoles. Comfy has better performance and also offers way more customization. Even the Stable Diffusion developers at Stability AI work with ComfyUI.
I'd have to assume you're replying to a joke comment, as the 40 series was marketed toward gamers with "special cores"; the actual GPU cores and stats are not much different from the 30 series (barring the marketed speed improvements from DLSS and DLAA).

With both Comfy and Auto1111 being GUIs ("gooeys"), the "true" PC master race does command-line generation with Hugging Face diffusers lol

I ended up using ComfyUI, but my understanding is Hugging Face adopted Gradio as the backend for building GUIs, so Auto1111 gets all the new stuff a lot more easily than Comfy
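For anyone curious what "command line generation from hugging face diffusers" actually looks like, here's a minimal sketch. The model id and sampler settings are illustrative defaults, not anything recommended in this thread, and the first run downloads several GB of weights:

```python
# A no-GUI text2image sketch with the Hugging Face diffusers library.
# Model id and settings are illustrative defaults, not thread-specific.

def generate(prompt: str, out_path: str = "out.png") -> None:
    # Imports live inside the function so the sketch can be read
    # (and the function defined) without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5).images[0]
    image.save(out_path)

# Usage (needs a CUDA GPU):
#   generate("a red fox in a snowy forest, highly detailed")
```

That's the whole "GUI": a prompt string and two numbers. Everything A1111 or Comfy exposes as sliders is just a keyword argument here.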
 
Last edited:
  • Like
Reactions: Jimwalrus

deadshots2842

Member
Apr 30, 2023
188
290
I have only ever trained one model on Pony, and I don't use it that often.

I would look through Civitai for prompts, as knowing an artist name is key to triggering the training data, along with the image scoring and general prompts



I'd have to assume you're replying to a joke comment, as the 40 series was marketed toward gamers with "special cores"; the actual GPU cores and stats are not much different from the 30 series (barring the marketed speed improvements from DLSS and DLAA).

With both Comfy and Auto1111 being GUIs ("gooeys"), the "true" PC master race does command-line generation with Hugging Face diffusers lol

I ended up using ComfyUI, but my understanding is Hugging Face adopted Gradio as the backend for building GUIs, so Auto1111 gets all the new stuff a lot more easily than Comfy
Am I supposed to train them? I saw a video where someone got the prompt from a photo. Can I just use those prompts and get my result, or do I have to train??
 

felldude

Active Member
Aug 26, 2017
572
1,695
Am I supposed to train them? I saw a video where someone got the prompt from a photo. Can I just use those prompts and get my result, or do I have to train??
You can get the prompts from the images; you don't have to train anything. Pony is already trained on certain artists.

An example from civitai

score_9, score_8_up, score_7_up, equestria_girls

Replace "equestria_girls" with any artist or category on rule34 and you have a start.
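As a toy illustration of that pattern, a tiny (hypothetical, not a real tool) helper that bolts the Pony quality tags onto whatever subject tag you swap in:

```python
# Hypothetical helper: assembles a Pony-style prompt from the usual
# score tags plus a subject tag and any extra tags you want.
PONY_QUALITY_TAGS = ("score_9", "score_8_up", "score_7_up")

def pony_prompt(subject, *extra_tags):
    """Join the quality tags, the subject tag, and any extras."""
    return ", ".join(PONY_QUALITY_TAGS + (subject,) + extra_tags)

print(pony_prompt("equestria_girls"))
# -> score_9, score_8_up, score_7_up, equestria_girls
```

Swapping `"equestria_girls"` for an artist or category tag is exactly the "replace and you have a start" step above.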
 
  • Like
Reactions: deadshots2842

Sharinel

Active Member
Dec 23, 2018
598
2,511
Am I supposed to train them? I saw a video where someone got the prompt from a photo. Can I just use those prompts and get my result, or do I have to train??
Also in addition to what felldude said, have a look here





Even if you don't use wildcards, you can use the .txt files as a database to see which characters are incorporated into Pony without having to download LoRAs
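For anyone new to wildcards: the usual convention is that a `__name__` token in the prompt gets replaced by a random line from `name.txt`. A minimal sketch of that mechanism (this assumes the common wildcards-extension layout; check your own extension's docs):

```python
import random
import re
from pathlib import Path

def expand_wildcards(prompt, wildcard_dir, rng=None):
    """Replace each __name__ token in the prompt with a random
    non-empty line from name.txt inside wildcard_dir."""
    rng = rng or random.Random()

    def pick(match):
        path = Path(wildcard_dir) / (match.group(1) + ".txt")
        lines = [ln.strip() for ln in path.read_text().splitlines() if ln.strip()]
        return rng.choice(lines)

    # Non-greedy so "__a__, __b__" is two tokens, not one big one.
    return re.sub(r"__(.+?)__", pick, prompt)
```

So `expand_wildcards("score_9, __character__, smiling", "wildcards/")` would pull a character name from `wildcards/character.txt` — which is also why those .txt files double as a browsable list of what the model knows.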
 

Sharinel

Active Member
Dec 23, 2018
598
2,511
Always experiment with your generations. Here's a 1.5 model generation on Dreamshaper8
01280.png

And here it is after I put it through an SDXL checkpoint for hires (a Pony one no less, without all that score_9 shite)

01281.png

Just in case you weren't aware, you can mix'n'match 1.5 and SDXL/Pony in some cases. Here's what I used in Forge


1714774844896.png
 

EvylEve

Newbie
Apr 5, 2021
31
56
Always experiment with your generations. Here's a 1.5 model generation on Dreamshaper8
View attachment 3600573

And here it is after I put it through an SDXL checkpoint for hires (a Pony one no less, without all that score_9 shite)

View attachment 3600575

Just in case you weren't aware, you can mix'n'match 1.5 and SDXL/Pony in some cases. Here's what I used in Forge


View attachment 3600577
Very nice result. Mind a silly question? Have you experimented with just the hires fix? Never played with "Refine"?
 

Sharinel

Active Member
Dec 23, 2018
598
2,511
Very nice result. Mind a silly question? Have you experimented with just the hires fix? Never played with "Refine"?
Refine doesn't work as well. From what I can work out, it's because the VAE would need to change midway through, and there is no dropdown to choose it. I have my VAE set to Automatic, which I suspect is why it works for hires: hires is amending a completed image, but since the refiner is amending an ongoing image it doesn't seem to work.

Although that's just me; maybe I'm missing something. ADetailer works fine though, as it does have a VAE dropdown, and it's very useful for changing the look of faces: a particular checkpoint can have a 'default' face, so just use another checkpoint to change it (also handy if you have a 1.5 LoRA whose face you want to use in SDXL...)
 

devilkkw

Member
Mar 17, 2021
323
1,093
Always experiment with your generations. Here's a 1.5 model generation on Dreamshaper8
View attachment 3600573

And here it is after I put it through an SDXL checkpoint for hires (a Pony one no less, without all that score_9 shite)

View attachment 3600575

Just in case you weren't aware, you can mix'n'match 1.5 and SDXL/Pony in some cases. Here's what I used in Forge


View attachment 3600577
Nice, some parts get really good details, but it seems hires ruined the work, especially on the hands. Have you tried a lower denoise?
 

EvylEve

Newbie
Apr 5, 2021
31
56
Refine doesn't work as well. From what I can work out, it's because the VAE would need to change midway through, and there is no dropdown to choose it. I have my VAE set to Automatic, which I suspect is why it works for hires: hires is amending a completed image, but since the refiner is amending an ongoing image it doesn't seem to work.

Although that's just me; maybe I'm missing something. ADetailer works fine though, as it does have a VAE dropdown, and it's very useful for changing the look of faces: a particular checkpoint can have a 'default' face, so just use another checkpoint to change it (also handy if you have a 1.5 LoRA whose face you want to use in SDXL...)
From my little experience I have to wholly agree with you. The VAE is actually one of the issues I'm fighting with refining; the only semi-workaround I've found so far is using models with a baked-in VAE and letting Stable Diffusion manage it via Automatic, yet sometimes I get messy blobs of random noise/colors.

I'm actually trying to generate with anime/cartoonish-style models (way easier to get what I really want), then make the result look a bit more realistic through other steps, avoiding heavy inpainting or other Photoshop re-edits.

So far the only positive results I've got were through the hires fix and with prompts from file, but there's still a long way to go.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
You can use the refiner for SD 1.5 models too; it's not just limited to XL models. This means you can use a cartoon-oriented ckpt and then switch to a photorealistic one with the refiner function. You can also do this with the hires fix, meaning you use a cartoon ckpt as the main model and then switch to a photorealistic one in hires fix. I made a post about this a while ago. I wish it were possible to select the VAE in hires fix and in the refiner; that way you could mix and match SD 1.5 models with XL models, etc.
 

devilkkw

Member
Mar 17, 2021
323
1,093
A fully automated workflow for "image 2 text 2 image" or "image 2 image"
kkw-automated.png

This workflow loads an image; if there's a prompt, it uses it to make a new image; if not, it uses a BLIP model to generate a prompt.
I've made it as simple as possible; you only have to select a few switches for generation.
It also has the ability to load LoRAs and swap things in the prompt.

image 2 text 2 image sample:
kkw-automated-i2t2i-_00002_.png
changing subject kkw-automated-i2t2i-_00003_.png

Image 2 Image sample (subject change):
foxy
kkw-automated-i2i-_00003_.png
dog kkw-automated-i2i-_00002_.png
cat kkw-automated-i2i-_00001_.png
tom cruise
kkw-automated-i2i-_00004_.png
emma watson
kkw-automated-i2i-_00005_.png

All images have the workflow included.
Hope you like it and experiment with it.
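The prompt-selection logic described above (use the supplied prompt if there is one, otherwise fall back to a captioning model) boils down to something like this sketch; `caption_fn` stands in for whatever BLIP-style interrogator node the workflow wires in:

```python
def choose_prompt(user_prompt, caption_fn, image_path):
    """Use the supplied prompt when there is one; otherwise fall back
    to captioning the input image (caption_fn stands in for a
    BLIP-style interrogator)."""
    if user_prompt and user_prompt.strip():
        return user_prompt.strip()
    return caption_fn(image_path)
```

In the actual workflow this branch is done with switch nodes rather than Python, but the control flow is the same.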
 

felldude

Active Member
Aug 26, 2017
572
1,695
I have been working on a 2K Pony training.
It was around 10 GB of training data, around 1000 high-quality images.

Here are some test images if anyone wants to compare the results with their favorite Pony model.

ComfyUI_00078_.png ComfyUI_00972_.png ComfyUI_00684_.png
 
  • Like
Reactions: JValkonian

JValkonian

Member
Nov 29, 2022
285
256
A fully automated workflow for "image 2 text 2 image" or "image 2 image"
View attachment 3610374

This workflow loads an image; if there's a prompt, it uses it to make a new image; if not, it uses a BLIP model to generate a prompt.
I've made it as simple as possible; you only have to select a few switches for generation.
It also has the ability to load LoRAs and swap things in the prompt.

image 2 text 2 image sample:
View attachment 3610395
changing subject View attachment 3610388

Image 2 Image sample (subject change):
foxy
View attachment 3610392
dog View attachment 3610393
cat View attachment 3610394
tom cruise
View attachment 3610391
emma watson
View attachment 3610390

All images have the workflow included.
Hope you like it and experiment with it.
 

mailpa

New Member
Sep 5, 2021
2
3
Is there a better way to use LoRAs in ComfyUI? I'm used to placing a LoRA loader right next to the base model and linking it to every sampler I use. Has anyone tested other architectures, like placing it only on a second sampler that finishes the image?
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Is there a better way to use LoRAs in ComfyUI? I'm used to placing a LoRA loader right next to the base model and linking it to every sampler I use. Has anyone tested other architectures, like placing it only on a second sampler that finishes the image?
Yes, I did that. A LoRA changes the very first latent you get so much that not having it on your first latent produces very different results in the final image.
 
  • Like
Reactions: mailpa