[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

I forgot to mention that the image aspect ratio can have an impact on overall image quality. Most often a portrait format gives better results than a square 1:1 image. I recommend using either 512x768 or 640x960; the larger resolution is better if your PC can run it. Clip skip 2 is very commonly used, both for generation and when training models. It can have a profound effect on image quality and also fix things such as multiple navels etc. Highly recommended, though not always necessary.
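As a rough illustration (not from the post itself), here is how those settings map onto the diffusers library, assuming a diffusers version recent enough that StableDiffusionPipeline accepts a clip_skip argument (0.21+). The checkpoint name is just a common placeholder:

```python
import torch
from diffusers import StableDiffusionPipeline

# Any SD 1.5 checkpoint works here; this one is a commonly used placeholder
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "portrait photo of a woman, detailed face",
    width=512,
    height=768,   # 2:3 portrait format rather than 1:1
    clip_skip=2,  # skip the last CLIP text-encoder layer, as recommended above
).images[0]
image.save("portrait.png")
```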

00142-1633903613.png
This image demonstrates ADetailer better; I also used clip skip 2.
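For anyone curious what ADetailer automates, a hand-rolled sketch of the same idea (detect the face, then inpaint just that region) might look like the following. It uses OpenCV's bundled Haar cascade as a stand-in for ADetailer's actual detection models, so treat it as an approximation, not the extension's code:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

img = Image.open("00142-1633903613.png").convert("RGB")

# Detect faces and build a white-on-black mask over them
gray = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mask = Image.new("L", img.size, 0)
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    mask.paste(255, (x, y, x + w, y + h))  # white = region to repaint

# Inpaint only the masked face, at low strength to keep the composition
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
fixed = pipe("detailed face, sharp eyes", image=img, mask_image=mask,
             strength=0.4).images[0]
fixed.save("face_fixed.png")
```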

00144-1633903613.png 00145-1633903613.png

I made a similar image from namhoang909's prompt, and for the second one I used the face portrait with ReActor for the face swap. If it's not to your taste, keep in mind this is only a demo.
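ReActor itself is a WebUI/ComfyUI extension, but under the hood it builds on insightface's inswapper model. A minimal standalone sketch of that kind of face swap (assuming the inswapper_128.onnx weights have been downloaded separately) could look like this:

```python
import cv2
import insightface
from insightface.app import FaceAnalysis

# Face detector/embedder, plus the swapper model (weights fetched beforehand)
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))
swapper = insightface.model_zoo.get_model("inswapper_128.onnx")

target = cv2.imread("generated.png")      # image whose face gets replaced
source = cv2.imread("face_portrait.png")  # face to transplant
src_face = app.get(source)[0]

for face in app.get(target):
    target = swapper.get(target, face, src_face, paste_back=True)
cv2.imwrite("swapped.png", target)
```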
 

Mr-Fox

There have been people asking about creating consistent characters, changing outfits, etc.
I took the image from my previous post into OpenPose, changed the outfit in the prompt, and swapped the face again with ReActor.
This is one solution; there are others, of course.
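A hedged sketch of the same approach using diffusers' ControlNet support: extract the pose from the earlier image, then generate with a new outfit prompt. The model names are the commonly published ones and the prompt is invented; the face swap would still happen afterwards in ReActor:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image
from controlnet_aux import OpenposeDetector

# Extract an OpenPose skeleton from the previous image
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = openpose(load_image("00142-1633903613.png"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Same pose, different outfit in the prompt
image = pipe("woman in a red evening dress, studio lighting",
             image=pose).images[0]
image.save("new_outfit.png")
```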

00149-4291493693.png 00150-4291493693.png 00008-1091328577.png
 

Sharinel

I posted some more from the same batch (images below, others on Civitai):
View attachment 3191917 View attachment 3191918 View attachment 3191920 View attachment 3193037 View attachment 3193039 View attachment 3193046
I liked the look of these and tweaked them with some of the styles I have saved. Some nice outcomes. I've posted PNGs so hopefully folks can see just how they differ from Pico's.

00235.png

00237.png 00238.png 00239.png 00240.png 00236.png
 

modine2021

Don't know what the deal is, but all my images are coming out like this, no matter the model, even with default settings and simple prompts.
 

onyx

Don't know what the deal is, but all my images are coming out like this, no matter the model, even with default settings and simple prompts.
1703398651776.png

Doesn't seem to be an issue with your prompt (I generated the image above using PNG Info > Send to txt2img). Have you checked the Extensions tab for updates?
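As an aside, the PNG Info trick works because the WebUI embeds the generation parameters in a PNG text chunk named "parameters"; a couple of lines of Pillow read it back out:

```python
from PIL import Image

img = Image.open("1703398651776.png")
# A1111 stores the prompt, seed, sampler etc. under the "parameters" key
print(img.info.get("parameters", "no embedded parameters found"))
```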
 

me3

Been testing some upscaling and "post processing", I guess it could be called.
I "stole" an image from this post (so credit etc. for the original image should go there) to see how things worked for img2img.
Been having some horrible OOM errors which really don't make sense, i.e. failing to allocate 400 MB more when it's only using 4 out of 6 GB... so things have gone even slower than usual. :/
Anyway, just some results. I really need to find a way to fix those hands; nothing has worked so far, but it's SD 1.5 so I might need to do some more scaling and maybe double masking.

temp_qaibg_00005_.jpg
temp_rsked_00007_.jpg

Need to work out the memory issues first though, which is really annoying. Samplers are running really slow; not overflowing, but still deciding to run at 60-140 s/it. Tiled VAE hits OOM without even reaching the VRAM limit, and I get OOM when samplers have been running fine for the previous steps, with or without overflowing, then all of a sudden it OOMs on the next step. Run/continue the prompt and it works fine. I really think they broke something somewhere.
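Not the poster's fix, and the workflow above is ComfyUI rather than diffusers, but for reference these are the usual memory-saving switches for this class of OOM on a small (6 GB) card, shown here with diffusers:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_attention_slicing()   # compute attention in slices: slower, lower peak VRAM
pipe.enable_vae_tiling()          # decode the VAE in tiles instead of one pass
pipe.enable_model_cpu_offload()   # park idle submodules in system RAM
```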

Edit: added the JSON workflow file.
 

me3

Can you please attach the workflow file?
Since I'm doing a lot of testing and trying to figure out which nodes to use and whether there are any settings I can keep "static", it's safe to say it's a huge mess of similar nodes connected to the same things and a lot of bypassing and "wiring". I'll try to find some time to clean up much of the mess and post that.

If you want a "basic" rundown of things though, to have a go at something similar (there's a rough code sketch after this list):
  • Create or load a base image
  • (optional) upscale the image/latent slightly
  • add noise/denoise slightly and run through a sampler using the same model and prompt you created the image with
  • add noise/denoise at a lower amount than in the step above, and sample with the same prompt but a different model
  • (optional) upscale the image/latent slightly
  • the last "stage" uses a third model and its own prompt to clean up, highlight, style, detail or whatever you want to accomplish. So the sampling can either finish with just the last few steps, or add a bit of denoising and run that.
How much difference you see between each step is going to depend massively on your prompt, models and style. The two images I posted are both just different "last stage" samplings: one is a few-step "touch up" while the other uses some denoising.
For me it takes 25-30 min for a full run, with two of the models being SD 1.5. I could probably cut that down by removing some of the VAE nodes, but when you want/need to keep track of changes they are kind of required.
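Here is a rough sketch of those stages using diffusers' img2img pipeline instead of ComfyUI nodes. The model IDs, prompts and strengths are illustrative assumptions, not the settings used for the posted images:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

def stage(model_id, image, prompt, strength, steps=30):
    """One pass: re-noise the image by `strength`, then re-sample it."""
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt, image=image, strength=strength,
                num_inference_steps=steps).images[0]

img = load_image("base.png")  # create or load a base image
img = img.resize((int(img.width * 1.25), int(img.height * 1.25)))  # slight upscale

# Light denoise with the *same* model and prompt as the base image
img = stage("runwayml/stable-diffusion-v1-5", img,
            "original prompt here", strength=0.35)

# Lower denoise, same prompt, *different* model (placeholder name)
img = stage("your/second-sd15-checkpoint", img,
            "original prompt here", strength=0.2)

# Last stage: a third model with its own clean-up/detail prompt
img = stage("your/third-checkpoint", img,
            "highly detailed, sharp focus, clean skin", strength=0.15)
img.save("refined.png")
```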

I've included 3 images from my last test just to show the stages. It's not that good an example, as the changes in the last two images aren't all that large; you might need to keep flipping between them to really see. #1 is the base image.

_ex_1.jpg _ex_2.jpg _ex_3.jpg

Edit: workflow file with nodes etc. added to this post.
 

me3

Do post the prompt as is. I post mine as they are, with all bits and blobs sticking here and there. It takes only one person to start breaking the thread's rules before the avalanche is unleashed. So, for the sake of not setting a precedent, please do post the actual workflow.
I've added a somewhat cleaned-up JSON (zipped, as it can't be uploaded otherwise) to the post with the first two images, and linked it in the other post.
The JSON has the same nodes, linked the same way/order etc. as what was used for all the posted images. As for the exact settings, steps, and which models were used in each case, I have no idea. If you check the names of the first two images you might notice they are ComfyUI temp files, which get wiped on restarting Comfy, so I don't even have them any more.
I think the nodes are limited to base, WAS and Comfyroll.
Note that this is set up for loading a base image, so that small bit can simply be replaced with pretty much just an empty latent and sampler etc., if anyone wants to use it "directly". Also, the top checkpoint loader (and prompts) are set up for SDXL, so that needs to be tweaked if anyone wants to use just SD 1.5.
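For anyone who grabs the JSON: a workflow exported in API format ("Save (API Format)" in ComfyUI) can be queued against a running instance over its HTTP API, by default on port 8188. A minimal sketch:

```python
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # returns the queued prompt id
```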
 

namhoang909

I have come across a few images like this on Civitai where the site has detected certain LoRAs being used, but I don't see the usual LoRA format <lora_name:strength> in the prompt. How is that possible?
1703657024275.png
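One possible explanation, sketched with diffusers: a LoRA can be applied to the pipeline itself rather than through a prompt tag, in which case some UIs record it only in the image metadata. The file path and scale below are hypothetical:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA file, applied at load time; no <lora:...> tag needed
pipe.load_lora_weights("path/to/some_lora.safetensors")
pipe.fuse_lora(lora_scale=0.8)  # bake the LoRA in at a chosen strength

image = pipe("a plain prompt with no lora tags").images[0]
```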
 