[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
That UI looks way worse than the default experience you get lol, I'm assuming you can add on to it but if not that's an entirely pointless design IMO.
Pointless, no. Different, yes, and much more "open" in what you can do, but YOU have to do it.
One is a UI made for simple, basic usage and some more advanced things, aimed at everyone, but you're limited to those options.
With the other, it's much more up to you. In a1111, for example, you can't really "chain" rendering one image through different models.

It'll suit most needs just fine, do the job in most cases, and cover most people's needs as an "introduction" to AI generation.
I guess an analogy might be: WYSIWYG editors work great, but sometimes you need or want to write the "code" directly.
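To make the "chaining" point concrete, here is a minimal sketch of pushing one image through two different models in code, assuming the diffusers library; the checkpoint names, prompt and strength are placeholders, and ComfyUI expresses the same idea as a node graph rather than a script.

```python
# Minimal sketch: "chain" one image through two different models.
# Stage 1 generates with model A (txt2img), stage 2 repaints the same
# image with model B (img2img) - the kind of graph ComfyUI lets you wire up.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

prompt = "glamour shot photo of a woman walking down a tropical beach"

# Stage 1: base image from model A.
txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
base = txt2img(prompt, width=512, height=768).images[0]

# Stage 2: repaint the same image with a different checkpoint (placeholder name).
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "some-user/another-checkpoint", torch_dtype=torch.float16
).to("cuda")
final = img2img(prompt=prompt, image=base, strength=0.5).images[0]
final.save("chained.png")
```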
 
  • Like
Reactions: Mr-Fox

devilkkw

Member
Mar 17, 2021
323
1,093
That UI looks way worse than the default experience you get lol, I'm assuming you can add on to it but if not that's an entirely pointless design IMO.
The screenshot is just the default layout on first open; you can add or remove options by right-clicking, and there are many options to explore.
I don't have the official ComfyUI, and this is a simple way for me to check how it works. Maybe in the future I'll switch to it, but for now I like a1111 too much, or to be honest I just have a better experience with it.
 
  • Like
Reactions: Sepheyer and Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
View attachment 2772649
Widescreen with little/no repeats (y)
Single subject (y)
Standing subject (y)
Job done :unsure:

:p
I forgot to say that I have managed to get a widescreen image with the subject standing and without any duplicates, but the subject was always in the distance. I achieved this with a regional prompting extension that I tried. Imagine a glamour shot photo of a beautiful woman walking down a tropical beach in a skimpy bikini; this would be awesome in widescreen, but you wouldn't want her to be far away. I have not found a way yet to achieve this without doubles, or the subject lying down, or being far away, or any other weirdness. It's not a specific image that I'm after, though, but a composition.

Example:
1689453281632.png
1569185-1311x737-[DesktopNexus.com].jpg
 

Dagg0th

Member
Jan 20, 2022
279
2,746
I forgot to say that I have managed to get a widescreen image with the subject standing and without any duplicates, but the subject was always in the distance. I achieved this with a regional prompting extension that I tried. Imagine a glamour shot photo of a beautiful woman walking down a tropical beach in a skimpy bikini; this would be awesome in widescreen, but you wouldn't want her to be far away. I have not found a way yet to achieve this without doubles, or the subject lying down, or being far away, or any other weirdness. It's not a specific image that I'm after, though, but a composition.

Example:
View attachment 2773257
View attachment 2773267
Have you tried using ControlNet inpaint to outpaint an image?

00000-2166272214.jpg

00008-395219060.jpg
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Have you tried using ControlNet inpaint?

View attachment 2773377

View attachment 2773387
Awesome! :love:
No, I have not messed with ControlNet at all. It's on my to-do list for sure. Thank you, this looks very promising.
What is the workflow? Do you render an image with txt2img first and then use the ControlNet inpaint?
As everyone has already heard to death, I'm very fond of hires fix. Is there a way to incorporate it in the process somehow?
 
  • Like
Reactions: Sepheyer

Dagg0th

Member
Jan 20, 2022
279
2,746
Awesome! :love:
No, I have not messed with ControlNet at all. It's on my to-do list for sure. Thank you, this looks very promising.
What is the workflow? Do you render an image with txt2img first and then use the ControlNet inpaint?
As everyone has already heard to death, I'm very fond of hires fix. Is there a way to incorporate it in the process somehow?
Yes, you render your image first from txt2img, then img2img.

I'm also very fond of using Hires.fix, but in this case it's not practical, because you'd have to render an image twice as big as the default 512 or 768 that most models use.

First, create your image with a simple prompt:
"glamour shot photo of a beautiful woman walking down a tropical beach in a skimpy bikini"
512 x 768
00000-2166272214.jpg

After that, send it to img2img:
Put the image in ControlNet and activate Inpaint, using inpaint+lama as the preprocessor; it's very important to set the resize mode to "Resize and Fill".

Use a denoising strength between 0.75 and 1, and use your favorite widescreen size, in this case 16:9.
Screenshot_15.jpg

This will expand the image from 512x768 to 1368x768 (16:9).

That is why it's not possible to use Hires.fix: it would try to go from 1024x1536 (if using 2x hires) to 2048x1536, it would take forever to render, and the result would be terrible. I tried.

You will probably have to try a few times before you get a satisfying image.

This was my first try, using a denoise of 0.75; the result was not good.

For the final one I used a denoise of 0.9, but I still had to use inpaint to fix a few errors.

After that, you can use your favorite upscaler and do a final inpaint retouch.
00011-3034686198.jpg
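For anyone who wants to reproduce the idea outside the a1111 UI, here is a rough sketch of the same 512x768-to-widescreen outpaint using a plain inpainting checkpoint with the diffusers library. This is not the ControlNet inpaint+lama route described above, just the same pad-and-fill principle, and the file names, checkpoint and settings are assumptions.

```python
# Rough sketch: outpaint a 512x768 portrait to ~16:9 by padding the canvas
# and masking the new area, then letting an inpainting model fill it in.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

src = Image.open("start_512x768.png").convert("RGB")   # the txt2img result
target_w, target_h = 1368, 768                          # widescreen canvas

# Centre the original on the wider canvas; white mask = area to be generated.
canvas = Image.new("RGB", (target_w, target_h), (127, 127, 127))
mask = Image.new("L", (target_w, target_h), 255)
x_off = (target_w - src.width) // 2
canvas.paste(src, (x_off, 0))
mask.paste(Image.new("L", src.size, 0), (x_off, 0))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="glamour shot photo of a beautiful woman walking down a tropical beach in a skimpy bikini",
    image=canvas,
    mask_image=mask,
    width=target_w,
    height=target_h,
    num_inference_steps=30,
).images[0]
out.save("outpainted_16x9.png")
```

As with the a1111 workflow above, expect to run it a few times and inpaint the seams afterwards.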
 
Last edited:

me3

Member
Dec 31, 2016
316
708
Outpainting is most likely the way to go IF you have a background that allows for it. If there are specific patterns or elements, you might need to do very small "additions" each time; with more general, "easy" backgrounds you might get away with doing almost full-image additions.

The previous image I posted was kinda meant as a joke, as I knew it wasn't what you were after, but it was an easy generation that fit the conditions (hence the smilies).
So I kept these two on hand; the basics are in the prompt, you just need to work with it a bit more.
00012-1847212736.png
00022-1847212736.png
 
Last edited:
  • Red Heart
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Yes, you render your image first from txt2img, then img2img.

I'm also very fond of using Hires.fix, but in this case it's not practical, because you'd have to render an image twice as big as the default 512 or 768 that most models use.

First, create your image with a simple prompt:
"glamour shot photo of a beautiful woman walking down a tropical beach in a skimpy bikini"
512 x 768
View attachment 2773438

After that, send it to img2img:
Put the image in ControlNet and activate Inpaint, using inpaint+lama as the preprocessor; it's very important to set the resize mode to "Resize and Fill".

Use a denoising strength between 0.75 and 1, and use your favorite widescreen size, in this case 16:9.
View attachment 2773452

This will expand the image from 512x768 to 1368x768 (16:9).

That is why it's not possible to use Hires.fix: it would try to go from 1024x1536 (if using 2x hires) to 2048x1536, it would take forever to render, and the result would be terrible. I tried.

You will probably have to try a few times before you get a satisfying image.

This was my first try:

Use inpaint to fix any errors.

After that, you can use your favorite upscaler and do a final inpaint retouch.
View attachment 2773495
Thank you very much for the help. Red Heart1.jpg I will look into this.
 
  • Like
Reactions: Dagg0th

sharlotte

Member
Jan 10, 2019
298
1,586
Dagg0th, that's a great post - it should be added to the first page. I saw a video on this, again by S Kamph: , which is also very valuable.

In the meantime, I've tried to see what I could do panorama-wise based on your prompt, me3 (I did some modifications to the sampling and some other sliders), with my graphics card (not boasting, I could only afford an RTX 3060 (an MSI Ventus) with 12 GB VRAM), and added Mr-Fox's favourite hi-res fix on top. Here are some of the results I got, no rework, and I can do 4096x1024 (I have not tried to push the height further):
00000-1847212736-(8k, best quality, masterpiece_1.2),  ultra-detailed,   A widescreen photogra...png 00002-1847212736-(8k, best quality, masterpiece_1.2),  ultra-detailed,   A widescreen photogra...png 00003-1847212736-(8k, best quality, masterpiece_1.2),  ultra-detailed,   A widescreen photogra...png
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
Dagg0th, that's a great post - it should be added to the first page. I saw a video on this, again by S Kamph: , which is also very valuable.

In the meantime, I've tried to see what I could do panorama-wise based on your prompt, me3 (I did some modifications to the sampling and some other sliders), with my graphics card (not boasting, I could only afford an RTX 3060 (an MSI Ventus) with 12 GB VRAM), and added Mr-Fox's favourite hi-res fix on top. Here are some of the results I got, no rework, and I can do 4096x1024 (I have not tried to push the height further):
View attachment 2774257 View attachment 2774258 View attachment 2774259
Good call. I added that OG's post into the links section.
 

me3

Member
Dec 31, 2016
316
708
I can't check the prompts in sharlotte's images right now, so I'm not sure if it's already in there, but basically the delayed "subject" gets put into the "empty" spots in the generation. Keep in mind that's what is empty for the AI at that specific step, not what we would consider empty space in the finished image. So a simple +/- 1 in steps can change quite a bit.
Also, layering the image can give some nice results, i.e. start by drawing the background, then the subject, and finish off with the foreground.

The two "mushroom" images I posted here use some of this, if I remember correctly.
 

sharlotte

Member
Jan 10, 2019
298
1,586
My apologies if this has been covered already (I did a search but could not find it). I started messing around with Latent Couple.
So first, install the two extensions as per the below (and make sure to select the , as there's another one but it does not have the ability to add a sketch).
latentcontrol0.JPG

Once done, you may need to restart the GUI.
In my case, I just used Paint and created a frame like this Untitled.png, making sure that each 'square' (you can use whatever shape you want) is coloured in a different colour (important for Latent Couple to identify the different sections later).
Once that's done, in txt2img, enable both Composable LoRA and Latent Couple and upload your sketch like so: latentcontrol1.JPG

Once done, click on 'I've finished my sketch' and you will see an area for the general prompt and a sub-prompt for each coloured section you defined in your sketch. Fill them all in with the required info as per the below: latentcontrol2.JPG

and once done, click on 'prompt info update'.
This will populate your positive prompt and you're ready to go.
latentcontrol4.JPG
I generated the below using hi-res (the PNG contains the generation info as usual). It took a while (less than 30 seconds without hi-res, close to 40 minutes with hi-res (...)), but that may be down to my prompts and selections (I'll test more to check that).

On the below (hi-res) I may have wanted to add some negative prompts (I forgot, as I was excited to try it out...).
1664058566-a masterpiece photograph of a beautiful winter sunrise in snow covered mountains, 8...png

Below without hires:
2095242074-a masterpiece photograph of a beautiful winter sunrise in snow covered mountains, 8...png 2095242075-a masterpiece photograph of a beautiful winter sunrise in snow covered mountains, 8...png

Sorry for the long post but wanted to share ;)
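As a side note, if you'd rather not open Paint, a region sketch like the one above can also be generated with a few lines of code; a minimal example using Pillow, assuming two side-by-side regions (the colours and sizes are arbitrary, they just need to be distinct flat colours):

```python
# Minimal sketch: build a two-region colour mask for Latent Couple with Pillow.
from PIL import Image, ImageDraw

w, h = 1024, 512
mask = Image.new("RGB", (w, h), (255, 0, 0))                 # left half: red
draw = ImageDraw.Draw(mask)
draw.rectangle([w // 2, 0, w - 1, h - 1], fill=(0, 0, 255))  # right half: blue
mask.save("latent_couple_regions.png")
```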
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I would start my search by looking at LoRAs or TIs on civitai. As for tools/extensions, ControlNet OpenPose is your best bet.
Sebastian Kamph tutorial on the new ControlNet 1.1:
(not gun related, but it's a start).
I'm thinking that if you can pose the subject and add a gun in the prompt, hopefully SD can connect them together in the image.
There are also things like regional prompting and Latent Couple; maybe this is necessary to give SD some help. These are extensions that tell SD where in the image a given part of the prompt is relevant, so potentially you could place the gun in the hand of the subject this way.
On civitai there are ready-made poses for the OpenPose editor. This is also something to look into; I would search for action poses.
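For anyone who prefers code over the UI, here is a rough sketch of the ControlNet OpenPose idea using the diffusers and controlnet_aux libraries; the reference photo and prompt are placeholders, and in a1111 the ControlNet extension does all of this through the UI instead.

```python
# Rough sketch: guide the composition with a pose skeleton via ControlNet OpenPose.
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract a pose skeleton from any reference photo (or load a ready-made pose image).
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = openpose(Image.open("reference_action_pose.jpg"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="photo of a woman aiming a handgun, dynamic action pose, detailed",
    image=pose,                 # the skeleton constrains where and how the subject stands
    num_inference_steps=30,
).images[0]
image.save("posed_subject.png")
```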
 
Last edited: