[Stable Diffusion] Prompt Sharing and Learning Thread

Thalies

New Member
Sep 24, 2017
13
50
I saw that you are interested in focus. I'm not sure in what context or terms you meant, but I have a few tips.
You can use "focus" for the composition example "ass focus". This will generate images with the framing being centered on the bottom midsection from behind or slightly from the side. You can use " hips focus", this will do the same but more from the front. That's one way of using focus. Then you can say "sharp focus" or "soft focus", "front focus" (speculating a little), "background focus" etc. When using simply "sharp focus", it means the lens focus on the subject, mainly the face but also the body. I have not experimented so much with a soft focus or an artistic selective focus on the background and have the subject partially unfocused. But it's something to try. I typically use "focus" to reinforce the composition I have in mind and use it in combination with either "cowboy shot", "half body shot" or "full body shot". Then the "body part focus" will guide the camera in on the part you wish to be in the center focus. This can give more interesting images. If you combine it with a style of photography such as "action photography", "lifestyle" or "documentary" etc. This will also have an effect on the composition.
I use "beauty photography" often and when I want a more analoge or grainy image I simply use "large format beauty photography" as an example. In that case "large format" is what gives the filmgrain.
You can use different variants of "depth of field" to enhance or reinforce the focus you are after. I use mostly regular "depth of field" but sometimes "shallow depth of field". I like how it makes the subject stand out more from the backdrop. It can sometimes almost create a 3d effect.
I'm sorry for not expressing myself clearly. By Fooocus, I meant this:
 

modine2021

Member
May 20, 2021
417
1,389
I saw that you are interested in focus. I'm not sure in what context or terms you meant, but I have a few tips.
You can use "focus" for the composition example "ass focus". This will generate images with the framing being centered on the bottom midsection from behind or slightly from the side. You can use " hips focus", this will do the same but more from the front. That's one way of using focus. Then you can say "sharp focus" or "soft focus", "front focus" (speculating a little), "background focus" etc. When using simply "sharp focus", it means the lens focus on the subject, mainly the face but also the body. I have not experimented so much with a soft focus or an artistic selective focus on the background and have the subject partially unfocused. But it's something to try. I typically use "focus" to reinforce the composition I have in mind and use it in combination with either "cowboy shot", "half body shot" or "full body shot". Then the "body part focus" will guide the camera in on the part you wish to be in the center focus. This can give more interesting images. If you combine it with a style of photography such as "action photography", "lifestyle" or "documentary" etc. This will also have an effect on the composition.
I use "beauty photography" often and when I want a more analoge or grainy image I simply use "large format beauty photography" as an example. In that case "large format" is what gives the filmgrain.
You can use different variants of "depth of field" to enhance or reinforce the focus you are after. I use mostly regular "depth of field" but sometimes "shallow depth of field". I like how it makes the subject stand out more from the backdrop. It can sometimes almost create a 3d effect.
hmm.. learn something new every day (y)(y)(y)
 
  • Like
Reactions: Mr-Fox

felldude

Active Member
Aug 26, 2017
572
1,695
I'll throw out that for a photorealistic but blurry, almost green-screened effect, the LCM sampler can produce high-quality images. LCM can also make a smooth, almost 3D-looking image when doing image-to-image without negative prompts; just lower the CFG scale.

The DDPM sampler is my go-to now for photoreal, even over the new heunpp2.

I can't stress enough the importance of adjusting the CFG scale, as some samplers will be horrible at the default of 8.
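For anyone who wants to poke at the same idea outside the Web UI, here is a rough diffusers sketch (not felldude's exact setup; the model name, strengths, steps and guidance values are placeholders): swap the scheduler and drop the guidance (CFG) scale for the LCM-style pass, keep it higher for the DDPM pass.

```python
# Rough sketch only: swap samplers (schedulers) and lower the CFG/guidance scale
# for img2img. Model name, strengths and step counts are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, LCMScheduler, DDPMScheduler

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
init_image = Image.open("input.png").convert("RGB")

# LCM-style pass: no negative prompt, low guidance (in practice you would usually
# pair this scheduler with an LCM LoRA or an LCM-distilled checkpoint)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
smooth = pipe(
    prompt="photo of a woman, soft lighting",
    image=init_image,
    strength=0.5,
    guidance_scale=2.0,        # well below the usual default of 7-8
    num_inference_steps=8,
).images[0]

# DDPM-style pass for a more conventional photoreal look
pipe.scheduler = DDPMScheduler.from_config(pipe.scheduler.config)
photoreal = pipe(
    prompt="photo of a woman, soft lighting",
    negative_prompt="blurry, deformed",
    image=init_image,
    strength=0.5,
    guidance_scale=6.0,
    num_inference_steps=30,
).images[0]
```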

also...



Anus Helper.jpg
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I'll throw out that for a photorealistic but blurry, almost green-screened effect, the LCM sampler can produce high-quality images. LCM can also make a smooth, almost 3D-looking image when doing image-to-image without negative prompts; just lower the CFG scale.

The DDPM sampler is my go-to now for photoreal, even over the new heunpp2.

I can't stress enough the importance of adjusting the CFG scale, as some samplers will be horrible at the default of 8.

also...



View attachment 3293599
We all sure need an anus.. :LOL: Can I get some help with mine?:ROFLMAO:
 

Lun@

Member
Dec 27, 2023
249
1,501
I'm so glad I stumbled upon this thread. I just started playing around with Stable Diffusion a couple of days ago and have kind of got addicted to trying to get beautiful images from it. I'm using img2img and have been experimenting with things like bokeh etc.

My biggest problem isn't the extra limbs/duplicate heads anymore since I filled out the negative keywords more thoroughly and kept size to 512 x 512, it's the eyes. Out of 20 images, only a couple will have normalish eyes, the rest are pretty much mutant or like runny eggs :oops:

I started using inpaint and doing a mask over the eyes which has helped a bit but not completely. The link for the beginners guide stuff is great, I'm going to read through all that.

Have I missed something, or is there a way to save all your settings in the Web UI without having to fill in the keywords etc. every time I launch it?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
" Fooocus is a rethinking of Stable Diffusion and Midjourney’s designs:

  • Learned from Stable Diffusion, the software is offline, open source, and free.
  • Learned from Midjourney, the manual tweaking is not needed, and users only need to focus on the prompts and images. "

    Yeah, if only... I'm highly sceptical. I don't believe any software is at the point yet where it doesn't need a helping hand.
    It's not a hassle to manually tweak settings etc.; it's more control. You can be very creative with A1111 thanks to all the control you have.
 
  • Like
Reactions: DD3DD and Sepheyer

Thalies

New Member
Sep 24, 2017
13
50
" Fooocus is a rethinking of Stable Diffusion and Midjourney’s designs:

  • Learned from Stable Diffusion, the software is offline, open source, and free.
  • Learned from Midjourney, the manual tweaking is not needed, and users only need to focus on the prompts and images. "

    Yeah, if only... I'm highly sceptical. I don't believe any software is at the point yet where it doesn't need a helping hand.
    It's not a hassle to manually tweak settings etc.; it's more control. You can be very creative with A1111 thanks to all the control you have.
I understand the concerns about the need for manual adjustments in image generation software, but from my experience as a beginner, Fooocus has been refreshingly easy to use. It's not as overwhelming as other tools I've tried. The upscale, inpaint, and outpaint functions are particularly user-friendly and have helped me a lot.

2024-01-24_14-29-47_6382.png
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,047
4,002
I'm so glad I stumbled upon this thread. I just started playing around with Stable Diffusion a couple of days ago and have kind of got addicted to trying to get beautiful images from it. I'm using img2img and have been experimenting with things like bokeh etc.

My biggest problem isn't the extra limbs/duplicate heads anymore since I filled out the negative keywords more thoroughly and kept size to 512 x 512, it's the eyes. Out of 20 images, only a couple will have normalish eyes, the rest are pretty much mutant or like runny eggs :oops:

I started using inpaint and doing a mask over the eyes which has helped a bit but not completely. The link for the beginners guide stuff is great, I'm going to read through all that.

Have I missed something, or is there a way to save all your settings in the Web UI without having to fill in the keywords etc. every time I launch it?
Are you using Hires.fix? If not, it's best to switch that on, even if you only upscale a small amount (~10% or so). It basically gives SD another go at generating, honing the result further. For photorealistic, use a denoising strength between 0.25 and 0.33; cartoon/anime can go a little higher.
Use ~1.5x the number of sampling steps for the Hires steps, i.e. if you have 30 sampling steps, use 45 Hires steps.
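If it helps to see what Hires.fix is doing conceptually, here is a hedged diffusers sketch of the same two-pass idea (not the actual A1111 implementation; the model, sizes and values are placeholders): generate at the base resolution, upscale, then give SD that second go at low denoising strength with ~1.5x the steps.

```python
# Rough sketch of the Hires.fix idea: base generation, upscale, then a low-denoise
# second pass. Model name, sizes and values are placeholders.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "portrait photo of a woman, beauty photography, sharp focus"

low_res = base(prompt, width=512, height=768, num_inference_steps=30).images[0]

# Second pass: upscale the image, then let SD hone the result at low denoise
hires = StableDiffusionImg2ImgPipeline(**base.components)
upscaled = low_res.resize((640, 960))          # even a modest upscale helps
final = hires(
    prompt,
    image=upscaled,
    strength=0.3,              # 0.25-0.33 for photorealistic, a bit higher for anime
    num_inference_steps=45,    # ~1.5x the 30 base sampling steps
).images[0]
```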
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I'm so glad I stumbled upon this thread. I just started playing around with Stable Diffusion a couple of days ago and have kind of got addicted to trying to get beautiful images from it. I'm using img2img and have been experimenting with things like bokeh etc.

My biggest problem isn't the extra limbs/duplicate heads anymore since I filled out the negative keywords more thoroughly and kept size to 512 x 512, it's the eyes. Out of 20 images, only a couple will have normalish eyes, the rest are pretty much mutant or like runny eggs :oops:

I started using inpaint and doing a mask over the eyes which has helped a bit but not completely. The link for the beginners guide stuff is great, I'm going to read through all that.

Have I missed something, or is there a way to save all your settings in the Web UI without having to fill in the keywords etc. every time I launch it?
It's easier for us to help if you post an image so we can see what you are working with. In the output folder, grab the PNG file and upload it here. We will load it into PNG Info and can then see all the settings.
Can you explain more, why only img2img?
If your GPU can handle it, use at least 512x768. SD does better in general with portrait ratio; it can do landscape, but it's not as easy. I would recommend 640x960. The higher the resolution from the start, the more detail you can get in the end result, regardless of any steps you take after the initial generation.
A tip is to use postprocessing GFPGAN. This helps with the eyes and the face. Don't confuse it with face restoration though.
The next thing is to use After Detailer (ADetailer) with the "mediapipe_face_mesh_eyes_only" model. I typically use inpaint denoising strength 0.22.
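On the PNG Info tip above: A1111 writes the generation settings into a PNG text chunk called "parameters", so you can also pull them out without the Web UI. A minimal Pillow sketch (the filename is just a placeholder):

```python
# Read the generation settings A1111 embeds in its output PNGs.
# The filename is a placeholder; the settings live in a text chunk named "parameters".
from PIL import Image

img = Image.open("output.png")
settings = img.info.get("parameters")
print(settings)   # prompt, negative prompt, steps, sampler, CFG scale, seed, size, model hash...
```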

You can save prompts as "style" here:

Styles.png
Or you can load the last prompt and settings with the diagonal white arrow on the blue background, or do as I described above: load a PNG file with PNG Info and send it to txt2img or img2img.
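For what it's worth, the Web UI keeps those saved styles in a plain styles.csv (columns: name, prompt, negative_prompt) in the webui folder, so you can also add to it or back it up by hand. A rough sketch, with placeholder path and prompt text:

```python
# Rough sketch: append a style to A1111's styles.csv by hand.
# Path and prompt text are placeholders; assumes the file already exists with its header row.
import csv

row = {
    "name": "beauty-photo",
    "prompt": "large format beauty photography, sharp focus, depth of field",
    "negative_prompt": "blurry, deformed eyes, extra limbs",
}

with open("stable-diffusion-webui/styles.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "prompt", "negative_prompt"])
    writer.writerow(row)
```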

 

Lun@

Member
Dec 27, 2023
249
1,501
Are you using Hires.fix? If not, it's best to switch that on, even if you only upscale a small amount (~10% or so). It basically gives SD another go at generating, honing the result further. For photorealistic, use a denoising strength between 0.25 and 0.33; cartoon/anime can go a little higher.
Use ~1.5x the number of sampling steps for the Hires steps, i.e. if you have 30 sampling steps, use 45 Hires steps.
Oh, I wasn't using Hires.fix; I'll try that! My denoising strength was too high as well, I had it at 0.7.
Sampling steps were at 40.

Thanks for the tips! it's quite addictive seeing how good you can get the images :)
 

Lun@

Member
Dec 27, 2023
249
1,501
It's easier for us to help if you post an image so we can see what you are working with. In the output folder, grab the PNG file and upload it here. We will load it into PNG Info and can then see all the settings.
Can you explain more, why only img2img?
If your GPU can handle it, use at least 512x768. SD does better in general with portrait ratio; it can do landscape, but it's not as easy. I would recommend 640x960. The higher the resolution from the start, the more detail you can get in the end result, regardless of any steps you take after the initial generation.
A tip is to use postprocessing GFPGAN. This helps with the eyes and the face. Don't confuse it with face restoration though.
The next thing is to use After Detailer (ADetailer) with the "mediapipe_face_mesh_eyes_only" model. I typically use inpaint denoising strength 0.22.

You can save prompts as "style" here:

View attachment 3293829
Or you can load the last prompt and settings with the diagonal white arrow on the blue background, or do as I described above: load a PNG file with PNG Info and send it to txt2img or img2img.

This was one of my images from the output folder:

00128-4232349330.png

I was using img2img because I wanted to generate a whole bunch of variations from a single image. I'm really new to this stuff, so I'm really at the basic level :)

My graphics card is an RTX 3090, so it should be fine.

Ah, that's how I save it, thanks!
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
I'm so glad I stumbled upon this thread. I just started playing around with Stable Diffusion a couple of days ago and have kind of got addicted to trying to get beautiful images from it. I'm using img2img and have been experimenting with things like bokeh etc.

My biggest problem isn't the extra limbs/duplicate heads anymore since I filled out the negative keywords more thoroughly and kept size to 512 x 512, it's the eyes. Out of 20 images, only a couple will have normalish eyes, the rest are pretty much mutant or like runny eggs :oops:

I started using inpaint and doing a mask over the eyes which has helped a bit but not completely. The link for the beginners guide stuff is great, I'm going to read through all that.

Have I missed something, or is there a way to save all your settings in the Web UI without having to fill in the keywords etc. every time I launch it?
For image-to-image, the size of the canvas matters a lot. If you start at 512x512 with a full-shot image, you want to upscale the latent (note, not the image, but the latent) a few times (say once by 1.5x and then one more time by 1.5x, bringing it to 1152) so you can get the face rendered correctly.

In a sense this is a well-known problem (i.e. anyone who has tried doing what you are doing has run into it, including myself). The least frustrating approach to resolving it is to switch to ComfyUI, so you can clearly see what your workflow is, and then use one of the i2i workflows that are posted throughout this thread. I think the bulk of these workflows is from around September of 2023, when ComfyUI went mainstream.

Here's an illustration of how i2i takes the small image and scales it up:
cui.png
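To make the numbers concrete, here is a minimal sketch of those two 1.5x latent upscales (not Sepheyer's actual ComfyUI workflow). SD latents are 1/8 of the pixel resolution, and in a real workflow each upscale would be followed by another sampling pass to add detail back in:

```python
# Minimal sketch of two 1.5x latent upscales: 512 -> 768 -> 1152 in pixel terms.
# A 512x512 image corresponds to a 64x64 latent (4 channels, 1/8 resolution).
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 64, 64)                  # stand-in latent for a 512x512 image

latent = F.interpolate(latent, scale_factor=1.5)    # 64 -> 96   (512 -> 768 px)
# ...re-denoise here (a KSampler pass in ComfyUI, or an img2img pass)...
latent = F.interpolate(latent, scale_factor=1.5)    # 96 -> 144  (768 -> 1152 px)
# ...re-denoise again...

print(latent.shape)   # torch.Size([1, 4, 144, 144]) -> decodes to roughly 1152x1152
```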

 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,047
4,002
For image-to-image, the size of the canvas matters a lot. If you start at 512x512 with a full-shot image, you want to upscale the latent (note, not the image, but the latent) a few times (say once by 1.5x and then one more time by 1.5x, bringing it to 1152) so you can get the face rendered correctly.

In a sense this is a well-known problem (i.e. anyone who has tried doing what you are doing has run into it, including myself). The least frustrating approach to resolving it is to switch to ComfyUI, so you can clearly see what your workflow is, and then use one of the i2i workflows that are posted throughout this thread. I think the bulk of these workflows is from around September of 2023, when ComfyUI went mainstream.

Here's an illustration of how i2i takes the small image and scales it up:
View attachment 3293874

"Why not try ComfyUI?"
"ComfyUI can fix that"
"Go on, try ComfyUI..."

Don't switch! You'll go mad and all your dreams will be of spaghetti and string! ;)
 

Lun@

Member
Dec 27, 2023
249
1,501
For image-to-image, the size of the canvas matters a lot. If you start at 512x512 with a full-shot image, you want to upscale the latent (note, not the image, but the latent) a few times (say once by 1.5x and then one more time by 1.5x, bringing it to 1152) so you can get the face rendered correctly.

In a sense this is a well-known problem (i.e. anyone who has tried doing what you are doing has run into it, including myself). The least frustrating approach to resolving it is to switch to ComfyUI, so you can clearly see what your workflow is, and then use one of the i2i workflows that are posted throughout this thread. I think the bulk of these workflows is from around September of 2023, when ComfyUI went mainstream.

Here's an illustration of how i2i takes the small image and scales it up:
View attachment 3293874

Wow, a lot of great tips from you guys and I have a good bit of reading up still to do it seems ;)

I'm going to compile these tips into a document for my reference while I try these suggestions.

Thanks for this info!
 

Lun@

Member
Dec 27, 2023
249
1,501
"Why not try ComfyUI?"
"ComfyUI can fix that"
"Go on, try ComfyUI..."

Don't switch! You'll go mad and all your dreams will be of spaghetti and string! ;)
I'm just trying to get to grips with one UI as it is..:cautious:

I have to go play with my font of newfound knowledge before my dreams become spaghetti and string :oops:
 
  • Like
Reactions: Jimwalrus

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
For image-to-image, the size of the canvas matters a lot. If you start at 512x512 with a full-shot image, you want to upscale the latent (note, not the image, but the latent) a few times (say once by 1.5x and then one more time by 1.5x, bringing it to 1152) so you can get the face rendered correctly.

In a sense this is a well-known problem (i.e. anyone who has tried doing what you are doing has run into it, including myself). The least frustrating approach to resolving it is to switch to ComfyUI, so you can clearly see what your workflow is, and then use one of the i2i workflows that are posted throughout this thread. I think the bulk of these workflows is from around September of 2023, when ComfyUI went mainstream.

Here's an illustration of how i2i takes the small image and scales it up:
View attachment 3293874

Have you heard the good news? Your lord and savior, Spaghetti the Almighty, has come to earth. Now you can generate clear too... and save your thetan. Just add a this-and-that node and a handful of the other plugins, just watch out for those pesky suppressives.. :p
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
This was one of my images from the output folder:

View attachment 3293865

I was using img2img because I wanted to generate a whole bunch of variations from a single image. I'm really new to this stuff, so I'm really at the basic level :)

My graphics card is an RTX 3090, so it should be fine.

Ah, that's how I save it, thanks!
You can use ControlNet with OpenPose and simply switch the checkpoint model and change a few prompt tags for different backdrops, styles, etc. There are many ways to generate variations on the same image.
I love this "look". Very "gothic" or "black metal". I see what you mean about the eyes.
 

Lun@

Member
Dec 27, 2023
249
1,501
You can use ControlNet with OpenPose and simply switch the checkpoint model and change a few prompt tags for different backdrops, styles, etc. There are many ways to generate variations on the same image.
I love this "look". Very "gothic" or "black metal". I see what you mean about the eyes.
Yeah, I was doing batches of 20 and getting maybe 1 or 2 decent-ish ones with my limited knowledge.

Thanks! This was my favourite of them all:

6530381.png

Far from perfect but the overall look was nice :)