[Stable Diffusion] Prompt Sharing and Learning Thread

namhoang909

Newbie
Apr 22, 2017
I have some questions:
1) Is there a setting that makes A1111 not save every image, like the preview node in ComfyUI (CUI)? It's annoying to try out random prompts and then have to delete all the saved pictures.
2) I have both CUI and A1111, with all the resources on an HDD, and it takes quite some time to switch from one checkpoint to another. Should I move the installed UI files or the resources (checkpoints) to an SSD for faster switching between checkpoints?
3) Is there a custom node or something that makes using embeddings in CUI easier?
Thanks in advance.
 

Synalon

Member
Jan 31, 2022
I have some questions:
1) Is there a setting that makes A1111 not save every image, like the preview node in ComfyUI (CUI)? It's annoying to try out random prompts and then have to delete all the saved pictures.
2) I have both CUI and A1111, with all the resources on an HDD, and it takes quite some time to switch from one checkpoint to another. Should I move the installed UI files or the resources (checkpoints) to an SSD for faster switching between checkpoints?
3) Is there a custom node or something that makes using embeddings in CUI easier?
Thanks in advance.
If you want to turn off autosave and only save the images you want, you can do that in the A1111 settings: uncheck "Always save all generated images" under Settings > Saving images/grids.



If you turn off autosave you will have to save the pictures you want to keep manually, using the buttons under the image gallery.


There is a button to save a single image and a button to download them all as a zip file.

As for moving the installation to a faster drive: I have my installation on an NVMe drive, and the more checkpoints I have in the folder the slower the startup, but switching between checkpoints takes pretty much the same amount of time as it did when I had it installed on a regular SSD.

It might be worth keeping only the checkpoints you intend to use for a project in the installation folder, and keeping the others elsewhere. If you want to use one that wasn't in the folder when you started the project, you don't have to shut Automatic1111 down: just add it to the installation folder and click the checkpoint refresh button.

For CUI I have no idea how to do anything beyond setting up a basic workflow for SD1.5 and XL; I don't know how to add embeddings, loras or anything else.

Supposedly you can add custom nodes in the Manager and it will automatically find them when you load somebody else's picture, but that always fails for me.
 

devilkkw

Member
Mar 17, 2021
I have some questions:
1) Is there a setting that makes A1111 not save every image, like the preview node in ComfyUI (CUI)? It's annoying to try out random prompts and then have to delete all the saved pictures.
2) I have both CUI and A1111, with all the resources on an HDD, and it takes quite some time to switch from one checkpoint to another. Should I move the installed UI files or the resources (checkpoints) to an SSD for faster switching between checkpoints?
3) Is there a custom node or something that makes using embeddings in CUI easier?
Thanks in advance.
1) Synalon's answer covers it.
2) No, load speed is not really about the HDD; it depends on the size of the checkpoint and on how much VRAM you have. If you want fast loading I suggest using pruned models (pruned models are all around 2 GB), which speeds up loading; a minimal pruning sketch is at the end of this post. Also, CUI is faster at loading than A1111: the same checkpoint (a 10 GB model in my case) takes 18 seconds to load in A1111 and 8 seconds in CUI.
3) It lets you prompt easily, it also allows wildcards like A1111 (__wildcard__), and loading an embedding is much easier than you think: just use embedding:embeddingname. It also accepts a weight, like embedding:embeddingname:0.8.
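For example, a CUI prompt combining a wildcard with a weighted embedding might look like this (the wildcard and embedding names are placeholders, and wildcards need the custom node mentioned above):

photo of a __haircolor__ woman, embedding:skindetail:0.8, soft light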

A note on embeddings: CUI seems to work better with them. In A1111 weighting is tricky: adding an embedding at 0.5 or lower gives me poor results. Not so in CUI: weights anywhere from 0.0005 to 2.0 cause no such issues.

Also, I suggest installing the ComfyUI Manager and using it to manage and update all your custom nodes.
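On the pruning note in 2): if you want to shrink a checkpoint yourself, here is a minimal sketch, assuming a .safetensors file and the safetensors package installed. The filenames are placeholders; keep a backup of the original.

Code:
# Cast every floating-point tensor to fp16, roughly halving the file size.
from safetensors.torch import load_file, save_file

state = load_file("model.safetensors")  # original checkpoint (placeholder name)
half = {k: (v.half() if v.is_floating_point() else v) for k, v in state.items()}
save_file(half, "model-fp16.safetensors")  # smaller file, loads noticeably faster

Dedicated pruning scripts also strip EMA weights and training leftovers, but the fp16 cast alone is most of what gets a typical SD1.5 checkpoint down to around 2 GB.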
 

me3

Member
Dec 31, 2016
...
As for moving the installation to a faster drive: I have my installation on an NVMe drive, and the more checkpoints I have in the folder the slower the startup, but switching between checkpoints takes pretty much the same amount of time as it did when I had it installed on a regular SSD.
...
Unless they've improved this "mess" in recent updates, there's a cache file in A1111 that is used for model and LoRA hashes and other details. The "problem" with this file is that it doesn't really clean itself, so if you have used many different models that have since been deleted, or tested a lot of LoRA trainings with different filenames, the file gets VERY bloated and slows down startup. It could be worth checking: if you delete it, the first startup will rescan for hashes, but later ones should be faster. Again, I don't know if this has been fixed/changed recently.
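If you want to check how bloated it is before deleting anything, a few lines of Python will do, assuming the default cache.json in the webui root (the path is a placeholder, adjust to your install):

Code:
import json

# Print how many entries each section of A1111's hash cache holds.
with open("stable-diffusion-webui/cache.json") as f:
    cache = json.load(f)
for section, entries in cache.items():
    print(f"{section}: {len(entries)} entries")

Thousands of entries for models you no longer have is a good sign the file is due for deletion.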
 

me3

Member
Dec 31, 2016
Indeed, I tried changing the settings up, but nothing really gave a consistent output :(
Have you tried using the skin model as a SEGS detector and running the result through a detailer?
One pass might not be enough, so it could be worth doing a couple of cycles with decreasing noise injection and/or denoise.

This is a very rushed run that can probably be improved upon in many ways, and I don't know if it's in the ballpark of what you're looking for.
Sorry, the images are just there to show the change; the nodes are just a basic SEGS detector and detailer, and the model can be found in the ComfyUI Manager model downloads.

 

devilkkw

Member
Mar 17, 2021
Here's an interesting case that I was never able to replicate.

The most striking feature here is the skin - but it was achieved incidentally: ADetailer grabbed the entire figure as a hand and retextured it.

Does anyone know how to do this consistently?

View attachment 3273399
Have you tried my skindetail embedding? It works for giving skin a realistic look; maybe give it a chance.
I'm currently testing in CUI, because embeddings work differently in A1111 and CUI: in CUI they take weights better than in A1111, giving good results from light to heavy strength. Obviously it's prompt-dependent, so putting it in a good position in the prompt is the trick.
In most cases I use it like:
prompt, skindetail, other embeddings and loras.
 

Jimwalrus

Well-Known Member
Sep 15, 2021
Have you tried my skindetail embedding? It works for giving skin a realistic look; maybe give it a chance.
I'm currently testing in CUI, because embeddings work differently in A1111 and CUI: in CUI they take weights better than in A1111, giving good results from light to heavy strength. Obviously it's prompt-dependent, so putting it in a good position in the prompt is the trick.
In most cases I use it like:
prompt, skindetail, other embeddings and loras.
Can (recommend:1.5) - I use it all the time; in A1111 I usually weight it at 0.6-0.8.
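For anyone new to the syntax: in A1111 the weight goes in parentheses rather than CUI's embedding:name:weight form, so that would look something like this (the rest of the prompt is illustrative):

portrait photo, detailed skin, (skindetail:0.7)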
 

modine2021

Member
May 20, 2021
Anyone know how to STOP InvokeAI from scanning for models every time it starts up? Damn annoying, 'cause it takes forever. Why can't it remember a previous scan and only add new models?? Now I remember why I stopped using it :cautious:
 

namhoang909

Newbie
Apr 22, 2017
3) It lets you prompt easily, it also allows wildcards like A1111 (__wildcard__), and loading an embedding is much easier than you think: just use embedding:embeddingname. It also accepts a weight, like embedding:embeddingname:0.8.

A note on embeddings: CUI seems to work better with them. In A1111 weighting is tricky: adding an embedding at 0.5 or lower gives me poor results. Not so in CUI: weights anywhere from 0.0005 to 2.0 cause no such issues.

Also, I suggest installing the ComfyUI Manager and using it to manage and update all your custom nodes.
I am actually looking for a node that displays all available embeddings, maybe as a drop-down list. Copying and pasting filenames from Explorer windows feels tedious sometimes.
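Until someone points to a proper node, a small script can at least print every embedding in paste-ready form, assuming the default ComfyUI folder layout (the path is a placeholder):

Code:
from pathlib import Path

# List embeddings in CUI's folder, formatted for pasting straight into a prompt.
emb_dir = Path("ComfyUI/models/embeddings")  # adjust to your install
for f in sorted(emb_dir.iterdir()):
    if f.suffix in (".pt", ".safetensors", ".bin"):
        print(f"embedding:{f.stem}")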
 

me3

Member
Dec 31, 2016
Continuing the battle for animated images without using any of the extensions/nodes for it...
It's a rather small step, in more ways than one, but it's at least fairly consistent.

View attachment test.webp

The prompt for the base image is the same as in this post.
 

namhoang909

Newbie
Apr 22, 2017
Are there better hand models for ADetailer in A1111, or extensions in ComfyUI or A1111, that could fix the hands (fingers)?
And what settings would you apply to make the image more realistic or artistic (like a studio photo)?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
Are there better hand models for ADetailer in A1111, or extensions in ComfyUI or A1111, that could fix the hands (fingers)?
View attachment 3288384
And what settings would you apply to make the image more realistic or artistic (like a studio photo)?
Hands in SD are a huge issue regardless of which UI you use. ADetailer only has a couple of hand models afaik atm; I have tried both hand_yolov8n.pt and hand_yolov8s.pt. The "n" model gives better results for me; I have never had any success with the "s" model. I'm sure new and better models will come eventually. I find that I get better results fixing hands with ADetailer in img2img inpainting rather than using the ADetailer hand model while generating the image. You can also use ControlNet for the hands, though I have not tried this myself yet.
I have made posts about fixing hands.

Inpainting with ADetailer:
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12545892

Hand FIXING Controlnet - MeshGraphormer
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12608930

Fixing hands with the free image editor krita and ai plugin
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12643242


In regards to photorealism: it's a big topic. It's all about the checkpoint model and how it has been trained.
You can of course enhance it by using LoRAs and/or TIs etc.
Use negative prompts like "(((3D, Render, Animation, painting, cartoon, anime)))".
Specify the style in the positive prompt, for example "beauty photography".
Use photography terms for light and shadows, and if you want an analog look you can specify the type of camera the image was shot with, the type of film, the camera settings and so on.
Use tags for skin details etc. A combined example is just below.
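Putting those tips together, a starting point might look like this (all tags illustrative):

Positive: beauty photography, young woman, detailed skin, soft studio lighting, 85mm lens, film grain
Negative: (((3D, Render, Animation, painting, cartoon, anime)))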

Lighting:


For the best details, use as high a starting resolution as you can get away with without creating twins or monsters.
This means under 1024 for SD1.5 (e.g. 640x960), and for SDXL a resolution whose width and height add up to 2048, such as 896x1152.
(A chart of SDXL resolutions was attached here; thanks to the eminent Synalon for providing the list.)
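If the attachment doesn't load, the same list can be generated in a couple of lines of Python, assuming the width + height = 2048 rule from above and the usual 64-pixel steps:

Code:
# Print SDXL-friendly resolutions where width + height == 2048.
for w in range(640, 1408 + 1, 64):  # 64-pixel steps, portrait through landscape
    print(f"{w}x{2048 - w}")

896x1152 from the post is one of the pairs this prints.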

A few checkpoint models I would recommend trying out:
- absolutereality
- aZovyaPhotoreal
- devlishphotorealism
- icbinpICantBelieveIts
- photon
- realisticVision
There is an awesome ebook by Prompt Geek about creating photorealistic images that I have posted here before.
His video about it:


A few links if you wish to support him, though it's free:
-
-

Or use my link below:
 

me3

Member
Dec 31, 2016
Playing around with automatically masking "parts" for replacement/fixing. I ended up completely screwing up my workflow, but at least I know it works.

All the water caused issues with upscaling and detailing in the first image. Success improved drastically with the second image, since there was far less chance of things getting screwed over. I still had the base generated image for that, so it can be used to compare the changes.
And yes, both eyes are done separately and are meant to be different: one is purplish, the other is a cat eye (seed differences etc.).
coloring_0001.jpg
coloring_0002.jpg

_00195_.jpg


(Edited sidenote: when you're doing upscaling in stages... don't fuck up the math and/or forget one of the stages and end up with 16k x 16k images... all the tiles are horribly slow, especially if your graphics card sucks...)
 

Thalies

New Member
Sep 24, 2017
Hi everyone,

I'm new to the world of AI art generation and have recently started exploring Stable Diffusion. I'm particularly interested in using Fooocus.

Could anyone recommend comprehensive tutorials or guides that are particularly well-suited for beginners? I’m looking for resources that can help me understand the nuances of generating high-quality images and utilizing Fooocus to its full potential.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
Hi everyone,

I'm new to the world of AI art generation and have recently started exploring Stable Diffusion. I'm particularly interested in using Fooocus.

Could anyone recommend comprehensive tutorials or guides that are particularly well-suited for beginners? I’m looking for resources that can help me understand the nuances of generating high-quality images and utilizing Fooocus to its full potential.
On page one you can find links to helpful posts.
I think this post I did today is relevant to you.
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12775145

or just scroll up a bit... ;)
 

Mr-Fox

Well-Known Member
Jan 24, 2020
Thank you for sharing the PDF in your recent post; I find it incredibly helpful and have started to study it.
I saw that you are interested in focus. I'm not sure in what context you meant it, but I have a few tips.

You can use "focus" for the composition, for example "ass focus". This will generate images with the framing centered on the bottom midsection, from behind or slightly from the side. "Hips focus" does the same but more from the front. That's one way of using focus.

Then there are "sharp focus", "soft focus", "front focus" (speculating a little), "background focus" etc. Plain "sharp focus" means the lens focuses on the subject, mainly the face but also the body. I have not experimented much with soft focus, or with an artistic selective focus on the background that leaves the subject partially unfocused, but it's something to try.

I typically use "focus" to reinforce the composition I have in mind, in combination with "cowboy shot", "half body shot" or "full body shot". The body-part focus then guides the camera in on the part you want at the center of the frame, which can give more interesting images. Combining it with a style of photography such as "action photography", "lifestyle" or "documentary" also affects the composition.

I use "beauty photography" often, and when I want a more analog or grainy image I simply use "large format beauty photography", for example; in that case "large format" is what gives the film grain.

You can use different variants of "depth of field" to enhance or reinforce the focus you are after. I mostly use plain "depth of field", but sometimes "shallow depth of field"; I like how it makes the subject stand out from the backdrop. It can sometimes almost create a 3D effect.
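For instance, a prompt leaning on these composition tags could look like this (subject and tag mix illustrative):

large format beauty photography, full body shot, sharp focus, shallow depth of field, young woman in a summer dress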
 

Jimwalrus

Well-Known Member
Sep 15, 2021
I saw that you are interested in focus. I'm not sure in what context you meant it, but I have a few tips.

You can use "focus" for the composition, for example "ass focus". This will generate images with the framing centered on the bottom midsection, from behind or slightly from the side. "Hips focus" does the same but more from the front. That's one way of using focus.

Then there are "sharp focus", "soft focus", "front focus" (speculating a little), "background focus" etc. Plain "sharp focus" means the lens focuses on the subject, mainly the face but also the body. I have not experimented much with soft focus, or with an artistic selective focus on the background that leaves the subject partially unfocused, but it's something to try.

I typically use "focus" to reinforce the composition I have in mind, in combination with "cowboy shot", "half body shot" or "full body shot". The body-part focus then guides the camera in on the part you want at the center of the frame, which can give more interesting images. Combining it with a style of photography such as "action photography", "lifestyle" or "documentary" also affects the composition.

I use "beauty photography" often, and when I want a more analog or grainy image I simply use "large format beauty photography", for example; in that case "large format" is what gives the film grain.

You can use different variants of "depth of field" to enhance or reinforce the focus you are after. I mostly use plain "depth of field", but sometimes "shallow depth of field"; I like how it makes the subject stand out from the backdrop. It can sometimes almost create a 3D effect.
I can also recommend "bokeh", which is a photography term for a blurred background, especially one that turns points of light into vague blobs, e.g. the attached image.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
I can also recommend "bokeh", which is a photography term for a blurred background, especially one that turns points of light into vague blobs, e.g. the attached image.
Yes, I forgot to mention this. :giggle: I often use it as well as depth of field. Something to try is different types of bokeh, for example "hasselblad bokeh", "soft bokeh" or "light bokeh". Of course, you can then use weights for modulation.

Really nice and different outfit, and you got really nice hands on her too. Oh, and the girl is very cute. :love: