[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I prompted a picture using the standard Stable Diffusion checkpoint, then I changed to RealisticVision to get it to look more lifelike. At that point it freaked out and changed it into an eldritch horror instead.
It can sometimes happen when you switch over to a different checkpoint and then generate a new image right away.
What happens when you generate the next one? Is it still a monster? If so, then you need to adjust the prompt slightly to work better with the other checkpoint. They all react slightly differently to the prompt. Some are very sensitive and you need to weight the tags carefully; others are the opposite and you have to go heavy.
One thing that is fast and can do wonders is Clip skip, found under Settings/Stable Diffusion, or you can add it as a quick slider.
You can read about it in this post.
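If you want to see what Clip skip actually does outside the webui, here is a minimal sketch using the diffusers library. This is my own example, not what A1111 runs internally; the model name and values are placeholders, and diffusers' clip_skip numbering may be off by one from A1111's slider.
Code:
# Minimal Clip skip sketch with the diffusers library (assumes a recent
# diffusers version where the pipeline accepts a clip_skip argument).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Stop the CLIP text encoder a layer early. Many anime-trained checkpoints
# were trained this way; photoreal checkpoints usually prefer the default.
image = pipe("portrait photo of a woman", clip_skip=2).images[0]
image.save("clip_skip_test.png")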
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
In relation to the two images below, which I posted in https://f95zone.to/threads/ai-art-show-us-your-ai-skill.138575/post-10343577 , how can I use 'inpaint' or any other technique to correct the leg/feet and the missing arm? I can use inpaint without any issues to modify eyes, faces, hands and backgrounds, but for whatever reason I can't do the same for these. Any hints appreciated. I'm sure I'm missing something obvious.
You can do as seph suggested; this is what I usually do. If you insist on trying to fix it, you have a few options.
-Inpaint
-ControlNet
-Photoshop
I posted about fixing this very thing with inpaint a while back.
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-10146269 (Inpaint)
(Youtube tutorial to fix hands with ControlNet)
Photoshop:
I would generate a batch of images, then cut & paste the missing or broken parts in Photoshop and try to blend everything together seamlessly with layer masks and adjustment layers.
Depending on your Photoshop skills, the result could look great or like Frankenstein's monster...
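If anyone would rather script the inpaint route than click through the UI, here is a rough sketch of the same idea with the diffusers library. The file names, prompt and strength value are placeholders, not a recipe.
Code:
# Rough inpaint sketch with the diffusers library. White pixels in the mask
# get regenerated, black pixels are kept. File names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("render.png").convert("RGB").resize((512, 512))
mask = Image.open("broken_leg_mask.png").convert("RGB").resize((512, 512))

fixed = pipe(
    prompt="a woman standing, detailed legs and feet",
    image=init_image,
    mask_image=mask,
    strength=0.75,  # how far the masked area may drift from the original
).images[0]
fixed.save("render_fixed.png")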
 

bobsen

Newbie
Jul 21, 2017
86
97
It can sometimes happen when you switch over to a different checkpoint and then generate a new image right away.
What happens when you generate the next one? Is it still a monster? If so, then you need to adjust the prompt slightly to work better with the other checkpoint. They all react slightly differently to the prompt. Some are very sensitive and you need to weight the tags carefully; others are the opposite and you have to go heavy.
One thing that is fast and can do wonders is Clip skip, found under Settings/Stable Diffusion, or you can add it as a quick slider.
You can read about it in this post.
I'll try that; it's just strange. I had no problems before, and now this CLEARLY human-shaped form gets turned into some sort of flesh spawn with 18 nipples. It's quite disturbing.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Is there any 3D comparison of models? I usually check for a fast comparison, but many models are missing.

Thank you again for your patience and for the good suggestions to improve my skills.
No problem, this is what this thread is for: to collaborate and share what we've learned with each other.
I have posted a few plot script tables comparing some checkpoints; I don't know how useful they are, though.
Comparison Part 1
Comparison Part 2
Comparison Part 3
Comparison Part 4
 

devilkkw

Member
Mar 17, 2021
323
1,093
Impressive work, great comparison.
After some trying, I found a hires.fix setting that works great for me: I use 2x generation samples at 0.4 denoise. It makes a difference while keeping the image consistent, and it also fixes little errors.
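For reference, this is roughly what that recipe does under the hood, sketched with the diffusers library. The model name and sizes are just examples; A1111 would use a proper upscaler (e.g. ESRGAN) instead of plain resampling.
Code:
# Sketch of the hires.fix idea: generate small, upscale 2x, then run img2img
# over the result at 0.4 denoising strength. Everything here is an example.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "portrait photo of a woman, detailed skin"

low_res = base(prompt, width=512, height=512).images[0]
upscaled = low_res.resize((1024, 1024))  # naive 2x upscale

img2img = StableDiffusionImg2ImgPipeline(**base.components).to("cuda")
final = img2img(prompt, image=upscaled, strength=0.4).images[0]
final.save("hires_fix_sketch.png")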

I have a question about the "inpaint" model: is it automatically used when you do inpaint?
I mean, I have MODEL-A.safetensor and MODEL-A.inpaint.safetensor, but I don't know how it works.
I used inpaint at the start of A1111 with the SD 1.5 standard model.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I'll try that; it's just strange. I had no problems before, and now this CLEARLY human-shaped form gets turned into some sort of flesh spawn with 18 nipples. It's quite disturbing.
If this was your first, then you have been very lucky. Just you wait... The freak show hasn't even begun... :ROFLMAO:

 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Impressive work, great comparison.
After some trying, I found a hires.fix setting that works great for me: I use 2x generation samples at 0.4 denoise. It makes a difference while keeping the image consistent, and it also fixes little errors.

I have a question about the "inpaint" model: is it automatically used when you do inpaint?
I mean, I have MODEL-A.safetensor and MODEL-A.inpaint.safetensor, but I don't know how it works.
I used inpaint at the start of A1111 with the SD 1.5 standard model.
Inpaint is a tool within the img2img tab; it's not a checkpoint or "model". That was only in the earlier days, before it was integrated.
(img2img and inpaint basics)
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Do people turn up the GFPGAN and CodeFormer visibility? Does Restore Faces still work without it?
It depends. If you mean the Face Restoration settings: if you turn CodeFormer's visibility down to 0, select CodeFormer, save, and then go to txt2img and check the Restore Faces box (and I have no idea why anyone would do all this), then no, of course it would not work. If you mean the two sliders in the txt2img tab for the visibility of GFPGAN and of CodeFormer, those are postprocessing and separate from Restore Faces.
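For anyone curious, GFPGAN can also be run on its own with the gfpgan package; a minimal sketch follows (the weights path and file names are placeholders). The "visibility" idea is essentially just a blend between the original and the restored image.
Code:
# Standalone face restoration sketch with the gfpgan package (the same family
# of models Restore Faces can use). Paths are placeholders.
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path="GFPGANv1.4.pth", upscale=1, arch="clean", channel_multiplier=2
)
img = cv2.imread("face.png")  # BGR, as OpenCV loads it
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
# "Visibility" is basically a blend of original and restored, e.g. 0.5:
half = cv2.addWeighted(img, 0.5, restored, 0.5, 0)
cv2.imwrite("face_restored.png", half)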
 

bobsen

Newbie
Jul 21, 2017
86
97
If this was your first, then you have been very lucky. Just you wait... The freak show hasn't even begun... :ROFLMAO:

That stuff is lightweight compared to what I had to see today! xD I just wondered if it was because of the different checkpoint (the same checkpoint usually works fine), but also why the negative prompt (bad anatomy, too many nipples) didn't work, etc. Thanks anyway.
 

fr34ky

Active Member
Oct 29, 2017
812
2,189
I prompted a picture using the standard Stable Diffusion checkpoint, then I changed to RealisticVision to get it to look more lifelike. At that point it freaked out and changed it into an eldritch horror instead.
Been there, bro. When you are just starting out you get a lot of monsters that could only be created in AI nightmares; you eventually get desensitized to all that ugliness. It always helps to have 'ugly, bad anatomy, bad art' in the negative prompt box.

I know this isn't solving your issue, but it will keep your heart in its place.
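(For the diffusers users: the negative prompt box corresponds to the negative_prompt argument. A minimal sketch; the model name is just an example.)
Code:
# Negative prompt sketch with the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "full body photo of a woman on a beach",
    negative_prompt="ugly, bad anatomy, bad art",  # the same tags mentioned above
).images[0]
image.save("with_negatives.png")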
 

devilkkw

Member
Mar 17, 2021
323
1,093
Inpaint is a tool within the img2img tab; it's not a checkpoint or "model". That was only in the earlier days, before it was integrated.
(img2img and inpaint basics)
Good tutorial. I need to investigate more; I have an inpaint model alongside some checkpoints, and if I try to load it, I get an error. I don't know much about it.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Good tutorial. I need to investigate more; I have an inpaint model alongside some checkpoints, and if I try to load it, I get an error. I don't know much about it.
You can use any model. Generate an image and send it to inpaint to fix things on it; you will continue to use the same model for the entire process. Make sure that you update your Auto1111 webui by typing "cmd" in the address bar of Stable Diffusion's root directory,
then typing "git pull" and pressing Enter. If you don't have an inpaint tab, it has been a very long time since you updated.
 

fr34ky

Active Member
Oct 29, 2017
812
2,189
You can use any model. Generate an image and send it to inpaint to fix things on it; you will continue to use the same model for the entire process. Make sure that you update your Auto1111 webui by typing "cmd" in the address bar of Stable Diffusion's root directory,
then typing "git pull" and pressing Enter. If you don't have an inpaint tab, it has been a very long time since you updated.
Please note that git pull only works to update automatic1111 if you cloned the repository when you installed automatic1111 in the first place.

If you downloaded the zip and pasted the files into your Windows folder, git pull won't work. This had me confused for a long time, because my automatic1111 was not auto-updating even though I had git pull in the file.

edit: typing 'git pull' in cmd doesn't work either if you didn't clone the repository when you installed automatic1111 the first time. I think the only way is a fresh installation; maybe initializing the repository and then starting to do git pulls could work, but I wouldn't risk doing that on my current installation.

PS: this is only a technical git comment; ignore it if you are struggling to make your Stable Diffusion work for other reasons.
 

devilkkw

Member
Mar 17, 2021
323
1,093
I have everything up to date and I have all the tabs; what I don't know is literally what that ".inpaint.safetensor" is and how to use it.

Those nipples :ROFLMAO:
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I have everything up to date and I have all the tabs; what I don't know is literally what that ".inpaint.safetensor" is and how to use it.

Those nipples :ROFLMAO:
OK. Well, watch the video by . He goes through the basics of inpaint later in the video, and he also has more videos about inpaint. A different YouTuber who posts many tutorial videos is ; he also has videos about inpainting, I'm sure.
 

miaouxtoo

Newbie
Mar 11, 2023
46
132
I'd read that inpainting doesn't always work out well because you ideally need an inpaint checkpoint model?
I know there's an sd-1.5 inpaint checkpoint for this kind of purpose, based on the default 1.5 model.

I think there are ways (I saw a thread on reddit) where you can use the 'combine checkpoint' tab/function (Checkpoint Merger) to correctly generate an inpainting model: sd-1.5 inpaint + specific model = specific-model inpaint checkpoint.

Can anyone verify this is needed?
 

miaouxtoo

Newbie
Mar 11, 2023
46
132
Mr-Fox

As per your request on the AI posting thread. (Sorry for the spam.)
I think automatic1111 now comes with ControlNet by default.

For all new beginners: it allows you to give SD a better idea of what pose and structure you want in your generation. Basically, it's an addition/reinforcement to the txt2img text prompt when you use it. You just drop an image (hopefully in the right aspect ratio) into the ControlNet section of your txt2img page, then adjust some of the sliders for rigidity/freedom and for what percentage of the sampling you want ControlNet to act on.

It's fun to use on actual porn, manga, real-life photos etc. to see what you get. I can imagine a day when all old anime (Ghibli!) will be rebuilt at SD quality with one click, or old movies resampled entirely, or faces/deepfakes etc. It's amazing.

You'll still need to download the standard ControlNet models on top of the preprocessor. The files go into:
stable-diffusion-webui -> extensions -> sd-webui-controlnet -> models


Then you can go from the b/w panel to something like the elves (example images were attached here).

And you can use different models ofc on the same controlnet.

Or different variations; I used a photo to generate these ones (example images were attached here).
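If you prefer scripting, here is the same idea sketched with the diffusers library, following the preprocessing pattern from its docs. The ControlNet model name is the public lllyasviel canny one; file names and prompt are placeholders.
Code:
# ControlNet (canny) sketch with the diffusers library. File names and the
# prompt are placeholders.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Preprocessor step: turn the reference picture into a canny edge map.
ref = np.array(Image.open("bw_panel.png").convert("RGB"))
edges = cv2.Canny(ref, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    "two elves in a forest, detailed illustration",
    image=control,
    controlnet_conditioning_scale=0.8,  # roughly the extension's weight slider
).images[0]
image.save("controlnet_elves.png")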

Enjoy!
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I'd read that inpainting doesn't always work out well because you ideally need an inpaint checkpoint model?
I know there's an sd-1.5 inpaint checkpoint for this kind of purpose, based on the default 1.5 model.

I think there are ways (I saw a thread on reddit) where you can use the 'combine checkpoint' tab/function (Checkpoint Merger) to correctly generate an inpainting model: sd-1.5 inpaint + specific model = specific-model inpaint checkpoint.

Can anyone verify this is needed?
There are indeed inpaint-specific checkpoints and even LoRAs.

I saw them at the beginning of my AI exploration, but I later assumed they weren't necessary because we got the inpaint tab.
You might be correct. It's worth some testing.
 

miaouxtoo

Newbie
Mar 11, 2023
46
132
There are indeed inpaint-specific checkpoints and even LoRAs.
This is the inpaint checkpoint model for anything-v3; presumably it's good at filling in those anime faces where the sd-1.5 model would want to fill in a real person.



And here's the for how to inpaint any model.
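As far as I understand, the recipe in that link is an "add difference" merge: custom-inpaint = inpainting + (custom - base), which is what A1111's Checkpoint Merger does with "Add difference" at multiplier 1. A rough sketch of the math; all file names are placeholders.
Code:
# "Add difference" merge sketch: inpaint + (custom - base).
# File names are placeholders; the Checkpoint Merger tab does this for you.
from safetensors.torch import load_file, save_file

base = load_file("v1-5-pruned-emaonly.safetensors")
inpaint = load_file("sd-v1-5-inpainting.safetensors")
custom = load_file("model-a.safetensors")

merged = {}
for key, t in inpaint.items():
    if key in custom and key in base and custom[key].shape == t.shape:
        merged[key] = (t.float() + custom[key].float() - base[key].float()).half()
    else:
        # e.g. the inpainting UNet's 9-channel input conv has no counterpart
        merged[key] = t
save_file(merged, "model-a-inpainting.safetensors")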
 

devilkkw

Member
Mar 17, 2021
323
1,093
All useful posts; now I understand. I also made my custom inpaint model.
Most important: when you merge, don't make any yaml file, because if you do, the model is not loaded correctly and you get an error about tensor size.

Also, after doing the process linked by miaouxtoo, I had better results doing another step with the new inpainting model:
do a weighted sum at 1 with the model you used for obtaining the inpainted one.
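If it helps anyone reproduce that step outside the UI, a weighted sum is just a linear blend. A rough sketch; file names are placeholders, and note that at multiplier 1 the matching weights come entirely from model B.
Code:
# Weighted sum merge sketch: result = (1 - m) * A + m * B.
from safetensors.torch import load_file, save_file

a = load_file("model-a-inpainting.safetensors")  # A: the new inpainting merge
b = load_file("model-a.safetensors")             # B: the original custom model
m = 1.0  # the multiplier reported above; at 1.0, matching keys come from B

merged = {
    key: ((1 - m) * t.float() + m * b[key].float()).half()
    if key in b and b[key].shape == t.shape else t
    for key, t in a.items()
}
save_file(merged, "model-a-inpainting-v2.safetensors")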