[Stable Diffusion] Prompt Sharing and Learning Thread

Jimwalrus

Active Member
Sep 15, 2021
931
3,427
BTW, my SD is borked. The Locon extension is apparently deprecated, and after updating to SDWebui 1.3.0 I get "TypeError: stat: path should be string, bytes, os.PathLike or integer, not LoraOnDisk" every time I try to load a LoRA. LoRAs now have no effect on the image.

It seems that the Lycoris extension has taken over as it can handle LoRAs better than the Locon extension.
Unfortunately I can't get it to work properly and I'm looking at a clean reinstall...
May be out of action for a day or two.
:cry:
 
  • Sad
Reactions: Mr-Fox and Sepheyer

Jimwalrus

Active Member
Sep 15, 2021
931
3,427
BTW, my SD is borked. The Locon extension is apparently deprecated, and after updating to SDWebui 1.3.0 I get "TypeError: stat: path should be string, bytes, os.PathLike or integer, not LoraOnDisk" every time I try to load a LoRA. LoRAs now have no effect on the image.

It seems that the Lycoris extension has taken over as it can handle LoRAs better than the Locon extension.
Unfortunately I can't get it to work properly and I'm looking at a clean reinstall...
May be out of action for a day or two.
:cry:
I've tried all the fixes on GitHub & Reddit, so it does seem like a 'Nuke It From Orbit' approach is the only one left to me.

Fortunately I've enough disk space to store all the checkpoint / LoRA etc. files, so I can drop them back in once done.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
I've tried all the fixes on GitHub & Reddit, so it does seem like a 'Nuke It From Orbit' approach is the only one left to me.

Fortunately I've enough disk space to store all the checkpoint / LoRA etc. files, so I can drop them back in once done.
Can't you just do a git clone? Also, if it's an update that's causing the problems, you can revert to an earlier version that worked for you or that you liked better.
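If it comes to that, rolling back doesn't even need a reinstall. From memory it's something along these lines from inside the webui folder (the tag name is only an example, check what git tag actually lists):

cd stable-diffusion-webui
git fetch --all --tags
git tag                  # list the release tags available to roll back to
git checkout v1.2.1      # example: pin to an older release
# later, to return to the latest version:
git checkout master
git pull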

 
  • Thinking Face
Reactions: Jimwalrus

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
This is what I got with these settings. Background and hair changed.
View attachment 2666959 View attachment 2666962

Denoising strength 0.2
View attachment 2666978
Really hot stuff. :love: NMKD Superscale is much better than Lanczos, so don't switch. The different sampling methods also give different results, so try others as well; I recommend DPM++ 2M Karras and DPM++ SDE Karras. I always use postprocessing too; you can find it in the Settings tab under Postprocessing. Select both options so they show up in the txt2img tab. I recommend GFPGAN. In Settings > Face restoration you can select the face restoration model. I like GFPGAN there as well, yes, same name. Try both.

CodeFormer has a weight slider, but it's inverted for some reason, 0 being maximum and 1 minimum. CFG scale is something to tweak if changing the prompt doesn't give the desired result, or the prompt isn't followed closely enough. CFG (Classifier-Free Guidance) scale is the setting that controls how closely Stable Diffusion follows the prompt.

Denoising strength, I feel, is misunderstood by many. It's the setting that decides how much you allow the image to change; in txt2img it's only used with hires fix. I view it as the budget of pixels you give SD to change the image: select it too low and not much will change between the sampling steps and the hires steps, so the quality will suffer; set it too high and you lose more of the composition.

I recommend around 0.3; you can go a bit lower or a bit higher, adjusting a tenth at a time. 0.1 is way too low imo, and even 0.2 might be too low. I would recommend 0.24-0.34. I haven't seen it being related to the genre, as Jim said. I could of course be wrong, but denoising strength doesn't care what kind of image you are generating, only how large a budget of pixels you allow to be changed.
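The same knobs exist outside the webui too, if anyone wants to experiment with them in isolation. A minimal sketch using the diffusers library (untested; the file names are placeholders, and strength / guidance_scale correspond to denoising strength and CFG scale):

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # standard SD1.5 weights
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("base_render.png").convert("RGB")  # placeholder input image

result = pipe(
    prompt="portrait photo of a woman, detailed hair, studio lighting",
    image=init,
    strength=0.3,        # denoising strength: the pixel budget you allow to change
    guidance_scale=7.5,  # CFG scale: how closely SD follows the prompt
).images[0]
result.save("refined.png")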
 

devilkkw

Member
Mar 17, 2021
308
1,053
If you've got 12GB of VRAM you should have been able to generate much larger images than that before. I could create images adding up to 2k (width+height) on 6GB, and get to almost 1.5k on just 2GB.
Something must have kept you from using all the VRAM before.
I'm on 6GB, but the new driver seems to use dynamic memory, and I reach 12GB of PyTorch reserved memory.
To go above 1152x896 I have to use , and then I can reach 2048x2048 without problems.
With the new driver I don't need it, so it seems Nvidia is working on this, but for me 1 minute to get an image at the same resolution (1152x896) is too slow, so I switched back to the old driver and now I'm at 11 seconds.

It doesn't sound like an issue but rather a positive if you can reach a higher resolution. :D You are talking about seconds while some people here are having to wait 30 minutes... :oops: What you describe as "pushing into view" is probably just the hires steps finishing. If you can reach a higher resolution it will of course take longer and require more VRAM.
If everything is working and you get nice images, I would just let it be. It wasn't clear to me what you perceive as the issue.
Just my opinion. ;)
I did many tests before switching back drivers. The problem comes when you go above 1000px.
I don't use any hires steps, only standard generation. Maybe the next driver will make me happy, because reaching high resolution on 6GB is really great. So I hope the next update works better.
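If you want to confirm what the driver is doing, PyTorch can report its own numbers. A quick check you can run in the webui's Python environment (standard torch.cuda calls; note that with the sysmem-fallback driver the reserved figure can exceed your physical VRAM):

import torch

gb = 1024 ** 3
print(f"{torch.cuda.memory_allocated() / gb:.2f} GB allocated by tensors")
print(f"{torch.cuda.memory_reserved() / gb:.2f} GB reserved by the allocator")
print(f"{torch.cuda.max_memory_reserved() / gb:.2f} GB peak reserved this session")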
 
  • Like
Reactions: Mr-Fox and Sepheyer

Halmes

Newbie
May 22, 2017
18
6
A lot of it depends on the model you use. Here's how the exact same prompt and seed look on a variety of my models
View attachment 2665042

As you can see, the models seem to interpret it very differently. I have no idea why Stylejourney wants to give her the boobs of a small giantess.

Changing it to 512x768 has the following outcome

View attachment 2665046

And swapping them to 768x512 changes the focus to a close-in view

View attachment 2665050

Again, impressive from Stylejourney showing us the Triple Breasted Whore of Eroticon 7 (showing my age here)

And finally, if I go to my normal landscape resolution, which is 960x540, I start getting the issues with multiple people. You would then need to start going into the prompt and changing things; for instance, try solo:1.2 or 1girl, or both

View attachment 2665060
Care to share those models?
 

me3

Member
Dec 31, 2016
316
708
I'm on 6GB, but the new driver seems to use dynamic memory, and I reach 12GB of PyTorch reserved memory.
To go above 1152x896 I have to use , and then I can reach 2048x2048 without problems.
With the new driver I don't need it, so it seems Nvidia is working on this, but for me 1 minute to get an image at the same resolution (1152x896) is too slow, so I switched back to the old driver and now I'm at 11 seconds.


I did many tests before switching back drivers. The problem comes when you go above 1000px.
I don't use any hires steps, only standard generation. Maybe the next driver will make me happy, because reaching high resolution on 6GB is really great. So I hope the next update works better.
With Tiled Diffusion you can go far beyond 2048, since your VRAM pretty much only factors into the tile size. I haven't bothered with more than around 6k, as it was starting to take quite a long time, and I'm sure there's some kind of upper limit, but it should be well beyond any actual need.

If the driver is using RAM to make up for the lack of VRAM, I guess that solves OOM issues, but the loading and offloading between the two would probably cause slowdowns. If you're generating images at >1000px without hires fix, you're already bordering badly on issues, considering the base resolution is 512 for anything SD1-based and at best 768 for SD2, which isn't used that much.
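For anyone wondering why it's the tile size rather than the final resolution that VRAM limits, the loop boils down to something like this. A toy illustration only, not the extension's actual code; the 512/64 defaults and the file name are made up:

from PIL import Image

def enhance(tile):
    # Stand-in for one diffusion/upscale pass on a single tile;
    # in Tiled Diffusion this is where the model actually runs.
    return tile

def process_in_tiles(img, tile=512, overlap=64):
    # VRAM only ever has to hold one tile at a time, so the full image
    # can be far larger than what fits on the card in a single pass.
    # (Real implementations also blend the overlapping seams.)
    out = img.copy()
    step = tile - overlap
    for y in range(0, img.height, step):
        for x in range(0, img.width, step):
            box = (x, y, min(x + tile, img.width), min(y + tile, img.height))
            out.paste(enhance(img.crop(box)), (x, y))
    return out

result = process_in_tiles(Image.open("big_landscape.png"))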
 
  • Like
Reactions: Mr-Fox and Sepheyer

me3

Member
Dec 31, 2016
316
708
Care to share those models?
All of them are on Civitai; just search and they should show up.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
With Tiled Diffusion you can go far beyond 2048, since your VRAM pretty much only factors into the tile size. I haven't bothered with more than around 6k, as it was starting to take quite a long time, and I'm sure there's some kind of upper limit, but it should be well beyond any actual need.

If the driver is using RAM to make up for the lack of VRAM, I guess that solves OOM issues, but the loading and offloading between the two would probably cause slowdowns. If you're generating images at >1000px without hires fix, you're already bordering badly on issues, considering the base resolution is 512 for anything SD1-based and at best 768 for SD2, which isn't used that much.
A small note: I used 768 for training my LoRA with Kohya SS. 768 is used for SD1.5 as well, though I haven't trained a full checkpoint, so I'm not completely sure about that.
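For anyone curious, the training resolution is just an argument to Kohya's script, roughly like this (paths are placeholders and the exact flag set depends on your kohya_ss version, so treat it as a sketch):

accelerate launch train_network.py \
  --pretrained_model_name_or_path="/path/to/sd15_model.safetensors" \
  --train_data_dir="/path/to/train_images" \
  --output_dir="/path/to/output" \
  --resolution="768,768" \
  --enable_bucket \
  --network_module=networks.lora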
 

devilkkw

Member
Mar 17, 2021
308
1,053
With Tiled Diffusion you can go far beyond 2048, since your VRAM pretty much only factors into the tile size. I haven't bothered with more than around 6k, as it was starting to take quite a long time, and I'm sure there's some kind of upper limit, but it should be well beyond any actual need.

If the driver is using RAM to make up for the lack of VRAM, I guess that solves OOM issues, but the loading and offloading between the two would probably cause slowdowns. If you're generating images at >1000px without hires fix, you're already bordering badly on issues, considering the base resolution is 512 for anything SD1-based and at best 768 for SD2, which isn't used that much.
Yes, I know. The SD1.5 base model is trained on 512x512 images and SD 2.* on 768x768, but those are the training images. I start getting bad results on SD1.5 going above 1920x1080, though it seems related to aspect ratio: by bad results I mean the subject getting doubled when I work on a subject, while landscapes or interiors still come out well. All without hires fix, and with a lot of block-weighted merging of my model.
I don't think anyone actually uses the standard SD1.5 for generation anyway.
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
A small note: I used 768 for training my LoRA with Kohya SS. 768 is used for SD1.5 as well, though I haven't trained a full checkpoint, so I'm not completely sure about that.
You can train on it, yes (not sure if there's something going on in the background that would force it to downscale, though), but the core is still just 512, meaning anything you didn't specifically train will still be that. Same with every other checkpoint/model using 1.5 as its base, of which there are a lot.
So in theory the AI would know a "768 person" but "512 clothing, background" etc. I'm sure there's some logic dealing with it, but offhand it does look like potential for some "oddities".
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
I've tried all the fixes on GitHub & Reddit, so it does seem like a 'Nuke It From Orbit' approach is the only one left to me.

Fortunately I've enough disk space to store all the checkpoint / LoRA etc. files, so I can drop them back in once done.
There's a 1.3.1 and it has some "fixes" relating to LoRAs. No idea if they will have any impact for you, but if it's already broken you can't really make it that much worse, right? :p
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Yes, I know. The SD1.5 base model is trained on 512x512 images and SD 2.* on 768x768, but those are the training images. I start getting bad results on SD1.5 going above 1920x1080, though it seems related to aspect ratio: by bad results I mean the subject getting doubled when I work on a subject, while landscapes or interiors still come out well. All without hires fix, and with a lot of block-weighted merging of my model.
I don't think anyone actually uses the standard SD1.5 for generation anyway.
To avoid multiple girls, try (1girl:1.5) in the positive prompt, and (more than one girl:1.5) and (multiple girls:1.5) in the negative. And to avoid getting only lying-down poses in a landscape-format image, use (standing:1.5) etc.
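Spelled out with the webui's attention syntax, that would look something like this (the weights are just starting points, adjust to taste):

Positive: 1girl, (solo:1.2), (standing:1.5), full body, outdoors
Negative: (more than one girl:1.5), (multiple girls:1.5), lying down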
 
  • Like
Reactions: devilkkw

Jimwalrus

Active Member
Sep 15, 2021
931
3,427
There's a 1.3.1 and it has some "fixes" relating to LoRAs. No idea if they will have any impact for you, but if it's already broken you can't really make it that much worse, right? :p
Did a git pull today and the latest version is already at 1.3.2
The Lycoris tab is separated out from the LoRA tab, and LoRAs work now.
Looks like all I need to do is work out which of the 400+ models in the LoRA folder are actually Lycoris and move them.
Still waaayy preferable to a clean reinstall!
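If sorting 400+ files by hand gets old, a rough first pass is to peek at the tensor names inside each file. A sketch, assuming LyCORIS LoHa/LoKr files contain "hada_" / "lokr_" keys, which I believe plain LoRAs don't; it won't catch LoCon files, which look like ordinary LoRAs with extra conv layers, so verify before moving anything:

import glob
from safetensors import safe_open

# Heuristic: LyCORIS LoHa/LoKr weights use key names plain LoRAs don't.
LYCO_MARKERS = ("hada_", "lokr_")

for path in glob.glob("models/Lora/*.safetensors"):
    with safe_open(path, framework="pt", device="cpu") as f:
        keys = list(f.keys())
    if any(marker in key for key in keys for marker in LYCO_MARKERS):
        print("looks like LyCORIS:", path)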
 
  • Yay, new update!
  • Like
Reactions: Mr-Fox and Sepheyer

Jimwalrus

Active Member
Sep 15, 2021
931
3,427
Did a git pull today and the latest version is already at 1.3.2
The Lycoris tab is separated out from the LoRA tab, and LoRAs work now.
Looks like all I need to do is work out which of the 400+ models in the LoRA folder are actually Lycoris and move them.
Still waaayy preferable to a clean reinstall!
Thank fuck that's done, Clone Airways is ready to take to the ((((friendly)))) skies once more...
00001-1421511863.png