> Very good idea. I'll leave that to ronimikael to figure out, though, as it's his project. I hope we've all given him the guidance he was looking for. I've got to say I love how awesome and responsive this thread has become. Everyone is being very helpful and has good tips and ideas.

To put a stop to all the helpfulness...
> To put a stop to all the helpfulness...

Good one... and a gorgeous lady. However, it's against the thread rules to hide the prompt...
As I was cleaning out old images, this one popped out. [image attachment]
(edited for prompt, forgot it was an upscale) [prompt in spoiler]
> Good one... and a gorgeous lady. However, it's against the thread rules to hide the prompt...

Her "prompt" starts with "look into my eyes..." and you're beyond caring about the rest.
Time taken: 22m 9.91s | Torch active/reserved: 1100/1302 MiB, Sys VRAM: 1995/1998 MiB (99.85%)
> For those wondering how the time is taken up when generating, keep an eye on the cmd window where the Python scripts are actually running - that will let you know how many steps you are through the process, how long each step is taking, etc.
> It also helps you make the decision whether to skip / interrupt a mediocre image / batch, or just let it finish.

Yes, most of the time I look at it rather than the progress bar. Exactly this. Since I have an old, weakish card, I often interrupt the generation when it's clear SD messed it up.
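If you'd rather not babysit the cmd window, the webui also exposes those numbers over its API. A minimal sketch, assuming the webui was started with the --api flag on the default 127.0.0.1:7860 address - the endpoint names are the webui's own, but the polling/abort logic is just an illustration:

```python
import time
import requests

BASE = "http://127.0.0.1:7860"  # default webui address (assumption)

def watch(poll_seconds: float = 2.0, abort_after: float = 300.0) -> None:
    """Print step progress; interrupt the run if it drags past abort_after."""
    start = time.time()
    while True:
        r = requests.get(f"{BASE}/sdapi/v1/progress", timeout=10).json()
        state = r.get("state") or {}
        step = state.get("sampling_step", 0)
        steps = state.get("sampling_steps", 0)
        if steps == 0:
            print("nothing running (or just finished)")
            break
        print(f"{r.get('progress', 0.0):5.1%}  step {step}/{steps}  "
              f"eta ~{r.get('eta_relative', 0.0):.0f}s")
        if time.time() - start > abort_after:
            # same effect as pressing the Interrupt button in the UI
            requests.post(f"{BASE}/sdapi/v1/interrupt", timeout=10)
            print("interrupted - not worth waiting for")
            break
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```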
> And this is something I seem to keep pointing out - people aren't 1:1 aspect ratio. If you want a standing person, set your initial image width to 512 pixels and go for a height that's some multiple of 64 pixels above that.
> Try 512x960, upscaled by 2x (to 1024x1920), 40 hires steps, denoising of 0.2.
> Please post your results, she's very nice!

This is what I got with these settings. Background and hair changed.
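For anyone scripting this, here is roughly what those quoted settings look like as a txt2img API call - a sketch assuming the webui runs with --api; the prompt and the upscaler name are placeholders you'd swap for your own:

```python
import base64
import requests

payload = {
    "prompt": "your prompt here",  # placeholder
    "width": 512,                  # people aren't 1:1 - portrait base
    "height": 960,                 # a multiple of 64 above the width
    "steps": 30,
    "enable_hr": True,             # hires fix
    "hr_scale": 2,                 # 512x960 -> 1024x1920
    "hr_second_pass_steps": 40,    # hires steps
    "denoising_strength": 0.2,
    "hr_upscaler": "Lanczos",      # or the name of your NMKD Superscale model
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
                  json=payload, timeout=3600)
with open("out.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```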
> This is what I got with these settings. Background and hair changed.
> [image attachments]
> Denoising strength 0.2
> [image attachment]

I'm sorry, there appear to be some words on Ronimikael's post as well, but I can't for the life of me remember anything about them...
> BTW, my SD is borked. The Locon extension is apparently deprecated, and I updated to SD WebUI 1.3.0, leaving me with a message of "TypeError: stat: path should be string, bytes, os.PathLike or integer, not LoraOnDisk" every time I try to load a LoRA. They have no effect on the image now.
> It seems that the LyCORIS extension has taken over, as it can handle LoRAs better than the Locon extension.
> Unfortunately I can't get it to work properly, and I'm looking at a clean reinstall...
> May be out of action for a day or two.

I've tried all the fixes on Git & Reddit, so it does seem like a 'Nuke It From Orbit' approach is the only one left to me.
> I've tried all the fixes on Git & Reddit, so it does seem like a 'Nuke It From Orbit' approach is the only one left to me.
> Fortunately I've enough disk space to store all the checkpoint / LoRA etc. files, so I can drop them back in once done.

Can't you just do a git clone? Also, if it's an update that's causing the problems, you can revert to an earlier version that worked for you or that you liked better.
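If you want to try the rollback route before nuking everything, it's a couple of git commands - sketched here from Python for consistency with the rest of the thread. The tag v1.2.1 is just an example of a pre-1.3.0 release; pick whichever version last worked for you:

```python
import subprocess

WEBUI = "stable-diffusion-webui"  # path to your existing clone (assumption)
TAG = "v1.2.1"                    # example pre-1.3.0 release tag; pick your own

# fetch the release tags, then pin the working tree to the chosen one
subprocess.run(["git", "fetch", "--tags"], cwd=WEBUI, check=True)
subprocess.run(["git", "checkout", TAG], cwd=WEBUI, check=True)
```

Since the models and extensions folders are untracked, a checkout shouldn't touch your checkpoints or LoRAs - though backing them up first, as above, is still the safe play.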
> This is what I got with these settings. Background and hair changed.
> [image attachments]
> Denoising strength 0.2
> [image attachment]

Really hot stuff. NMKD Superscale is much better than Lanczos, so don't switch. The different sampling methods also give different results, so try others as well; I recommend trying DPM++ 2M Karras and DPM++ SDE Karras. I always use postprocessing too - you can find it in the Settings tab under Postprocessing. Select both options so they show up in the txt2img tab. I recommend GFPGAN. In Settings / Face restoration you can select the face restoration model; I like GFPGAN there too - yes, same name. Try both.
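To compare samplers without clicking through the UI, something like this works - a sketch assuming the API is enabled; the sampler names are spelled as in the UI dropdown, the prompt is a placeholder, and restore_faces applies whichever face restoration model you picked in Settings:

```python
import base64
import requests

BASE = "http://127.0.0.1:7860"
SAMPLERS = ["DPM++ 2M Karras", "DPM++ SDE Karras", "Euler a"]

for name in SAMPLERS:
    payload = {
        "prompt": "your prompt here",  # placeholder
        "seed": 1234,                  # fixed seed so only the sampler changes
        "steps": 30,
        "sampler_name": name,
        "restore_faces": True,         # uses the model set under Face restoration
        "width": 512,
        "height": 768,
    }
    r = requests.post(f"{BASE}/sdapi/v1/txt2img", json=payload, timeout=3600)
    fname = name.replace("+", "p").replace(" ", "_") + ".png"
    with open(fname, "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
```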
> If you've got 12GB VRAM you should have been able to generate much larger images than that before. I could create images adding up to 2k (width + height) on 6GB, and can get to almost 1.5k on just 2GB.
> Something must have kept you from using all the VRAM before.

I'm on 6GB, but the new driver seems to use dynamic memory, and I reach 12GB of PyTorch reserved memory.
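For anyone who wants to check what their card is actually doing, PyTorch will report the same active/reserved numbers the webui prints (see the stats line earlier in the thread), no webui required - a quick sketch:

```python
import torch

def vram_report(device: int = 0) -> None:
    mib = 1024 ** 2
    total = torch.cuda.get_device_properties(device).total_memory
    active = torch.cuda.memory_allocated(device)   # tensors currently in use
    reserved = torch.cuda.memory_reserved(device)  # held by the caching allocator
    print(f"Torch active/reserved: {active // mib}/{reserved // mib} MiB, "
          f"card total: {total // mib} MiB")
    # If "reserved" reads far above the card's total (e.g. 12GB on a 6GB card,
    # as reported above), the driver is spilling into system RAM - that avoids
    # OOM errors but is much slower than staying in VRAM.

if __name__ == "__main__":
    vram_report()
```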
> It doesn't sound like an issue but rather a positive if you can reach higher resolution. You are talking about seconds while some people are sitting here having to wait 30 minutes... What you describe as "pushing into view" is probably just the hires steps finishing. If you can reach a higher resolution, it will of course take longer and it will require more VRAM.
> If everything is working and you get nice images, I would just let it be. It wasn't clear to me what you perceive as an issue.
> Just my opinion.

I've done many tests before switching back drivers. The problem comes when you go above 1000px.
> A lot of it depends on the model you use. Here's how the exact same prompt and seed look on a variety of my models:
> [image attachment]
> As you can see, the models seem to interpret it very differently - I have no idea why Stylejourney wants to give her the boobs of a small giantess.
> Changing it to 512x768 has the following outcome:
> [image attachment]
> And swapping them to 768x512 changes the focus to close in:
> [image attachment]
> Again, impressive from Stylejourney, showing us the Triple Breasted Whore of Eroticon 7 (showing my age here).
> And finally, if I go to my normal for landscape, which is 960x540, I start getting the issues with multiple people. You would then need to start going into the prompt and changing things - for instance, try solo:1.2 or 1girl, or both.
> [image attachment]

Care to share those models?
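A comparison like the quoted one is easy to automate if the API is enabled - a sketch where the prompt is a placeholder and the checkpoint titles come from the webui itself via /sdapi/v1/sd-models:

```python
import base64
import requests

BASE = "http://127.0.0.1:7860"

# ask the webui which checkpoints it can see, then compare the first few
models = [m["title"] for m in requests.get(f"{BASE}/sdapi/v1/sd-models").json()]

for title in models[:4]:
    payload = {
        "prompt": "your prompt here",  # placeholder
        "seed": 1234,                  # fixed seed isolates the model's effect
        "width": 512,
        "height": 768,                 # 512x768 kept one subject in frame above
        "override_settings": {"sd_model_checkpoint": title},
    }
    r = requests.post(f"{BASE}/sdapi/v1/txt2img", json=payload, timeout=3600)
    safe = title.split(" ")[0].replace("/", "_").replace("\\", "_")
    with open(f"cmp_{safe}.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
```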
> I'm on 6GB, but the new driver seems to use dynamic memory, and I reach 12GB of PyTorch reserved memory.
> For going above 1152x896 I have to use [link], and I reach 2048x2048 without problems.
> With the new driver I don't need it - it seems Nvidia is working on this - but for me 1 minute to get an image at the same resolution (1152x896) is too long, so I switched back drivers and now I'm at 11 seconds.
> I've done many tests before switching back drivers. The problem comes when you go above 1000px.
> I don't use any hires steps, only standard generation. Maybe I'll be happy with the next driver's improvements, because reaching high resolution on 6GB is really great. So I hope the next update works better.

With tiled diffusion you can go far beyond 2048, as it pretty much only takes your VRAM into account for the tile size. I haven't bothered with more than around 6k, as it was starting to take quite a long time, and I'm sure there is some kind of upper limit, but it should be well within any actual need.
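To see why tiling sidesteps the VRAM wall, here's a back-of-envelope sketch - the 768px tile and 96px overlap are made-up illustration values, not the extension's defaults. The tile count grows with the canvas, but each denoising pass only ever holds one tile:

```python
def tile_grid(width: int, height: int, tile: int = 768, overlap: int = 96):
    """Count the tiles needed to cover a canvas with the given overlap."""
    stride = tile - overlap
    cols = max(1, -(-(width - overlap) // stride))   # ceiling division
    rows = max(1, -(-(height - overlap) // stride))
    return cols, rows, cols * rows

for w, h in [(1024, 1024), (2048, 2048), (6144, 6144)]:
    cols, rows, n = tile_grid(w, h)
    print(f"{w}x{h}: {cols}x{rows} = {n} tiles of 768px each (constant VRAM)")
```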
> Care to share those models?

All of them are on Civitai - just search and they should show up.
> With tiled diffusion you can go far beyond 2048, as it pretty much only takes your VRAM into account for the tile size. I haven't bothered with more than around 6k, as it was starting to take quite a long time, and I'm sure there is some kind of upper limit, but it should be well within any actual need.
> If it's taking advantage of RAM to make up for the lack of VRAM, I guess it solves OOM issues, but the loading and offloading from one to the other would probably cause slowdowns. If you're generating images at >1000px without highres, you're already bordering badly on issues, considering the base resolution is either 512 for anything SD1, or at best 768 for SD2, which isn't that much used.

A small note: I used 768 for training my LoRA with kohya_ss. 768 is used for SD1.5 as well, though I haven't trained a full checkpoint, so I'm not completely sure about that.
> With tiled diffusion you can go far beyond 2048, as it pretty much only takes your VRAM into account for the tile size. [...]

Yes, I know - the SD1.5 model is trained on 512x512 images by default, and SD 2.* on 768x768, but that's the training image size. I start getting bad results on SD1.5 going above 1980x1080, but it seems related to aspect ratio - and by bad results I mean the subject getting doubled when you work on a subject; if I work on a landscape or an interior, the result is good. All without hires fix, and with a lot of block-weighted merging of my model.
> A small note: I used 768 for training my LoRA with kohya_ss. 768 is used for SD1.5 as well, though I haven't trained a full checkpoint, so I'm not completely sure about that.

You can train on it, yes (not sure if there's something going on in the background that would force it to downscale, though), but the core is still just 512, meaning anything you didn't specifically train will still be that. Same with every other checkpoint/model using 1.5 as its base, of which there are a lot.
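For the curious, here is roughly what resolution bucketing around a 768 base looks like - a toy sketch, not kohya's actual bucketing code. The step of 64 matches the sizing advice earlier in the thread; the pixel budget is an assumption for illustration:

```python
def buckets(base: int = 768, step: int = 64, lo: int = 512, hi: int = 1024):
    """List width x height pairs, in steps of 64, near the base pixel budget."""
    budget = base * base
    out = []
    for w in range(lo, hi + 1, step):
        h = (budget // w) // step * step  # tallest multiple of 64 within budget
        if lo <= h <= hi:
            out.append((w, h))
    return out

for w, h in buckets():
    print(f"{w}x{h}  ({w * h / (768 * 768):.0%} of the 768x768 pixel budget)")
```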