I'd like to have a more flexible way of creating, that's why I am asking. With MJ I can just imagine whatever I want, specify it with 40 prompts and get very precise results, from anime to photorealistic, in no time. Here, with that downloaded checkpoint, all generations look pretty close to this original girl.

Don't worry about merging checkpoints yet, not before you are really familiar and proficient with SD. Instead, download other people's merged checkpoints on civitai for now.
You need this VAE as well. Forgot to tell you, sorry.

So the checkpoints are basically add-ons, and the LoRAs are something like "prompt collections"?
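For a rough picture of how those three pieces relate, here is a minimal sketch using the diffusers library, assuming a recent version (all file names are placeholders): the checkpoint is the full model, a VAE only swaps out the image decoder, and a LoRA is a small weight patch rather than a prompt collection.

```python
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

# The checkpoint is the whole model (UNet + text encoder + VAE).
pipe = StableDiffusionPipeline.from_single_file(
    "some_merged_checkpoint.safetensors",  # placeholder file name
    torch_dtype=torch.float16,
).to("cuda")

# A VAE replaces only the image encoder/decoder - this is the part
# that fixes the "faded colors" problem with some checkpoints.
pipe.vae = AutoencoderKL.from_single_file(
    "some_vae.safetensors",  # placeholder file name
    torch_dtype=torch.float16,
).to("cuda")

# A LoRA is a small patch applied on top of the checkpoint's weights,
# not a prompt collection - it changes what the model draws.
pipe.load_lora_weights("some_style_lora.safetensors")  # placeholder

image = pipe("1girl, blue hair, fantasy armor").images[0]
image.save("out.png")
```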
I think your reply overlapped with my edit on my post above yours. I tried it first with only the prompts carried over from civitai, but got different results. Now, with the data I extracted from your PNG, the result is way closer (though it takes way longer).
How did you know what to add? I see you had the upscaler, denoising, refiner and all of that in your settings. Or, to put my question differently: how would I gather this information from the original model page on civitai? The same way, by downloading the picture and extracting the settings?
The funny thing is, in my own version (as above), when I added "nipples" they didn't show. Now, with the extracted settings from the original, they do. Any idea why?
I am asking because I of course want to understand the mechanisms behind it, so that I can create more freely and not just copy and paste prompts all the time. So I have to understand, for instance, what changed with the extracted settings that made SD listen more closely to my input.
Aaaaand a last question: can I basically turn off the upscaler now and only upscale pictures I like later, on the Extras tab, in order to save time?
As I am typing this, the generation with the extracted settings from your PNG is done. It's quite close now, but the colors are still faded!
[Attached image: the generation made with the extracted settings]
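For reference: A1111 writes the full generation settings into a PNG text chunk called "parameters", which is the same thing the PNG Info tab reads. A minimal sketch of reading it yourself with Pillow (the file name is a placeholder):

```python
from PIL import Image

# A1111 stores prompt, negative prompt, seed, sampler, steps, CFG,
# hires settings, etc. in a PNG text chunk named "parameters".
img = Image.open("downloaded_preview.png")  # placeholder file name
settings = img.info.get("parameters")

if settings:
    print(settings)
else:
    print("No embedded settings (stripped, or not made with A1111).")
```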
If I would now type in something like "Two women with blue hair, wearing fantasy armor, on the moon, mars and space in the background, rough cyberpunk style, serious setting", I get this result in the style of the current setup:

[Attached image: the prompt rendered with the downloaded checkpoint]

It looks pretty cool, but whatever I create is done in this particular style. If I switch back to the Stable Diffusion default checkpoint, however, I get this:

[Attached image: the same prompt rendered with the base SD checkpoint]

So apparently the basic SD model is, well... pretty inferior. The checkpoints, however, are very specific. Is this just how SD works, that we always have to switch between different checkpoints for different purposes?

Anime checkpoints are very specific to the anime style. There are others that can do a little of everything, such as Experience, NeverEnding Dream or Dreamshaper; there are many. Yes, you need to switch checkpoints depending on the style and the result you want. To get the pubic hair to show, you can add weight to that token (one or more words describing the same thing) like this: pubic hair:1.2, or with brackets like this: (pubic hair:1.2). Remember that plain brackets add weight on their own, so (pubic hair) by itself is about 1.1, while an explicit number like (pubic hair:1.2) sets the weight to exactly 1.2. A reason for it not showing could be that it's under a layer of clothing and SD doesn't understand you want the hair to be visible through the clothes. In that case you need to specify that the clothes are "sheer", "see-through" or "transparent".

Ahh, good to know the weight feature exists here too! What's the difference between using brackets and using no brackets (apart from that extra 0.1)?
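For intuition, here is a tiny sketch of the usual A1111 emphasis rules (each plain pair of brackets multiplies attention by roughly 1.1, nesting compounds it, and an explicit :number sets the weight directly). This is an illustrative toy, not the actual webui parser:

```python
import re

def token_weight(token: str) -> float:
    """Approximate the attention weight for an A1111-style token."""
    # Count the layers of brackets wrapped around the token.
    depth = 0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        depth += 1

    # An explicit numeric weight overrides the bracket multiplier.
    match = re.fullmatch(r"(.+):([\d.]+)", token)
    if match:
        return float(match.group(2))

    return round(1.1 ** depth, 3)

print(token_weight("pubic hair"))        # 1.0
print(token_weight("(pubic hair)"))      # 1.1
print(token_weight("((pubic hair))"))    # 1.21
print(token_weight("(pubic hair:1.2)"))  # 1.2
```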
MJ does A LOT of stuff for you in the background, things you now need to do yourself in one way or another.

Yes, I realize this as we speak.

MJ wins on image "quality" and simplicity in prompting (just throw words at it and it'll sort them out for you), but assuming you pick the right tools for the job, SD will be more flexible, give you better control and not run off with all your money. With SDXL the quality gap is going away too.
So I pick "none" at upscaler right now? Or is one basic upscaler important for the base quality?Yes you can choose to not upscale the image now, you can use SD Upscale script in img2img instead.
SDXL is the "next evolution" for SD, it's base image dimension goes up to 1024x1024, which i believe is still what MJ is at as well.Yes I realize this as we speak
What is SDXL?
And little side question: Are there already easy ways to add a bit of motion to still images with SD?
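To make the resolution point concrete, a minimal sketch of loading SDXL with the diffusers library (model ID and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# SDXL's base training resolution is 1024x1024 (vs 512x512 for SD 1.x).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "two women with blue hair, fantasy armor, cyberpunk, on the moon",
    width=1024, height=1024,
).images[0]
image.save("sdxl.png")
```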
So when do I use SD Upscale? You said when you like a picture, you re-use the seed and remake it with Hiresfix. But then there is no need for SD Upscale anymore, or do you add that on top afterwards too?

Hiresfix is part of the generative process, just like the SD Upscale script, which by the way is an extension you need to install: go to the Extensions tab, then Available, and press "Load from". Find SD Upscale and press "Install", then press "Apply and restart UI" in the Installed tab. Afterwards you can find it in the Scripts menu in the img2img tab.
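For intuition, the hires-fix idea is roughly a txt2img pass at the base resolution followed by an img2img pass over the upscaled result. A minimal sketch of that two-pass idea with the diffusers library (model name and parameters are illustrative, not the webui's exact implementation):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model = "runwayml/stable-diffusion-v1-5"  # illustrative checkpoint
prompt = "two women with blue hair, fantasy armor, cyberpunk, on the moon"

# First pass: generate at the base training resolution.
txt2img = StableDiffusionPipeline.from_pretrained(
    model, torch_dtype=torch.float16
).to("cuda")
base = txt2img(prompt, width=512, height=512, num_inference_steps=30).images[0]

# Second pass: upscale, then run img2img at a low denoising strength
# so it adds detail without changing the composition (0.2-0.4 range,
# with ~150% of the generative steps, per the advice above).
upscaled = base.resize((1024, 1024))
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)
final = img2img(prompt, image=upscaled, strength=0.3,
                num_inference_steps=45).images[0]
final.save("hires.png")
```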
Does that matter, though, if we can upscale as much as we want and that upscaling is part of the generative process anyway? What's the difference?
It's a "amount of detail" thing. Base image size is what was used to train on, so when it's 512x512 you can tell the person has skin and that it's a certain "color", etc, and yes you can upscale that to 2k or 4k it won't look stretched etc like with a texture but you'd still be limited to the "detail" from the original size.Does that matter though if we can upscale as much as we want and if that upscaling is anyways a part of the generative process? What's the difference?
That would be true for simple resizing (stretching) of the source material, but the upscalers do generative upscaling which means they add new details and pixels as they do, don't they?I guess you can think of it as 2 squares of the same size, but one of them fits 512 pixels in each direction, the other fits 1024, one will have much smother and more details than the other.
If you want a RL example i guess printers work, you're limited to the same A4 sheet of paper, but the smaller each "dot" is the nicer and crisper the text or image you print gets.
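As a concrete illustration of the "simple resizing" case (file names are placeholders): a plain stretch gives you more pixels but no new information, which is exactly the gap generative upscaling tries to fill.

```python
from PIL import Image

base = Image.open("base_512.png")      # placeholder: a 512x512 render
stretched = base.resize((2048, 2048))  # 16x the pixels, zero new detail:
stretched.save("stretched_2048.png")   # just interpolated existing pixels
```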
You can't add what you don't know about. It's not in the output that this comes into play, it's in the training: the images you train on have the finer details.
Exactly - I'd recommend using HiResFix as part of the generative process, even if you only want a small image, rather than trying to just upscale later. Set the HiResFix upscale level to whatever your GPU's VRAM can handle, select the upscaler (I recommend starting with ESRGANx4 or similar as a first go, and experimenting with others later), set the denoising strength to between 0.2 and 0.4 and the HiResFix steps to at least 150% of the generative steps, and hit Generate!

My computer shut off twice yesterday after using SD for a longer time; both times it happened during the "ultra sharp 4x" upscaling, IIRC. I checked my GPU temperature meanwhile and it was fine (65°C), and CPU usage was very low - it's really weird. Both times I had to wait a couple of minutes until it was able to reboot; for the first few minutes it was completely dead, as if it had overheated.
The upscaling is the most demanding part. If you look at the CMD window running in the background, it will tell you which phase you're at and how long each iteration takes: generation steps normally show as iterations per second, while upscaling is usually seconds per iteration!
Before you grow too much into Automatic1111, do try ComfyUI - do what's called a "portable install" by clicking "direct link to download":