I don't think that's cashmere but if I can just borrow it for a second or two I'll check for you
There is a slight chance I'd suggest something not quite that low-cut.
Quoted post: Would anyone else feel embarrassed introducing her to your mom?
You need to post the PNG file. You can find it in stable-diffusion-webui\outputs\txt2img-images, under the date it was generated.
Quoted post: Tell me, please, why is it so? I write "one person" or "1 person" in the prompt, but in the end a lot of people still appear in the picture.
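For context on why posting the PNG helps: A1111 normally embeds the full generation parameters (prompt, negative prompt, seed, sampler, etc.) in the PNG's text metadata, so others can see exactly what was used. Below is a minimal Python sketch of reading it, assuming Pillow is installed; the file path is just an example, and "parameters" is the metadata key A1111 usually writes:

    # print the generation parameters embedded in an A1111-generated PNG
    from PIL import Image

    img = Image.open(r"outputs\txt2img-images\2023-07-30\00001-1234567890.png")  # example path
    params = img.info.get("parameters")  # prompt/seed/sampler etc.; may be missing if the file was re-saved
    print(params or "No embedded parameters found.")

You can also just drag the PNG into the "PNG Info" tab in the web UI to see the same data.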
Yeah, you can use something like Regional Prompter (there's a really good overview of it on its GitHub page).
Quoted post:
Apologies if this is a repeat question, but is there a way to specify that Girl A is Lora:X and Girl B is Lora:Y?
I'm trying to work backwards from this example:
[image attachment]
Is there a way to base the rear girl off one Lora and the front off another, or does it just blend whatever models you add to the prompts?
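To make the Regional Prompter suggestion above concrete: it splits the prompt into regions with the BREAK keyword, and each <lora:...> tag can go inside its own region's chunk. A rough, hypothetical example with the image divided into two columns (the LoRA names are invented, the exact behaviour depends on the mode and settings, and character LoRAs still tend to bleed between regions a bit):

    blonde hair, red dress, standing on the left <lora:girlA:0.8>
    BREAK black hair, blue dress, standing on the right <lora:girlB:0.8>

As far as I know, the extension's Latent mode is the one usually recommended when mixing character LoRAs, since it keeps them more separate than the default Attention mode.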
Yeah, I completely misread the comment lol. I thought they were asking how to use LyCORIS, not make a LyCORIS lol.
Quoted post:
You need the extension to use LyCORIS. Go to the extension's GitHub repo, click "Code" and copy the link, then go to the Extensions tab in Automatic1111 and click "Install from URL". Paste the link under "URL for extension's git repository", click Install and wait for it to finish, then click Reload UI. All LyCORIS models go into your LoRA folder.
Don't worry about it, you gave good info. It will without a doubt help someone.
Nice! I am struggling to see the purpose of the SDXL setup. I watched a few videos but am still in WTF mode. Hear me out: I think what SDXL is actually doing under the hood is an upscaler workflow, but the user gets stuck with a single model. Naturally, if there are other applications, I am merely uninformed.
Quoted post:
First of all, I've never used ComfyUI before, so probably a lot of the basics are done horribly wrong, even more than usual.
Second, never used SDXL so no idea how prompting differs.
But it was the only thing I could get the model to even load without OOM, so needs must...
So, with the ideal situation of multiple unknowns, I don't really know if the base model is working correctly, if the UI setup is even remotely behaving well, or if the refiner is being applied anywhere close to how it's meant to be.
So here are some test images, base and refiner "pairs"...
[image attachments: two base/refiner pairs]
Just a base image to show that there still seems to be an issue with multiple subjects (I didn't try to fix it with prompts alone); the rest of the image didn't seem too bad though.
[image attachment]
I could not replicate this with hires fix. Just to be clear, were you talking about "normal" upscalers? I have an overclocked GTX 1070 with 8 GB VRAM and I'm stuck at 1280x1920. I can crank up the sampling steps and hires steps, it just takes ages, but even with a very low number of steps I can't get over that resolution without getting a CUDA out-of-memory error.
Quoted post:
Upscaler Tips
So, I was pondering. A latent refined with 100 steps seems markedly larger, i.e. takes more memory, than one refined with 20 steps. Maybe I'm wrong to attribute it to memory, but those refinement steps are not free: you keep paying for them even after they have run, while they sit inside your latent. When you manipulate a latent that had more steps, you keep paying for those extra steps.
I arrived at this empirically while reducing the refinement in an upscale workflow: the first latent used 18 iterations, then the upscaled latent was denoised at 0.5 and ran 7 more iterations.
Turns out the workflow executes dramatically faster when the first latent has fewer refinement steps. Hmmm.
[image attachment]
So, naturally, each "refinement step" is probably a big ass vector/matrix that the GPU adds to the previous already large collection of big ass vectors to start with.
Which made me re-try a resolution I never had enough memory for: 1536 x 2304. This time I lowered the steps and it worked.
A 1536 x 2304 image on a 6GB card, 13/6 steps, 17 minutes to render:
[image attachment]
The point of the exercise: I never knew that extra steps could limit one's ability to upscale an image.
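For anyone who wants to try the same "few base steps, then upscale with a light denoise" pattern outside ComfyUI, here is a rough sketch of the idea in Python using the diffusers library. This is not the workflow from the post above, just the same two-pass structure; the checkpoint name, resolutions and prompt are placeholders:

    # two-pass upscale: low-step base render, then an upscaled img2img pass with ~50% denoise
    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    txt2img = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder checkpoint
    ).to("cuda")
    txt2img.enable_attention_slicing()  # helps on low-VRAM cards

    prompt = "a castle on a cliff at sunset, highly detailed"
    base = txt2img(prompt, width=512, height=768, num_inference_steps=18).images[0]

    # reuse the loaded weights for img2img, upscale 2x, then refine with roughly 7 effective steps
    img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)
    upscaled = base.resize((1024, 1536))
    final = img2img(prompt, image=upscaled, strength=0.5, num_inference_steps=14).images[0]
    final.save("upscaled.png")

The step counts (18 first, then 0.5 denoise for ~7 more) mirror the numbers from the post; lowering the first pass is where the memory and time savings reportedly come from.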
AFAIK, upscaling in img2img doesn't work like hires fix. Hires fix is part of the generative process and "creates" new pixels, which improves image quality, while a "normal" upscaler can't invent pixels that aren't already there, so it only makes the image larger without the same bump in quality. So in my opinion, and many others', hires fix is superior. That's why I wanted to try to replicate what seph had discovered, but with hires fix. I'm sticking to A1111 for now; I never got used to the node system in any of the many programs I've messed with (Blender, 4D Wrap, etc.).
Quoted post:
Comfy seems to do a few things differently, including how it loads models. E.g. it can load the ~12 GB SDXL base model in less than 6 GB of VRAM, while A1111 and the SD.Next fork can't even load the pruned 7 GB version without OOM.
Looking at the operations, I'm guessing one way to describe what gets done is that the first steps generate one image, then that image is used in an img2img way and the final steps are applied to it.
So to replicate it in A1111 you'd probably need to pass the image on to img2img and apply the "finishing touches" there. I haven't really used that, so I don't know how, or whether, it would work.
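If it helps to picture the handoff being described, the same base-then-refiner split can be written out with the diffusers library in Python. This is just a sketch of the idea, not what ComfyUI or A1111 do internally; the SDXL repo names are the public ones, everything else is illustrative:

    # SDXL: base model runs the first ~80% of the schedule, refiner finishes the rest on the latent
    import torch
    from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")  # on low-VRAM cards, base.enable_model_cpu_offload() instead of .to("cuda")
    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    prompt = "portrait photo of a woman in a red coat, soft light"
    latent = base(prompt, num_inference_steps=25, denoising_end=0.8, output_type="latent").images
    image = refiner(prompt, image=latent, num_inference_steps=25, denoising_start=0.8).images[0]
    image.save("sdxl_refined.png")

The post above describes the other variant, decoding the base result to a finished image and running the refiner over it as a plain img2img pass; that also works and is close to the latent handoff shown here, though not identical.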
Yes, this is exactly what a LoRA or Textual Inversion is. If you are proficient with SD, this would be the next step. If you search this thread you can find a lot of information and links about it. If you decide to try it, I recommend reading the awesome guide by Schlongborn that's linked on the first page.
Quoted post: Hi, I have a question: if I take several pictures of a real-life person, do you think I should be able to create a model of that person?
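For a sense of what the end result looks like once such a LoRA has been trained (the training itself is what the guide covers), this is roughly how a personal LoRA gets used afterwards. A hedged sketch with the diffusers library; the checkpoint, file name and trigger word are made up:

    # apply a LoRA trained on photos of a specific person (names are placeholders)
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(".", weight_name="my_person.safetensors")  # the trained LoRA file

    # the trigger word is whatever token the LoRA was trained with
    image = pipe("photo of mypersontoken wearing a suit, outdoors", num_inference_steps=30).images[0]
    image.save("person.png")

In A1111 the equivalent is simply dropping the .safetensors file into models/Lora and adding <lora:my_person:1> plus the trigger word to the prompt.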