An alternative is to use an IP-Adapter in ControlNet. You would need to use one "unit" for each face and then set the control weight, starting control step, etc. to mix them.

Alright, I'll give this a shot. Thanks for the info.
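For anyone who prefers scripting this outside the UI, here is a rough diffusers-based sketch of the same idea: two IP-Adapter "units", each fed one reference face, with different scales to control the blend. The checkpoint name, adapter weight file, and reference image paths are placeholders, and the exact multi-adapter loading signature can differ between diffusers versions.

```python
# Rough sketch (diffusers): blend two reference faces via two IP-Adapter "units".
# Model/file names are placeholders; the multi-adapter API may vary by version.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the same face IP-Adapter twice so each reference image gets its own weight,
# analogous to two ControlNet units with different control weights.
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="models",
    weight_name=[
        "ip-adapter-plus-face_sd15.safetensors",
        "ip-adapter-plus-face_sd15.safetensors",
    ],
)
pipe.set_ip_adapter_scale([0.6, 0.4])  # relative influence of face A vs face B

face_a = load_image("face_a.png")  # hypothetical reference images
face_b = load_image("face_b.png")

image = pipe(
    prompt="portrait photo of a woman, studio lighting",
    ip_adapter_image=[face_a, face_b],
    num_inference_steps=30,
).images[0]
image.save("blended_face.png")
```

Tweaking the two scales is the script equivalent of adjusting the control weight sliders on the two units.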
Have you tried latent composition? It sounds interesting. I have seen it here and there at a glance but not tried it.

Not recently, but it works fairly well.
Looking for help merging two faces to create one, reusable face.
I tried to train DreamBooth with about 30 images, 15 of each face, but I don't have enough VRAM and it crashes. I have a 1660 Ti in my laptop.
I want to use two faces of women I know and merge them together to create a totally new face. I don't want to use either of their actual faces, for obvious reasons.
I could maybe try to create a LoRA, but I'm having difficulty.
Any tips on merging two faces, or on training two faces?
I currently use ReActor with great results, but I want a unique female face.

If you've created TIs for each face, then you can use the Embedding Inspector to combine them at whatever relative strengths you wish (plus a dash of, for instance, Ariana Grande if you like).
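If you'd rather do that blend by hand instead of through the Embedding Inspector, the idea is just a weighted average of the two embedding tensors. A minimal sketch, assuming two A1111-style textual inversion .pt files that store their vectors under a "string_to_param" key (file layouts vary by trainer, so treat this as illustrative only):

```python
# Minimal sketch: blend two textual-inversion embeddings into a new one.
# Assumes A1111-style .pt files with a "string_to_param" dict; layouts vary by trainer.
import torch

def load_vectors(path):
    data = torch.load(path, map_location="cpu")
    # "string_to_param" maps a placeholder token (usually "*") to a [n_vectors, dim] tensor
    return next(iter(data["string_to_param"].values())).float()

vec_a = load_vectors("face_a.pt")   # hypothetical embedding files
vec_b = load_vectors("face_b.pt")
assert vec_a.shape == vec_b.shape, "embeddings must have the same vector count and dim"

w = 0.5  # relative strength of face A vs face B
blended = w * vec_a + (1.0 - w) * vec_b

torch.save(
    {"string_to_param": {"*": blended}, "name": "blended_face"},
    "blended_face.pt",
)
print("saved blended_face.pt with shape", tuple(blended.shape))
```

Changing `w` shifts the blend toward one face or the other, which is essentially what the relative-strength sliders do.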
Thanks. I'm not that advanced yet. I used img2img and slowly blended them together for a decent result.

If you've created LoRAs for each of them, I believe Kohya_ss has a LoRA combination tool.
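For the LoRA route, the Kohya combination tool mentioned above is the proper way to do it; purely to illustrate what such a merge does, here is a rough, hypothetical sketch that averages two LoRA .safetensors files key by key. It assumes both LoRAs share the same network structure and rank, and a naive average of the low-rank factors is only an approximation of merging the actual weight deltas, which real merge tools handle more carefully.

```python
# Hypothetical sketch: naive weighted merge of two LoRA files with identical structure.
# Averaging lora_down/lora_up factors is an approximation; dedicated tools do this properly.
from safetensors.torch import load_file, save_file

def merge_loras(path_a, path_b, out_path, weight_a=0.5):
    a = load_file(path_a)
    b = load_file(path_b)
    assert a.keys() == b.keys(), "LoRAs must share the same keys (same network/rank)"
    merged = {key: weight_a * a[key] + (1.0 - weight_a) * b[key] for key in a}
    save_file(merged, out_path)

merge_loras(
    "face_a_lora.safetensors",          # hypothetical input files
    "face_b_lora.safetensors",
    "blended_face_lora.safetensors",
    weight_a=0.5,
)
```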
A LoRA is the better way; it's more controllable than an embedding. What problems are you having with training the LoRA? If you describe them, maybe we can give better help.
Also, for a LoRA you have two ways to choose: train directly on the mixed faces, or train two separately and merge them.

Hey, thanks for the support. I don't think my video card has enough juice to train a LoRA, or I'm not doing it properly.
Testing all 22 samplers in CUI, same settings, changing only the sampler.
As you can see, the LCM sampler is really different; all the others are similar.
Hope this comparison is useful. Sorry for not attaching the workflow to the images, but it's all messed up with nodes I'm experimenting with and not good for sharing at the moment.

That's some mighty fine Wendigo Erotica. A little tip, or just an observation: the LCM sampler will not give a good result if you are not using it with an LCM checkpoint. Also, if you use a resolution of 1024 with SD1.5 you are more likely to get conjoined twins. I would recommend using 960x640 and then either hires fix or upscaling in img2img with the SD Upscale script. I know for a fact that you are already aware; this is only a reminder, and for anyone else that might not be aware.
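A rough diffusers sketch of that "generate small, then upscale in img2img" workflow (not the A1111 SD Upscale script itself, which additionally tiles the image); the resolutions, denoising strength, and prompt are just example values:

```python
# Rough sketch: generate at a small SD1.5-friendly size, then refine at a larger
# size with low-strength img2img (the hires-fix idea), instead of going straight to 1024+.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

prompt = "full body photo of a woman in a snowy forest, detailed"

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
base = txt2img(prompt, width=640, height=960, num_inference_steps=30).images[0]

# Reuse the same components for img2img, upscale the image, and redraw lightly.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components).to("cuda")
big = base.resize((960, 1440))  # simple 1.5x resize; a GAN upscaler also works here
refined = img2img(prompt, image=big, strength=0.35, num_inference_steps=30).images[0]
refined.save("upscaled.png")
```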
I've watched a few walkthrough videos, but I can't seem to figure it out.

You need a minimum of 6 GB of VRAM.
Civitai has an
How you describe the variation resolution sounds similar to how you can use width/height and target width/height in the SDXL text encoder in ComfyUI. If that's the case it's very useful, since among many things you can "zoom" your generation in or out and decide what gets "cropped out" and how it fits on your "canvas" (see the sketch after the examples below).

Bonus.
(twigs and pine cones included).
I hope the excellent Devilkkw doesn't mind that I keep posting with his prompt; I had too much fun to stop.
The other day the eminent Synalon and I experimented with and explored variation seed, variation strength and, more importantly, variation resolution.
This is what the tip text says about it:
View attachment 3268917
This means that you can generate an image in landscape without getting eldritch monsters, by setting the main resolution to a landscape ratio and the variation resolution to a portrait ratio. See the examples below.
View attachment 3268862 View attachment 3268863
View attachment 3268882 View attachment 3268928
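For the ComfyUI width/height vs target width/height inputs mentioned in the reply above, the same knobs are exposed in diffusers as `original_size`, `target_size` and `crops_coords_top_left` on the SDXL pipeline. A minimal sketch follows; the model name, sizes and prompt are just example values, and this is the SDXL conditioning trick rather than the A1111 variation-resolution feature itself:

```python
# Minimal sketch: SDXL size/crop conditioning, the same idea as the ComfyUI
# width/height vs target width/height inputs on the SDXL text encoder node.
# Note: original_size/target_size tuples are (height, width).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a woman hiking through a pine forest, golden hour",
    width=1216, height=832,            # actual output canvas (landscape)
    original_size=(1216, 832),         # condition as if the source were portrait
    target_size=(832, 1216),           # the canvas we actually want
    crops_coords_top_left=(0, 0),      # shift this to decide what gets "cropped out"
    num_inference_steps=30,
).images[0]
image.save("landscape_from_portrait_conditioning.png")
```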
Whenever I have tried the LoRAs, I have had issues. Maybe I did it wrong.
Regarding LCM, you don't have to have a model trained for it; there are LCM weight LoRAs, for both SD1.5 and XL, which let you use any model and create images at fewer steps and lower CFG. You use them like any other LoRA and they work pretty well (a minimal sketch follows below).
There's a LoRA for SDXL Turbo, too.
And one that combines both LCM and Turbo.
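A minimal diffusers sketch of the LCM-LoRA idea described above: load an ordinary SD1.5 checkpoint, add the public LCM weight LoRA, switch to the LCM scheduler, and drop the steps and CFG. The checkpoint and prompt are example values; substitute your own model.

```python
# Minimal sketch: LCM-LoRA on an ordinary SD1.5 checkpoint (few steps, low CFG).
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load the LCM weight LoRA on top of the model.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    prompt="portrait photo of a woman in a forest, detailed, soft light",
    num_inference_steps=8,   # low step count; LCM typically only needs ~4-12
    guidance_scale=1.5,      # LCM wants a very low CFG, roughly 1-2
).images[0]
image.save("lcm_lora_test.png")
```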
Edit: Since I had to go look for the VAE Mr-Fox mentioned (compulsive need to try new things), and it wasn't that easy to find right away with so many model versions listed, here's the link.

My bad. I have updated the post with the link. It should not be an issue now that we both have linked to it.
Oh, I've never seen LCM models, and never tried one. Is it possible to port a standard .safetensors model to LCM? What's the benefit?
Wow, I love these types of posts; they're really useful for giving me a good idea of how it works. Thank you.

There are not that many LCM checkpoints available as of yet compared to "normal" SD1.5 and SDXL, though there are indeed a few.
The point of LCM (Latent Consistency Model) is to be able to run fewer steps and a lower CFG scale to cut down on generation time while still getting high quality.
The rule of thumb is 6-12 steps and a CFG scale of 1-4; 10 steps and a CFG scale of 1-2 seem to be good with most models.
I ran a few checkpoint comparison tests with the plot script, and I borrowed the prompt from the great Devilkkw's delicious Cryptid Babe.
SD1.5 LCM 1024x1280 (notice the tendency for conjoined twins):
View attachment 3267793
View attachment 3267794
View attachment 3267795
SD1.5 LCM 640x960 (notice the absence of conjoined twins):
View attachment 3267796 View attachment 3267797 View attachment 3267799
There are also XL LCM models. As most know, you can use a higher resolution with XL models.
The rule of thumb is that the width plus the height should add up to 2048. You can try different ratios; one that I have found to work well for me is 896x1152 (a small helper for picking such sizes follows below).
View attachment 3267808
(Thanks to the eminent Synalon for providing this list).
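As a hypothetical helper reflecting that rule of thumb, this tiny function takes an aspect ratio and returns a width/height pair that sums to roughly 2048, with both sides rounded to multiples of 64; the ratios in the loop are just examples.

```python
# Hypothetical helper: pick an SDXL-ish resolution for a given aspect ratio,
# keeping width + height near 2048 and both sides on multiples of 64.
def sdxl_resolution(aspect_w: float, aspect_h: float, total: int = 2048, step: int = 64):
    width = total * aspect_w / (aspect_w + aspect_h)
    width = round(width / step) * step
    height = round((total - width) / step) * step
    return int(width), int(height)

for ratio in [(1, 1), (7, 9), (9, 7), (16, 9)]:
    print(ratio, "->", sdxl_resolution(*ratio))
# (7, 9) -> (896, 1152), matching the example above.
```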
SDXL LCM 896X1152:
View attachment 3268390
A tip is to never use the standard VAE that was released with the first SDXL model; it's slow as hell.
I recommend the fenrisxl VAE instead, it's faster. SDXL LCM is still much slower in general compared to normal SD1.5 or SD1.5 LCM, at least with an older GPU like the 1070 card I have.
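If you want to do the same VAE swap outside the UI, here is a rough diffusers sketch; "fenrisxl_vae.safetensors" is just a placeholder path for wherever you saved the downloaded VAE file, and the base model name is an example.

```python
# Rough sketch: swap the VAE on an SDXL pipeline instead of using the stock one.
# "fenrisxl_vae.safetensors" is a placeholder path for the downloaded VAE file.
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

vae = AutoencoderKL.from_single_file(
    "fenrisxl_vae.safetensors", torch_dtype=torch.float16
)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a cryptid woman in a misty forest", num_inference_steps=30).images[0]
image.save("sdxl_custom_vae.png")
```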
SDXL LCM with fenrisxl VAE 896X1152:
View attachment 3268594
Such beautiful results. I'm glad you used my prompt for the samples. And a good test; variation seed is a bit underestimated. Keep testing and sharing, I'm really interested in it.
Too general. What are your PC specs? What are you using to train? What video driver version?
I have no idea.
A good reminder, you say, Mr-Fox. SD1.5 models work great at low resolution, and pushing them higher is really a pain; many models give doubled and weird results at 768, so generating low and then upscaling seems like a good solution.
But I have to ask a question: I have merged my model many times with merge block weight in A1111 to push the output resolution, but in A1111 the max resolution I reach is 896x1152, while in CUI I reach 1024x1280. Why such a difference?
I also checked the sampling method code, and it seems to work differently in CUI and in A1111, but if the sampler is the same, why?

As I don't use the spaghetti UI, I can't help you with ComfyUI.
An OT question: how many checkpoints do you have?

Way too many, probably.
I'm glad you liked it.