Now that's some probing... ControlNet struggles cracking couples sex too.
[attached image]
I love the details in your work. You obviously pay a lot of attention to those... fingers, clothes and folds, the ships. There seems to be some stitching together of different parts of the image going on - are you using PS or another editor outside of SD? How do you use it, and at what point in your workflow? I'm enjoying reading your tutorials and what you do.

"Yo ho, a pirate's wife for me..."
[attached images]
"...and a bottle of rum"
[attached image]
Might have gotten the lyrics slightly wrong.
It's all SD - I don't use outside editors for anything other than downscaling, cropping or compressing images.
Ok guys, I'm really happy you've made me move to Comfy. After some experimenting... The real-size image was too big to post here, so I reduced it to 50%.

[spoiler: image]

And here's my question: how do I control the upscaler model? In A1111 I have an option to choose the resize factor, but I can't find any such option in Comfy. Also, this is how it's made - I hope the node setup is correct, I'm just experimenting.

[spoiler: workflow screenshot]

Any chance you can post the file with the actual workflow? It's much simpler for me to pop it into my CUI, make changes and post it back.
There are Custom Nodes that can either rescale or resize your image, and you can also enter any factor by which you want to rescale/resize. Here's just one example:

[attached example node]
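If it helps to see the rescale-vs-resize distinction in code, here's a minimal Python sketch of roughly what such nodes do under the hood. This is an illustration only, not the actual node code, and the filenames are made up:

```python
from PIL import Image  # pip install pillow

img = Image.open("render.png")  # hypothetical input file

# "Rescale": multiply both dimensions by a factor, like A1111's resize slider.
factor = 2.0
rescaled = img.resize((int(img.width * factor), int(img.height * factor)),
                      Image.LANCZOS)
rescaled.save("render_2x.png")

# "Resize": force exact output dimensions, ignoring the source aspect ratio.
resized = img.resize((1024, 1536), Image.LANCZOS)
resized.save("render_1024x1536.png")
```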
Many thanks, added it. I'm impressed by the number of extensions CUI has!

Yes, I just changed the upscaler node to the one theMickey_ suggested. I zipped it because json uploads aren't allowed.
Two items, maybe you know them already. So, another way to upscale is via latent upscale rather than using a pixel upscaler.
Namely, you run a sampler off an empty latent, upscale the latent the sampler produces, and then run that latent again with the same prompt and model. I attached the workflow - you might know this, though.

Why would you do this rather than a pixel upscale? The main reason is to fill in the additional detail using the actual model. Another reason is that some LoRAs can burn the first output rather badly, and you want to re-refine the latent using the original model: you run the first sampler with the LoRA, then re-run the same latent, with a small denoise, using the pure checkpoint. Or vice versa - you run the first sampler with the pure checkpoint, and only once it's upscaled do you throw in the LoRA for refinement.

I attached your workflow with two additional samplers added. Naturally, you'll want preview nodes thrown in to see what each sampler does - that way you can safely cut down the number of steps. There are two latent upscalers - the regular one and NNLatent. Each has its own quirks: 80% of the time you want NNLatent, but sometimes the original latent upscaler is the cat's meow. There are three samplers: the first uses the original checkpoint, the next two use the LoRA models.
[attached workflow image]
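For anyone who'd rather read the idea than the node graph, here's a minimal Python sketch of the two-pass wiring described above. The `sample()` function is a stand-in stub for a full KSampler pass (model, prompt, steps and scheduler all omitted), so treat this as a sketch of the structure, not a working pipeline:

```python
import torch
import torch.nn.functional as F

def sample(latent: torch.Tensor, denoise: float) -> torch.Tensor:
    """Stand-in for a KSampler pass; a real sampler would denoise the latent."""
    return latent

# First pass: full denoise from an empty (pure noise) latent.
latent = torch.randn(1, 4, 64, 64)  # a 512x512 image is a 4x64x64 latent in SD1.5
latent = sample(latent, denoise=1.0)

# Upscale the *latent*, not the decoded pixels. Bilinear interpolation plays
# the role of the "regular" latent upscale; NNLatent is a learned custom node
# alternative with its own quirks.
latent = F.interpolate(latent, scale_factor=1.5,
                       mode="bilinear", align_corners=False)

# Second pass: re-run the sampler on the upscaled latent with a partial
# denoise, so the model itself fills in the extra detail. This is also the
# point where you'd swap the LoRA in (or out) for the refinement pass.
latent = sample(latent, denoise=0.55)
```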
Yes, I was experimenting and just doing some tests: I added a latent upscaler and a second pass. It seems good for cutting down render time, but from what I can see, it's really difficult to obtain the same result as in A1111.
With the prompt I attached, the results in CUI are the same as yours. In A1111 it stays closer to the prompt - the woman gets a really robotic suit in every render, while in CUI I get the robotic look maybe once in fifty tests. My inexperience, of course - experimenting is the key to finding what works best. Thank you for the suggestion, I'll test it soon.

Another really noob question: how do I load a workflow from a PNG?

Click on "Load" in the side menu, find the PNG file and open it.
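That works because ComfyUI embeds the workflow as JSON in the PNG's text metadata. If you're curious, here's a small Python sketch of pulling it out yourself; the filename is hypothetical, and I'm assuming the "workflow"/"prompt" metadata keys ComfyUI uses by default:

```python
import json
from PIL import Image  # pip install pillow

# ComfyUI stores the node graph as JSON in the PNG's text chunks,
# which is what the "Load" button reads back.
img = Image.open("ComfyUI_00001_.png")  # hypothetical filename
raw = img.info.get("workflow") or img.info.get("prompt")
if raw is None:
    print("No embedded workflow found - was this PNG saved by ComfyUI?")
else:
    graph = json.loads(raw)
    # The UI format has a "nodes" list; the API format is a dict keyed by node id.
    nodes = graph["nodes"] if isinstance(graph, dict) and "nodes" in graph else graph
    print(f"Embedded workflow with {len(nodes)} nodes")
```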
I started using CUI today. I've always used A1111, and the switch isn't that quick, but after a few tries, once you understand how it works, you start to like it as much as A1111.

I'm just getting into starting my first VN, learning Ren'Py and, I'm pretty sure, AI art, as I'm shit at any art myself. It seems to me that A1111 isn't nice to a new user, but that it probably has more power and flexibility, while CUI is easier to work with? Before I put any more time into A1111, is there a good reason to consider CUI for my assets?