> Many thanks, added it. I'm impressed by the number of extensions CUI has!

There are Custom Nodes that can either rescale or resize your image, and you can also enter any factor by which you want to rescale/resize. Here's just one example:
[attached example image]
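To illustrate the rescale vs. resize distinction those nodes expose: rescaling works from a factor, while resizing targets exact dimensions. This is just a sketch of what such a node does internally, not any node's actual code, and the function names are made up:

```python
from PIL import Image

def rescale(img: Image.Image, factor: float) -> Image.Image:
    """Scale both dimensions by a factor (e.g. 2.0 doubles the size)."""
    w, h = img.size
    return img.resize((round(w * factor), round(h * factor)), Image.LANCZOS)

def resize(img: Image.Image, width: int, height: int) -> Image.Image:
    """Force the image to exact target dimensions, ignoring aspect ratio."""
    return img.resize((width, height), Image.LANCZOS)

img = Image.new("RGB", (512, 512))
print(rescale(img, 1.5).size)      # (768, 768)
print(resize(img, 640, 480).size)  # (640, 480)
```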
> Yes, I just changed the upscaler node to the one suggested by theMickey_

Any chance you can post the file with the actual workflow? It's much simpler for me to pop it into my CUI, make changes, and post it back.
> Two items, maybe you know them already:

Many thanks, added it. I'm impressed by the number of extensions CUI has!
Yes, I just changed the upscaler node to the one suggested by theMickey_
I zipped it because JSON uploads aren't allowed.
> So, another way to upscale is via latent upscale rather than using a pixel upscaler.

OK guys, I'm really happy you made me move to Comfy, after some experimenting.
The full-size image was too big to post here, so I reduced it to 50%.
And this is my question: how do I control the upscaler model? In A1111 I have an option to choose the resize factor, but I can't find any such option in Comfy.
Also, this is how it's made; I hope the node setup is correct, I'm just experimenting.
> Yes, I was experimenting and just doing some tests; I added a latent upscaler and a second pass. It seems good for reducing time, but from what I see, it's really difficult to obtain the same result as A1111.

So, another way to upscale is via latent upscale rather than using a pixel upscaler.
Namely you run a sampler off an empty latent, then upscale the latent the sampler produces, and then run that latent again using the prompt and the model. I attached the workflow - you might know this tho.
Why would you do it rather than using a pixel upscale? The main reason is to fill in the additional details using the actual model. Another reason is that some LoRAs can burn the first output rather badly and you want to re-refine the latent using the original model. Thus, you run the first sampler with the LoRA and then you re-run the same latent, after a small denoise, using the pure checkpoint. Or vice versa: you first run the sampler with the pure checkpoint, and only once upscaled do you throw in the LoRA for refinement.
I attached your workflow where I added two additional samplers. Naturally, you want preview nodes thrown in to see what each sampler does, because this way you can safely cut down the number of steps. There are two latent upscalers: the regular one and NNLatent. Each has its own quirks. For 80% of cases you want NNLatent, but sometimes the original latent upscaler is the cat's meow. There are three samplers: the first one uses the original checkpoint, and the next two use the LoRA models.
[attached workflow image]
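To make the idea concrete: a latent is just a small tensor (for SD, 4 channels at 1/8 of the image resolution). As I understand it, the regular latent upscaler is essentially an interpolation over that tensor, while NNLatent runs a small trained network instead, which is why the two behave differently. A minimal sketch in plain PyTorch, not ComfyUI's actual node code:

```python
import torch
import torch.nn.functional as F

def upscale_latent(latent: torch.Tensor, scale: float, mode: str = "nearest") -> torch.Tensor:
    """Upscale an SD latent of shape (batch, 4, h, w) by interpolation."""
    return F.interpolate(latent, scale_factor=scale, mode=mode)

# A 512x512 image corresponds to a 64x64 latent (8x spatial compression).
latent = torch.randn(1, 4, 64, 64)
up = upscale_latent(latent, 1.5, mode="bilinear")
print(up.shape)  # torch.Size([1, 4, 96, 96])
```

The upscaled latent then goes back into a second sampler with a small denoise, so the model itself fills in the detail.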
> Click on "Load" in the side menu, find the "png" file and open it.

Yes, I was experimenting and just doing some tests; I added a latent upscaler and a second pass. It seems good for reducing time, but from what I see, it's really difficult to obtain the same result as A1111.

With the prompt I've attached, the results in CUI are the same as yours; in A1111 it's closer to the prompt, because the woman gets a really robotic suit in every render. In CUI I get the robotic thing maybe once every 50 tests.

My inexperience, of course; experimenting is the key to finding what works best.

Thank you for the suggestion, I will test it soon.

Another really noob question: how do I load a workflow from a PNG?
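Incidentally, the reason loading from a PNG works at all: as far as I know, ComfyUI saves the workflow graph as JSON inside the PNG's text chunks, so the image itself carries the graph. A small round-trip sketch with Pillow (the toy workflow dict here is a placeholder, not a real graph):

```python
import io
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Simulate what ComfyUI does on save: embed the workflow JSON in a text chunk.
info = PngInfo()
info.add_text("workflow", json.dumps({"nodes": []}))  # toy workflow, not a real one

buf = io.BytesIO()
Image.new("RGB", (8, 8)).save(buf, format="PNG", pnginfo=info)

# On load, the graph can be recovered straight from the image metadata.
buf.seek(0)
loaded = Image.open(buf)
workflow = json.loads(loaded.info["workflow"])
print(workflow)  # {'nodes': []}
```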
> I've started using CUI today; I've always used A1111 and the switch is not that fast, but after some tries and understanding how it works, you start to like it as much as A1111.

I'm just getting into starting my first VN, learning Ren'Py and, I'm pretty sure, AI art, as I'm shit at any art myself. It seems to me that A1111 is not nice to a new user, but that it probably has more power and flexibility, while CUI is easier to work with? Before I put any more time into A1111, is there a good reason to consider CUI for my assets?
Regarding your generated images: post some samples and questions about what specifically you are concerned about.

So, I wanted to give Stable Diffusion a try because it made me want to create an RPG Maker story about a succubus in the style of a certain artist, Kainkout; it would only be used as a reference to create the story, or for personal use (maybe a commission if I feel motivated enough).
The point is, there's this LoRA by the same artist:

[link removed]
I have been creating images and the results are good, but I wonder whether they can be improved in terms of the sampler, the upscaler, and the general options used.
Also, is it a good idea to use checkpoint tags for better results?
And another thing: is it worth continuing to use the WebUI, or do I get better results with ComfyUI? I'd also ask about the ControlNet implementation.
There's no hurry; this is only out of curiosity and a surge of motivation because I bought RPG Maker on Steam on sale.
Is this a good way to test a LoRA's effect in CUI? What I mean is: I can see the image is affected by the LoRA, but I want to know if this is a good method, or whether some other method works better.

Also, what I want is to see only the LoRA's effect, without any other embedding or LoRA; just testing one LoRA at a time.

Another question: I did the same test in A1111 and started getting similar results, except for some LoRAs, which gave me a totally different style.

For example, I've trained several style LoRAs; one of them is a drawing style. In A1111 I get the result I want, but in CUI the image comes out realistic again and seems not to use the LoRA at all. Can someone more skilled in CUI suggest why some LoRAs work the same way and some don't?
On the basis that there are too many differences in the "path" to each picture, I'm leaning towards "probably not".

Some things might be an oversight and easy to fix, but your seed isn't the same, nor is the CFG. While these are likely small differences, they can be more than enough to throw off any comparison.

The second issue is that you're technically not running the same prompt, and any difference in prompt can have very "altering" effects on the output, even if the extra tokens make no sense; even if the AI doesn't natively understand your trigger word without the LoRA, it'll still have an impact. So to know for sure that your LoRA and/or trigger word is working (and how), you'd need to include it in both images as well. You probably want to do two tests, one with and one without the trigger word (for both images). That way you can see how the LoRA is applied, and whether the trigger word is actually needed or whether it creates an "artistic difference" that might be useful in itself.
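To keep such a comparison honest, it helps to lay the runs out explicitly as a 2x2 grid (LoRA on/off, trigger word on/off) with seed and CFG held fixed. A sketch; the prompt, trigger word, and values here are made-up placeholders, not anything from the thread:

```python
from itertools import product

base_prompt = "1girl, sci-fi armor"  # placeholder prompt
trigger = "myLoraTrigger"            # placeholder trigger word
seed, cfg = 12345, 7.0               # must be identical across all four runs

# 2x2 grid: vary only the LoRA and the trigger word, nothing else.
runs = [
    {
        "prompt": f"{base_prompt}, {trigger}" if use_trigger else base_prompt,
        "use_lora": use_lora,
        "seed": seed,
        "cfg": cfg,
    }
    for use_lora, use_trigger in product([False, True], repeat=2)
]

for r in runs:
    print(r["use_lora"], r["prompt"])
```

Feeding each of the four configurations to the same generator (in either UI) isolates what the LoRA does and what the trigger word does.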
As for your issue of LoRAs not working: one thing I've noticed with some LoRA nodes is that if you make changes to them, like changing strengths, ComfyUI seems not to re-run them but sticks to its cached version. You can spot this if you track the execution and see it starting at a later stage than where the LoRA is loaded. For me this was very bad when using LoRA stacks; hopefully it has been fixed recently, but just follow the execution and you should spot it.

Also, check the console output; it might say something about loading errors.
Thank you, guys. I'm just experimenting and have discovered an extension called