> There are dozens of us! Dozens!!

No, unfortunately I'm the stubborn, boneheaded type that has so far refused to give in to the spaghetti lord almighty.
Please, can you post the workflow? I see this looks like image-to-image, or at least an OpenPose. Still, interested to see how the iterative upscale was done.

Been testing some detailing (and upscaling).
The first image is just detailing parts of the image, in this case the woman and armor, while leaving the background untouched.
The second image is an iterative upscale with very low and decreasing denoising. The background becomes very "busy", and in testing there were cases where it became almost oversaturated because the colors became rather heavy/dominating.
Looks like more masks might be needed in the future... if only there were better mask editors in the UIs...
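The decreasing-denoise idea described above can be sketched as a simple schedule; the helper name and the exact strengths here are hypothetical illustrations, not values from the posted workflow:

```python
def denoise_schedule(start: float, floor: float, passes: int) -> list[float]:
    """Linearly decreasing denoise strengths, one per upscale pass."""
    if passes <= 1:
        return [start]
    step = (start - floor) / (passes - 1)
    return [round(start - i * step, 3) for i in range(passes)]

# Each upscale pass resamples the enlarged image at the next (lower) strength,
# so late passes repaint less and less of the background.
print(denoise_schedule(0.4, 0.15, 4))  # [0.4, 0.317, 0.233, 0.15]
```

Keeping the final strengths low is what limits the "busy" background effect: the last passes mostly sharpen rather than reinvent detail.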
[Attachments 3307430 and 3307429]
> Please, can you post the workflow?

Will have to finish the current run first at least. Apparently heunpp seems to run HORRIBLY slowly on SDXL; it didn't seem that bad on SD1.5...
Very interesting, I saw the sampler code and there are some differences in it. But I'm not skilled enough to understand every line of code; I just like seeing different approaches to getting a good result.

There are different prompt interpreters in ComfyUI, one of which copies the one used in a1111.
There are also nodes that set up settings etc. to be the same as, or close to, a1111.
Another thing is that by default a1111 uses the GPU for things like seed/randomness, while Comfy's default is the CPU. You can change this in both UIs, and you can see there can be a very large difference between the two options. Comfy can do this on a per-sampler-node basis if you use the correct node.
Regarding prompting, I haven't used a1111 for XL and haven't checked the code, but I believe it still dumps the same prompt into both text encoders. There are people who argue this is the absolute way of doing it, but if you try feeding different prompting styles, or just parts of the prompt, to each encoder, you quickly see that you can use this to your advantage.
I really don't think people should lock themselves too much into one style or way of writing prompts; that will quickly turn into the kind of mess people just keep dumping into their negatives.
Creativity is "freedom".
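As a toy illustration of the "different prompt per encoder" idea: one simple scheme is to route the style terms to one text encoder and the content terms to the other. The function and the style-marker list below are hypothetical, just to show the split; real workflows wire this up with separate text-encode nodes instead.

```python
def split_prompt(prompt: str, style_markers=("photorealistic", "oil painting", "anime")):
    """Split a comma-separated prompt into a content part (for one
    text encoder) and a style part (for the other)."""
    parts = [p.strip() for p in prompt.split(",")]
    style = [p for p in parts if p.lower() in style_markers]
    content = [p for p in parts if p.lower() not in style_markers]
    return ", ".join(content), ", ".join(style)

content, style = split_prompt("a knight in ornate armor, castle background, oil painting")
print(content)  # a knight in ornate armor, castle background
print(style)    # oil painting
```

Each half would then go to its own text-encode node, so the "what" and the "how it looks" can be weighted independently.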
Learn ComfyUI, part one of a series by Olivio Sarikas: [link]
Sepheyer and me3, plus a couple of others, are our local ComfyUI experts. I'm sure they will help you too.
Who knows, me and Jim might be persuaded to give it a go too. I know that I for one am intrigued and curious about it, but out of boneheadedness haven't taken the plunge yet.
May I ask where you get that "show text" node?
If you have the manager installed, search for a node pack called "ComfyUI custom scripts"; it's made by someone named "pythongosssss" (might be a few s more or less). It has a bunch of useful nodes and other features.
Does anyone know how to use the refiner in CUI? I haven't found it, but I think there is a workaround.
I watch CUI videos on Scott's YouTube channel... while he has a series, I wish he made it more organized, like txt2img, img2img, IPAdapter, ControlNet, ...
Thanks man! Found the time to work through the tutorial now. Needed to watch other ones for the installation first, but then went with your link. I set up a basic workflow with 3 different outputs. It works faster and more smoothly than A1111 did in my memory, and personally I think the nodes and cables are waaaaay more fun.
When I copied the output windows two times (the 4 in a row), for whatever reason it did not connect some of the cables, but the ones that were missing were marked with a red circle. So I connected them again manually. I find this visual cue pretty nice; it shows that they thought about good UX (at least so far).
I haven't gotten to upscalers yet, but this is what my session looks like right now. I've worked with other software before that also makes use of complex routings and modules/nodes, so working this way feels like home to me. And like I said, it's a lot of fun! Feels more personal than the clunky A1111 UI.
[Attachment 3307890]
> Unfortunately my computer still sometimes (not always) shuts off when I am using the upscaler. But now, because of the nodes, I can see exactly where it happens: at the very last second, when the upscaled picture reaches 99% and would finally appear in the image view node / be saved on the computer.
> Really weird. I previously thought it might have been a hardware-related issue, but this seems like it crashes when SD tries to finalize/create the file.

You can troubleshoot it further: rather than saving to a file, you can keep the workflow at the preview level. Then disable the saving node and test the workflow again.
I will try that later, thanks! So instead of the Save Image node I just replace it with a Preview node?
me3 answered it properly.
Did you check the temperature? Pushing out the image is the part where your GPU/CPU is stressed, and if the temperature goes over a certain limit (sometimes set in the BIOS) the PC shuts down to prevent damage.
No, Preview Image works the same as Save Image, but the image is stored in a temp folder that is cleaned every time you run CUI.
Not with a tool, but by hand it felt warm but not overheated (not so hot that you'd feel pain on your fingertips). It also only crashes when the upscaler is about to reach 100%, so it would be too much of a coincidence for it to always overheat right at that second.
If this happens with ComfyUI, which upscale node is it?
So what am I supposed to do to troubleshoot that further?
Edit: People suggest adding "--lowvram" somewhere, but they never mention where (I only found a thread about macOS, but I am on Windows).
The KSampler again; here it crashes at 99%.
> --lowvram you add when launching Comfy: if you start it through the command line, you add it after the bat file name; if you start it by double-clicking the bat, you need to edit it slightly and add the option at the end of the line launching main.py.

I just read a post that claims that "newer" (7 months old) versions of ComfyUI automatically run in low-VRAM mode for low-VRAM cards, so I think this isn't required anymore.
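For reference, on a typical Windows portable install the edited launch line would look something like this; the .bat file name and paths are assumptions based on the standard portable layout, not taken from the thread:

```shell
:: run_nvidia_gpu.bat (typical portable-install launcher; your file name may differ)
:: --lowvram goes at the end of the line that launches main.py
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
```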
There can be some memory spikes at the end of sampler operations, so I guess there's a chance it's related to a VRAM overflow or an offload to RAM. There could be some kind of memory access violation, but I'm not sure why/how.
[Attachment 3310601]
> There can be some memory spikes at the end of sampler operations, so i guess there's a chance it's related to a vram overflow or an offload to ram.

Thanks! I downloaded Nvidia Studio instead of the Game Ready driver after my last post, and for the past 6 generations I've had no crash so far. Fingers crossed. If it happens again, I'll try your advice!
If your card is Nvidia, I'd recommend updating the driver and checking what the "memory overflow" setting is set to.
In the 3D settings of the Nvidia Control Panel there should be a setting called something like "CUDA system fallback policy".
System fallback allows overflow from VRAM to RAM if you "run out"; no fallback obviously means you get an OOM error when running out of VRAM.
You can set this for Comfy specifically by using the program-specific settings and adding the python exe used by Comfy. This is possibly found in the python_embeded folder in Comfy.