Then do disable the saving node and test the workflow again.
I will try that later, thanks! So instead of the save image node, I just replace it with a preview node?
me3 answered it properly. May I ask where you get that "show text" node?
Does anyone know how to use the refiner in CUI? I haven't found it, but I think there is a workaround.
I watch CUI content on Scott's YouTube channel... he had a series, but I wish he had organized it more, like txt2img, img2img, IPAdapter, ControlNet, ...
Unfortunately my computer still sometimes (not always) shuts off when I am using the upscaler. But now, because of the nodes, I can see exactly where it happens: at the very last second, when the upscaled picture has reached 99% and would finally appear in the image view node / be saved to the computer.
Really weird. I thought previously it might have been a hardware-related issue, but this seems like it crashes when SD tries to finalize/create the file.
Did you check the temperature? Pushing out the image is the part where your GPU/CPU is stressed, and if the temperature goes over a certain limit (sometimes you set it in the BIOS), the PC shuts down to prevent damage.
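If you want to rule temperature in or out, one simple option (assuming an NVIDIA card with the normal driver tools installed) is to log it from a command prompt while a generation runs, for example:

    nvidia-smi --query-gpu=temperature.gpu,power.draw --format=csv -l 5

That prints a reading every 5 seconds, so you can see what the card was doing right before a shutdown.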
No, preview image works the same as save image, but the image is stored in a temp folder that is cleaned every time you run CUI.
Not with a tool, but by hand it felt warm, not overheated (not so hot that you'd feel pain on your fingertips). It also only crashes when the upscaler is about to reach 100%, so it would be too much of a coincidence if it always overheated right in that second.
So what am I supposed to do to troubleshoot that further?
If this happens with ComfyUI, which upscale node is it?
Edit: People suggest adding "--lowvram" somewhere, but they never mention where (I only found a thread about macOS, but I am on Windows).
The KSampler again; here it crashes at 99%.
--lowvram you add when launching Comfy: if you start it through the command line, you add it after the bat file name; if you start it by double-clicking the bat, you need to edit it slightly and add the option at the end of the line launching main.py.
I just read a post claiming that the "newer" (7 months old) version of ComfyUI automatically runs in low-VRAM mode for low-VRAM cards, so I think this isn't required anymore.
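If you do still want to try the flag: in the standard Windows portable install the bat usually launches main.py with a single line, so the edited line would look roughly like this (a sketch; the exact path and existing options depend on your install):

    .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram

Everything before --lowvram is whatever was already on that line; the flag just goes at the end.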
There can be some memory spikes at the end of sampler operations, so I guess there's a chance it's related to a VRAM overflow or an offload to RAM. There could be some kind of memory access violation, but I'm not sure why/how.
If your card is Nvidia, I'd recommend updating the driver and checking what the "memory overflow" setting is set to.
In the 3D settings of the Nvidia control panel there should be a setting called something like "CUDA system fallback policy".
System fallback allows overflow from VRAM to RAM if you "run out"; no fallback obviously means you get an OOM error when running out of VRAM.
You can set this for Comfy specifically by using the program-specific settings and adding the python exe used by Comfy. This is possibly found in the python_embeded folder in Comfy.
Thanks! I downloaded Nvidia Studio instead of the Game Ready driver after my last post, and for the past 6 generations I have had no crash so far. Fingers crossed. If it happens again, I'll try your advice!
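On the python exe part: a quick way to confirm the exact path to add in the control panel (a minimal sketch, assuming the portable install and that you can run the embedded interpreter from a command prompt; the script name is just an example) is to let that interpreter print its own location:

    # save as which_python.py and run it with: python_embeded\python.exe which_python.py
    import sys
    print(sys.executable)  # full path of the python.exe that is currently running

Whatever path it prints is the one to add under the program-specific settings.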
For those who like GPT chat, this is a simple workflow for using a GPT model in CUI.
It's also possible to connect it to an image sampler to generate images, but I'm sharing a really basic workflow so you can personalize it as you want.
Note: after installing, you need to download a GPT model in GGUF format and place it in "ComfyUI\models\GPTcheckpoints" (example path below).
A good place to download GGUF models is [link].
What is the name of the pack containing the required node, and how do you know which model can generate NSFW content?
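For the folder step above, the downloaded file just sits directly in that directory, so the result looks roughly like this (the filename is only an example; any GGUF model is placed the same way):

    ComfyUI\models\GPTcheckpoints\mistral-7b-instruct-v0.2.Q4_K_M.gguf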
I would recommend the "efficient pack"; it may save you some nodes, the VAE decode for example.
With the Nvidia Studio driver instead of the Game Ready one I've been crash-free for the last 20+ generations. I hope it stays like that.
Just to try and narrow down what is going on, you could try replacing the VAE decode node with a tiled version; leaving that at a 512 tile size should remove any chance of VRAM overflow, and it won't really have any negative impact.
Edit: Nvm, crashed again. It happened again at the very end, when the KSampler is done with upscaling and the VAE decode creates the final image.
I'll have to look into this tomorrow.
Yes, I read about that tiled VAE decode node, and I want to give it a try later. Right now I found out that the installation of another program had changed my computer's power plan to a custom one. So I have now switched to the general "balanced" power plan for AMD Ryzen CPUs to see if that changes something. I could also try the full power plan if that doesn't work out. And then, next up, I'd try the tiled VAE.
Another thing would be to remove both the save image and the VAE decode and just use a save latent node; that way you remove any VAE step and you still have a "save to output folder" action.
Not sure if you've already tried it, but it could be worth seeing how things go in a different browser. Edge, Chrome and Opera are closely related, but they might have some small difference that is involved; Firefox will have more underlying differences. So there are some options.
Hmm, good idea!
Have you tried searching the ComfyUI GitHub issues and/or discussions for your card? There might be others that have had some kind of issue; it might not be the same kind, but the "fix" might be similar.
Not for my card specifically, but for the issue of shutting down (some people have that even with the newest RTX 3090 models). The solutions seem to be individual for all of them, though: one person had another program causing the crashes that I don't have, for someone else it was the power plan being changed to "balanced" because of his charging cable (however that's possible), and another one had some programmer extension going on that I don't even have.
This seems to work! At least I made 31 upscaled generations without any shutdown.
Does that mean my GPU did not have enough VRAM, even though my task manager showed 5/8 GB in use at peak?
Also, is there a downside that comes with the tiled decode node? What does it do differently?
[link] is the node needed, and NSFW depends on the model you download. I downloaded a random model with a low size (4 GB) and it seems to work with NSFW, but there are models over 30 GB, which is too big for me.
Is kkw-ph1 & its neg embeddings yours?
Don't know if it's standard, but in my CUI, if there is not enough memory, it automatically switches to tiled VAE.
Also important is the driver version: with the latest driver you have the option to choose whether to redirect memory to RAM when there isn't enough; you'll find it in the Nvidia control panel, it's called CUDA Fallback.
Was looking for a small-breasts model of women, but this looks barely legal: some questionable sample images if you scroll down far enough. Got a feeling a few youngsters' images were part of the training. It got banned on other model-sharing sites. What say you?
This passes f95's standards for renders being post-puberty.
There generally isn't any downside to the tiled version; it can even be faster in many cases, even if you have enough VRAM. All it does is break the job down into pieces and do one piece at a time.
I doubt it's that you ran out of VRAM, considering you have more than me, and it does have a "fallback to tiled" if you run out.
But if reducing the load is enough to fix your problem, it's at least a very simple fix. Unfortunately it doesn't help much with the "why".
It's a bit hard to help debug, since you're running a much newer card and setup than me, and that means a whole bunch of new "oddities and quirks".
It could be a driver thing, it could be that your card requires some kind of additional Python lib, or just some setting that needs tweaking.
If you haven't already, it could be worth looking up more general "setup instructions" for your card in relation to SD and AI.
Or maybe someone here has the same card and can offer more insight.
There actually is a downside to using the tiled version: it shifts the colors a little in the resulting image, adding slightly but noticeably to the contrast and saturation on each pass. I actually reported that as a bug about a year ago to Comfy's developer, and he said it was a known issue but he didn't have a fix for it... and so it was never fixed.
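Since "what does the tiled decode actually do" came up: here is a rough sketch of the idea in plain PyTorch. It is not ComfyUI's actual implementation; decode_fn, the tile size mapping and the fake decoder are stand-ins just to make it runnable. The point is only that the latent gets split into tiles and each tile is decoded on its own, so only one tile's activations have to fit in VRAM at a time; real implementations typically also overlap and blend the tiles to hide seams.

    # Rough illustration of tiled VAE decoding (not ComfyUI's code).
    import torch

    LATENT_SCALE = 8  # SD VAEs upscale latents 8x per spatial dimension

    def tiled_decode(latent, decode_fn, tile=64):
        """latent: [B, C, H, W] tensor; decode_fn maps a latent tile to pixels.
        tile=64 latent pixels roughly corresponds to the 512-pixel tile setting."""
        b, c, h, w = latent.shape
        out = None
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                piece = latent[:, :, y:y + tile, x:x + tile]
                decoded = decode_fn(piece)  # only this tile is processed at once
                if out is None:
                    # allocate the full-size output once the channel count is known
                    out = torch.zeros(b, decoded.shape[1], h * LATENT_SCALE, w * LATENT_SCALE)
                ys, xs = y * LATENT_SCALE, x * LATENT_SCALE
                out[:, :, ys:ys + decoded.shape[2], xs:xs + decoded.shape[3]] = decoded
        return out

    if __name__ == "__main__":
        # stand-in "VAE" that just upscales 8x, so the sketch runs without a model
        fake_decode = lambda z: torch.nn.functional.interpolate(z, scale_factor=LATENT_SCALE)
        latent = torch.randn(1, 4, 128, 128)   # latent for a 1024x1024 image
        print(tiled_decode(latent, fake_decode).shape)  # torch.Size([1, 4, 1024, 1024])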