Sepheyer
Well-Known Member
> Qwen Image-to-Image
> Qwen Edit is out
> Looks like a killer

Trying this with my 3090 but get a crash when it tries to load the "qwen_image_edit_fp8_e4m3fn" 20B model. That's even after trying to offload the CLIP text encoding to CPU (which seems to work). 20B params ~ a 20GB safetensors file on disk... seems like it takes up too much VRAM.
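(Quick aside on the arithmetic in that post: a back-of-the-envelope sketch of why a ~20B-parameter model gets tight even on a 24 GB card. The figures are ballpark estimates, not measurements.)

```python
# Back-of-the-envelope: weight footprint of a ~20B-parameter model.
# FP8 stores one byte per parameter, fp16/bf16 two bytes, and that is
# before the text encoder, VAE, activations and CUDA overhead are added,
# which is why even a 24 GB card can still run out of VRAM.
PARAMS = 20e9

for precision, bytes_per_param in [("fp16/bf16", 2), ("fp8_e4m3fn", 1)]:
    weight_gb = PARAMS * bytes_per_param / 1024**3
    print(f"{precision}: ~{weight_gb:.1f} GB of weights")
# fp16/bf16: ~37.3 GB of weights
# fp8_e4m3fn: ~18.6 GB of weights
```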
> Trying this with my 3090 but get a crash when it tries to load the "qwen_image_edit_fp8_e4m3fn" 20B model. That's even after trying to offload the CLIP text encoding to CPU (which seems to work). 20B params ~ a 20GB safetensors file on disk... seems like it takes up too much VRAM.
> What hardware are you running on? 5090 with 32GB vram?

I have a 4070 with 12GB.
> I have a 4070 with 12GB.
> But my CUI is the "portable version" where the install is insulated from the rest of the software. All other versions of CUI (git and desktop) were giving me all kinds of issues, including memory glitches, that I eventually gave up on.

Yeah, I'm already using "ComfyUI_windows_portable", based on your previous advice and my own experience with python/pytorch etc. library conflicts.
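(Side note: if you want to confirm the portable build's insulated environment is really the one doing the work, here is a minimal check you can run with the embedded interpreter. The python_embeded folder name is the standard portable layout; adjust the path if yours differs.)

```python
# Sanity check for the portable install: confirm which interpreter and
# torch build ComfyUI will actually use. Run it with the embedded Python:
#   ComfyUI_windows_portable\python_embeded\python.exe check_env.py
import sys

print("Interpreter:", sys.executable)  # should live inside python_embeded

try:
    import torch
    print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
except ImportError:
    print("torch is not installed in this environment")
```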
> I have a 4070 with 12GB.
> But my CUI is the "portable version" where the install is insulated from the rest of the software. All other versions of CUI (git and desktop) were giving me all kinds of issues, including memory glitches, that I eventually gave up on.
> So, if you don't have CUI PV, do give it a shot:
> [link]
> Qwen Image and Qwen Edit are quite the breakthru for solo-dev game art, giving arguably pretty consistent results out of the box
> [attached image 5186924]

Oh, built-in bewbz? Sold! I thought it was going to be like Flux, where the human body is as unknown as it is to a religious person pre-marriage, so I never checked it out.
> Don't really understand how you get the FP8 version running on a 12GB card... is that really correct? Or do you have a different model instead of the "qwen_image_edit_fp8_e4m3fn.safetensors" mentioned in the link you posted ([link])?

Indeed, I am running FP8 - the few workflows I posted here show there is no mistake. So, in theory, you should be able to run it too - I trust it is merely a matter of finding the right CUI switches?
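(Aside on the "right switches": in ComfyUI this usually comes down to the low-VRAM launch options plus the FP8 weights. If you would rather sanity-check the model outside ComfyUI, below is a minimal sketch using the diffusers port with CPU offload. The QwenImageEditPipeline class and the Qwen/Qwen-Image-Edit model id are taken from recent diffusers releases and are assumptions worth verifying against your install.)

```python
# Minimal sketch: Qwen-Image-Edit through diffusers with CPU offload,
# which keeps only the active layers on the GPU instead of all ~20 GB.
# Assumes a recent diffusers/transformers install; verify the pipeline
# class and model id against your version before relying on this.
import torch
from PIL import Image
from diffusers import QwenImageEditPipeline

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit",            # model id as published on the Hub (assumption)
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()        # or enable_sequential_cpu_offload() for very low VRAM

image = Image.open("input.png").convert("RGB")
result = pipe(
    image=image,
    prompt="change the dress to red",
    num_inference_steps=30,
).images[0]
result.save("edited.png")
```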
> My dudes, WAN 2.2 is the tits. Zip file contains the workflow. This is image-to-video, eight minutes to generate, needs a few large models downloaded (about 20GB total).

I tried your workflow and got an error about:

"Missing Node Types
When loading the graph, the following node types were not found"
- SaveVideo
- CreateVideo

Can you please let me know what node(s) or extensions you installed for the two node types? It's mentioning Comfy-Core but I can't find it anywhere...

> I tried your workflow and got an error about "Missing Node Types" - SaveVideo and CreateVideo.

Looks like your CUI needs to be updated. If you run a "portable" version, then run the update from the respective folder - the latest will be downloaded and you'll have the WAN nodes and workflows added to the templates list.
> Hey, I'm currently learning Stable Diffusion, and I would like to know how to generate multiple characters consistently with their details intact. I'm having a hard time learning that.

You can try either the
> Hi, I'm new to SD and I want to start making AI images and vids that are more on the beast/monster side. I have a laptop with a 1060 6GB. Would it be best for me to wait until I have a better PC, or can I start doing stuff already? I haven't installed anything yet, just in case.

You can start with what you have. Note, AI will put lotsa wear and tear on your card, to the point the laptop might stop working.
> You can start with what you have. Note, AI will put lotsa wear and tear on your card, to the point the laptop might stop working.
> Anyways, you do want at least 12GB VRAM and at least a 30X0-series card for proper AI work.
> Your current setup will allow you to run basic concepts from 2023; but vids are pretty much a no-go zone because of how long they take to render.

Might wait then, I only have this laptop for gaming and such. Thanks for the tips.
> Thank you everybody for the several examples of the possibilities of AI generation.
> During these last weeks, I've been playing around with A1111, ComfyUI, and InvokeAI. I think I will settle on ComfyUI. Some tests are already done, with SD and a bit of SDXL. My graphics card seems enough for reasonable generation times.
> But now that I've had a VN in draft for a while, I'm looking for some template for generating the characters. For now, I'm settling on the classic VN style (between manga and anime), or old-school VN. I will see later for the... "juicy" parts.

If your card allows it, you want the Illustrious-type checkpoints. They are the current leader in terms of the sweet spot for casual AI. And when you set charas against a white background, the images are just perfect for a VN. Additionally, it understands things like (wind:1.4) or (wind lift:1.4), which makes for nice effects. Plus there is a bunch of other goodies, too numerous to describe, that you will discover once you start working with Illustrious.
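(For anyone new to the (token:weight) syntax mentioned above: the number scales how strongly that phrase is weighted in the prompt, with 1.0 as the default. Below is a tiny standalone parser, just to illustrate how such prompts are read; it is not tied to any particular UI.)

```python
import re

# Minimal sketch of how "(phrase:weight)" prompt weighting is usually read:
# anything wrapped as (text:1.4) gets its weight scaled to 1.4,
# everything else keeps the default weight of 1.0.
WEIGHTED = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_prompt(prompt: str) -> list[tuple[str, float]]:
    parts: list[tuple[str, float]] = []
    last = 0
    for m in WEIGHTED.finditer(prompt):
        plain = prompt[last:m.start()].strip(" ,")
        if plain:
            parts.append((plain, 1.0))
        parts.append((m.group(1), float(m.group(2))))
        last = m.end()
    tail = prompt[last:].strip(" ,")
    if tail:
        parts.append((tail, 1.0))
    return parts

print(parse_prompt("1girl, white background, (wind:1.4), (wind lift:1.2), smiling"))
# [('1girl, white background', 1.0), ('wind', 1.4), ('wind lift', 1.2), ('smiling', 1.0)]
```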
> The new "Qwen Edit - 2509" is a monster. Killa from Manilla.
> Def opens up new horizons for compositions, couple poses, etc, etc.
> [attached images 5279985, 5280843, 5280920]

I'm new to Comfy. Qwen dragged me away from Forge. Could you share one of those workflows that has a reference image for QIE? The demo workflow works great for prompted edits, but I don't know how to add a reference/control image like you've done here.
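(Until a proper two-image workflow gets shared: one generic, low-tech workaround, not necessarily what was done in the post above, is to stitch the reference image next to the image you want edited, feed the combined picture to the single-image edit workflow, and say in the prompt which half is the reference. A minimal Pillow sketch:)

```python
# Generic workaround sketch (not the workflow from the post above):
# paste the reference image and the image to edit side by side, then
# send the combined canvas through a single-image edit workflow and
# describe in the prompt which half is the reference.
from PIL import Image

def stitch_side_by_side(reference_path: str, target_path: str, out_path: str) -> None:
    ref = Image.open(reference_path).convert("RGB")
    tgt = Image.open(target_path).convert("RGB")

    # Match heights so the two images line up on one canvas.
    height = min(ref.height, tgt.height)
    ref = ref.resize((int(ref.width * height / ref.height), height))
    tgt = tgt.resize((int(tgt.width * height / tgt.height), height))

    canvas = Image.new("RGB", (ref.width + tgt.width, height))
    canvas.paste(ref, (0, 0))
    canvas.paste(tgt, (ref.width, 0))
    canvas.save(out_path)

stitch_side_by_side("reference.png", "to_edit.png", "combined.png")
```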