I honestly forgot about that because I installed it so long ago, but you are 100% correct. I think a lot of the components the Manager tries to install require Microsoft Build Tools as a prerequisite. If you give it another go, do start by installing those runtimes.
I am on my tenth CUI install because there were issues halfway through each of the previous ones.
So, the base kinda needs to look like this to have a decent chance of using CUI's advanced nodes:
View attachment 3133838
I can't get face swap to work - wf attached. Any ideas how to start troubleshooting? It just doesn't work and there are no error messages to look up.
View attachment 3133850

Just something I noticed when looking at the image, so I might be misreading things, but it says "enabled: off" in the node.

"Enabled: off"? - not the clearest thing ever. How to tell you're working with an open source project!
Further to this, here is where you combine a file load and file preview into a single node.
Meaning you can add a file preview to any node with an image, immediately addressing some clutter.

Yea, that's a brilliant feature.
The next candidates are the face restore modules and all those VAE patches. Then LORAs and the checkpoint, controlnets, on and on.
View attachment 3134641

You're possibly aware, but there are several "lora stacking" nodes that let you easily have multiple loras in the same node with slightly different options. There's one in comfyroll and one in efficiency, I believe; can't remember if there was more atm.

> Sure, here's my HS2-IRL babe converter. Ping me again in a week or so, it should become better still as this is massively WIP and has two or three bugs. Still, completely fit for purpose as of right now.

Hey Sepheyer, sorry to "ping" you, but I've just seen this post from you, and as I'm currently trying to get something like this working right now, I thought I'd just ask if you're willing to share your workflow for how you'd achieve that?
So here's what I'm trying to achieve: take any DAZ-rendered scene from a game and turn it into a more photo-like style.
I've watched so many ComfyUI tutorials in the past couple of days, and I remember seeing one in which a node was used to generate a "positive prompt" for any loaded image (but of course I can't find that specific tutorial anymore). I don't want to enter any prompt manually; I just want to load any rendered (or even drawn?) image, extract a "positive prompt" out of it, and then feed it into a sampler using a photorealism model to basically generate a "real" copy of that image. I might need to add a "combined/concatenated" positive prompt in case I need to add some specifics, but I want to avoid that as much as possible.
I did see a lot of tutorials doing quite the opposite (turning a photo into a drawing or pencil sketch), but I haven't found a single one doing it the other way around. Well, I found one workflow on CivitAI that looked promising, but it was based on text-2-image and not image-2-image...
If you could share any details on how you would do that, that would be awesome!
Thanks a lot in advance.
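In ComfyUI, the image-to-prompt step is usually handled by an interrogator/tagger custom node (BLIP- or WD14-style nodes, for example), and the "combined/concatenated" part can be as simple as joining the auto-generated caption with your manual additions. A hypothetical sketch of that concatenation (the function name and tag handling are my own, not from any particular node pack):

```python
def combine_prompt(auto_caption: str, extra_tags: list[str]) -> str:
    """Concatenate an auto-generated caption with manual tags,
    skipping tags the caption already contains."""
    parts = [auto_caption.strip().rstrip(",")]
    for tag in extra_tags:
        if tag.lower() not in auto_caption.lower():
            parts.append(tag)
    return ", ".join(parts)

# e.g. a caption from a tagger node plus a few manual style tags:
# combine_prompt("a woman in a red dress standing in a kitchen",
#                ["photorealistic", "red dress", "soft lighting"])
# -> "a woman in a red dress standing in a kitchen, photorealistic, soft lighting"
```

A "Text Concatenate"-style node can do the same join inside the graph, so the manual part stays optional.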
This thing can do hair, faces, clothes:

And here's yet another question: it's quite easy and fun to replace faces in images, and I'm literally blown away by how easy it was. I replaced some faces in memes with some friends' faces while streaming to them on Discord, and we had so much fun.
But the method I used so far only replaces the actual face, not the hair. Is there any way to replace both the face and the hair in an image, or do I have to use the mask feature to achieve that?
Again, thanks a lot in advance to whoever has some advice. And sorry if these are still noob-ish questions; I've searched through this thread (and other tutorials) but wasn't able to find a working solution...
Can you post your face swap w/f?

Unfortunately I haven't saved my workflow, but I will try to recreate it and then post it. Thanks a lot for your workflows though! I will test them as soon as I get some more time.
So cool. I'm very happy to see my lora being used. Those skips and jumps make me think of old silent black-and-white movies, Charlie Chaplin etc. You could say it's an artistic choice and retro vintage style. There, fixed..

Going by the timestamps on the first and last image, it took over 15 hours to generate all the images, so next time I'll try something shorter.
So the basic idea was to take a video/clip of someone "dancing", split that into frames, and process those to get the poses. In this case that gave over 1400 pose images; some had to be cleaned out due to bad capturing or other issues, leaving slightly below 1400.
Then I'd use those pose images to generate a character and hope things worked out. Since the pose is all you really care about, you can keep a fixed seed, but that still leaves quite a lot of variance in the output, so keeping a simple background is a bit of a challenge. In hindsight I probably should have used a lora of a character with a completely fixed look, including the outfit, but as this was just intended as a concept/idea test it doesn't matter that much.
I'd intentionally set things up so that I had each pose as a separate file and kept every generated image in the same numerical order, not depending on counters/batch processing, in case things broke or I had to redo a specific image.
Looping through each pose is simple enough anyway, and not loading everything into memory also helps with potential OOM issues.
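A minimal sketch of that per-pose file handling (the directory layout, extension, and padding width are my assumptions, not details from the original workflow):

```python
from pathlib import Path

def plan_outputs(pose_dir: str, out_dir: str, pad: int = 5) -> list[tuple[Path, Path]]:
    """Pair each pose image with a zero-padded output name so the generated
    frames stay in the same numerical order as the poses, even if some
    frames have to be regenerated later."""
    poses = sorted(Path(pose_dir).glob("*.png"))
    return [
        (pose, Path(out_dir) / f"frame_{i:0{pad}d}.png")
        for i, pose in enumerate(poses)
    ]

# Each pose is then processed one at a time, so a crash or a bad frame
# only means redoing that single file, not rerunning the whole batch.
```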
To save time, and to test LCM while I was at it, I used just a normal SD1.5 model with an LCM weight lora, so images were generated in just 5 steps; same with the face restore. In this case that lora did a fair job.
So after merging all the frames back together and some processing, I had a ~42sec 60fps clip of an AI woman roughly moving in the expected way, with some additional arm swinging and head warping due to not fixing enough of the pose images and the prompt.
I can't post the full file on the forum due to size, so I'm adding a downscaled 24fps version and 2 full-size images. There are odd jumps/cuts and movements due to frames having to be cut from the poses or bad generations. This wasn't a test of how perfect it could be, but of "will it work", so I didn't bother fixing all those things. And tbh, with 1400 poses and the same number of images, I'd rather not go over all of them multiple times just for this type of test.
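The merge-back step is typically just ffmpeg over the numbered frames. A sketch of the command construction (the frame-name pattern and encoder flags are a typical choice I'm assuming, not necessarily what was used for this clip):

```python
def merge_cmd(frame_dir: str, fps: int = 60, out: str = "clip.mp4") -> list[str]:
    """Build an ffmpeg command that stitches zero-padded numbered frames
    (frame_00000.png, frame_00001.png, ...) back into a video clip."""
    return [
        "ffmpeg",
        "-framerate", str(fps),                    # input frame rate
        "-i", f"{frame_dir}/frame_%05d.png",       # numbered-frame pattern
        "-c:v", "libx264",                         # common, widely playable codec
        "-pix_fmt", "yuv420p",                     # needed for most players
        out,
    ]
```

The list form can be passed straight to `subprocess.run`; lowering `fps` here is also how you'd produce the downscaled 24fps version.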
There are at least some sections that aren't too bad.
View attachment 3136645
View attachment 3136647
View attachment 3136646
To keep this from becoming even more of a wall of text, I'll put the rest in spoilers should anyone care for details.
Credit to Mr-Fox for his Kendra lora; he usually has a download link in his sig. Though I can't really say these images do her justice, but any PR is good PR, right...
The key in that conversion workflow is the setting of the controlnet - how strong you want it and for how many steps it should be applied (between 0-100% of the steps). Both settings at maximum will get you an "exact" replica, while with lower settings CUI will use more of the text prompt rather than the controlnet and thus will "dream" more.
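A rough sketch of what those two knobs do per sampler step (parameter names like `strength` and the start/end percents mirror ComfyUI's advanced controlnet-apply node, but this is just an illustration, not its actual code):

```python
def controlnet_weight(step: int, total_steps: int,
                      strength: float = 1.0,
                      start_percent: float = 0.0,
                      end_percent: float = 1.0) -> float:
    """Return how strongly the controlnet conditions a given sampler step.

    Outside the [start_percent, end_percent) window the controlnet is off,
    so the sampler follows only the text prompt and "dreams" freely.
    """
    progress = step / total_steps
    if start_percent <= progress < end_percent:
        return strength
    return 0.0

# strength=1.0 applied over 0-100% of the steps -> near-exact replica;
# lowering strength or shrinking the window lets the prompt take over.
```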
// Edit: I was able to separate the hair from the model using your workflow, but I'm also still trying to figure out how to replace it in the original image...
Thanks again for your workflow -- I've tried it, but it's not doing what I want it to do: change a DAZ render into a "photo".
Can you post your DAZ render, please?
I've tried tuning the strength of the controlnet nodes, but it's doing all kinds of stuff, because (I assume) you still have to manually add a lot of prompts describing what you see in the original picture to get a somewhat decent result.
Here's a quick idea of how I want my workflow to look:
View attachment 3138236
I might need to add more nodes to the workflow to downscale/upscale/inpaint/outpaint the images etc., but you might get the basic idea of what I'm trying to achieve. The issue is the red node, which I know exists, but I don't know what it's called or which custom node pack I'll have to install to get it.
Sure -- it's an image from the game Lust Theory that I'm currently experimenting with: