[Stable Diffusion] Prompt Sharing and Learning Thread

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
When I open ControlNet in ComfyUI, there's nothing to choose in the dropdown menu. I'd like to use ControlNet; how do I download it via the Manager? I found GitHub pages, but I don't know how current they are. And in the Manager I can't seem to find a package that just installs all the basic parts of ControlNet.

I want to use OpenPose.

Is this the newest one?

I'm installing this right now through "install by git URL"...

Edit: Doesn't work. It said I had to restart for the changes to apply; I did, and hit refresh too, but the ControlNet node still shows nothing in its menu.
 

Synalon

Member
Jan 31, 2022
225
663
When I open ControlNet in ComfyUI, there's nothing to choose in the dropdown menu. I'd like to use ControlNet; how do I download it via the Manager? I found GitHub pages, but I don't know how current they are. And in the Manager I can't seem to find a package that just installs all the basic parts of ControlNet.

I want to use OpenPose.

Is this the newest one?

I'm installing this right now through "install by git URL"...

Edit: Doesn't work. It said I had to restart for the changes to apply; I did, and hit refresh too, but the ControlNet node still shows nothing in its menu.
You may also need to download the model for it in the Manager.
 

Synalon

Member
Jan 31, 2022
225
663
But it doesn't show up when I type in "openpose".
Rather than trying to hit the exact wording for the model in the search bar, I find it's easier to just click "Install Models", scroll down looking for ControlNet under the column called "Base", and download exactly what I need that way.

Screenshots of the models are in the spoilers. If the dropdown still looks empty after downloading, see the quick folder check below.
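
A minimal sketch of that check, assuming the default install layout (the Manager downloads ControlNet models into ComfyUI/models/controlnet; portable builds keep the same structure):

Code:
from pathlib import Path

# default folder ComfyUI scans for ControlNet models
controlnet_dir = Path("ComfyUI/models/controlnet")

# whatever is listed here is what fills the Load ControlNet dropdown after a Refresh
for f in sorted(controlnet_dir.iterdir()):
    if f.is_file():
        print(f.name)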
 
  • Like
Reactions: Fuchsschweif

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Rather than trying to hit the exact wording for the model in the search bar, I find it's easier to just click "Install Models", scroll down looking for ControlNet under the column called "Base", and download exactly what I need that way.

Screenshots of the models are in the spoilers.

Now it works. I was looking under nodes instead of models - thanks!
 
  • Like
Reactions: Synalon

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
Hmm. The OpenPose input isn't really being respected by ComfyUI...

I took this:
1719345846535.png

which looks like this:

1719345864563.png

What I got:

1719345885093.png

How can I "force" ComfyUI to actually use the poses I feed it?

1719345919287.png
 

Synalon

Member
Jan 31, 2022
225
663
It looks like you are using the t2iadapter_openpose model; that's a different ControlNet thing.

The one you need should be control_v11p_sd15_openpose_fp16.
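
The distinction matters when loading weights, too; in diffusers terms the two are entirely separate classes, so a t2iadapter_* file won't behave like a ControlNet. A rough sketch (the hub IDs are assumptions):

Code:
import torch
from diffusers import ControlNetModel, T2IAdapter

# full ControlNet, the kind the v11 models are
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)

# T2I-Adapter: a lighter, architecturally different model; its weights
# won't load into nodes or pipelines that expect a ControlNet
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_openpose_sd14v1", torch_dtype=torch.float16
)
 
  • Like
Reactions: Fuchsschweif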

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
It looks like you are using the t2iadapter_openpose model; that's a different ControlNet thing.

The one you need should be control_v11p_sd15_openpose_fp16.
Hmm, still weird results!

1719351339718.png

1719351366128.png

The pose clearly has someone crawling on all fours with both legs apart, not crossed over... I also don't have anything in my prompt regarding the pose that could interfere with the pose input.

1719351400152.png
 

Fuchsschweif

Well-Known Member
Sep 24, 2019
1,145
1,957
OpenPose isn't perfect; it has a lot of issues determining front and back legs, etc.
Really? When people do it on YouTube they always seem to get fantastic results.

1719353018049.png

Here, they hit every single pose correctly. Isn't that the whole idea, that SD can't miss because it follows the exact limb layout? Maybe we're just not skilled enough...
 

Synalon

Member
Jan 31, 2022
225
663
Really? When people do it on YouTube they always seem to get fantastic results.

View attachment 3770533

Here, they hit every single pose correctly. Isn't that the whole idea, that SD can't miss because it follows the exact limb layout? Maybe we're just not skilled enough...
Notice that most of the time the person is standing, facing forward, with the leg positions clearly defined.
This is also a workflow coming from a video, so it has frames before and after to help keep the position clear.
Also, without a clear prompt, some checkpoints will fail to understand simple things like "slightly turned, looking back over her shoulder", so LoRAs are needed for some of those.

Your pose on all fours is already something Stable Diffusion will fuck up given the chance. From the OpenPose skeleton it looks like her back should be turned slightly towards the viewer, with her looking back.

Showing her back while looking over her shoulder confuses checkpoints a lot.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Really? When people do it on YouTube they always seem to get fantastic results.

View attachment 3770533

Here, they hit every single pose correctly. Isn't that the whole idea, that SD can't miss because it follows the exact limb layout? Maybe we're just not skilled enough...
High chance of duds, even more so on anything crawling, bending, etc. I used OpenPose heavily early on but eventually moved to other models: depth, canny and tile. Maybe you've experimented with those already, but I'd venture to say that if you haven't, you'll end up going with those over OpenPose once you try the alternatives.
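
For what it's worth, canny needs no special tooling to prepare; a minimal sketch with OpenCV (the file names are placeholders):

Code:
import cv2
import numpy as np
from PIL import Image

# read the reference render and extract edges
img = cv2.imread("pose_reference.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)  # low/high thresholds; tune per image

# ControlNet expects a 3-channel control image
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
control_image.save("pose_canny.png")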
 

devilkkw

Member
Mar 17, 2021
324
1,094
Really? When people do it on YouTube they always seem to get fantastic results.

View attachment 3770533

Here, they hit every single pose correctly. Isn't that the whole idea, that SD can't miss because it follows the exact limb layout? Maybe we're just not skilled enough...
This is a simple standing pose and it works great, but for complex poses you need some tricks.
I usually use OpenPose plus a depth map, and for more complex poses I add a normal map.
I made the pose in DAZ3D.
This is what works well for me when posing with SD 1.5.
 

devilkkw

Member
Mar 17, 2021
324
1,094
This is what I mean by multi-ControlNet.
kkwmulticnet.jpg-w.jpg

I made a pose in DAZ3D, did a fast render at the same resolution (not required, but I also tried i2i), loaded it, and applied it.
I use a resolution of 1024 for each ControlNet node, because my image is 1024x1280.

The result images use a simple prompt: nude old woman in kitchen.
Results are shown on different checkpoints.
The most important thing with this method is how you combine the ControlNets and their strengths; in my tests the maximum total strength is between 1.0 and 1.2.
The order of the conditioning combine also matters; the best I've found is shown in the image:

Schematic view with strengths:

depth (0.44) ----+
openpose (0.40) -+-> combine --+
normal (0.30) -----------------+-> combine --> to positive sampler conditioning

Also, I don't use Advanced ControlNet, because it affects the negative prompt as well, and I've never gotten good results from it with multiple ControlNets.

This slows generation by about 2x, but it lets you use a simple prompt without needing to describe the pose.
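
Outside ComfyUI, the same three-net stack can be sketched with diffusers; this is only a rough equivalent, and the hub IDs, base checkpoint and image paths are assumptions:

Code:
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# assumed v1.1 ControlNet weights from the lllyasviel hub
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_normalbae", torch_dtype=torch.float16),
]

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

# control images rendered from the DAZ pose (placeholder paths)
depth_img = load_image("pose_depth.png")
pose_img = load_image("pose_openpose.png")
normal_img = load_image("pose_normal.png")

image = pipe(
    "nude old woman in kitchen",
    image=[depth_img, pose_img, normal_img],
    # per-net strengths; total kept around 1.0-1.2 as suggested above
    controlnet_conditioning_scale=[0.44, 0.40, 0.30],
).images[0]
image.save("result.png")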
 

Synalon

Member
Jan 31, 2022
225
663
00313-3530447844.png

Very quick using Forge. I suppose I could spend more time refining it to make it better, but I'm lazy.

I used depth hand refiner, depth anything, and lineart. There's still a lot wrong with it, but as a base to work from that took less than 2 minutes, it's not bad.
 
  • Like
Reactions: Jimwalrus

felldude

Active Member
Aug 26, 2017
572
1,695
I have been playing around with Pony for a while, but I honestly might switch back to XL.

I did a 2k training on 332 images of females flashing the camera in public places (it was 600 images; I pruned the ones I thought wouldn't train well).

It took me over an hour just to BLIP-2 caption them. I was impressed with the BLIP-2 captioning on high-quality, highly complex images; I would not use it on a white-background image, however.
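
Batch captioning with BLIP-2 through transformers looks roughly like this; a sketch, where the 2.7B OPT variant and the paths are assumptions (pick whatever fits your VRAM):

Code:
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_id = "Salesforce/blip2-opt-2.7b"  # assumed variant
processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

image = Image.open("dataset/0001.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(out[0], skip_special_tokens=True).strip())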

Here are some test images from 1024 up to 2k on the XXXL-V3 model

ComfyUI_01278_.png ComfyUI_01274_.png ComfyUI_01273_.png ComfyUI_01272_.png ComfyUI_01264_.png ComfyUI_01258_.png
 

crnisl

Active Member
Dec 23, 2018
752
575
It took me over an hour just to BLIP-2 caption them. I was impressed with the BLIP-2 captioning on high-quality, highly complex images.
Have you tried CLIP Interrogator?
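
In case it helps, the pip package is about this simple to drive; a minimal sketch, with the default ViT-L CLIP model assumed:

Code:
from PIL import Image
from clip_interrogator import Config, Interrogator  # pip install clip-interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))  # the SD 1.5-era CLIP
image = Image.open("sample.png").convert("RGB")
print(ci.interrogate(image))  # caption plus prompt-style modifier tags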


I made a pose in DAZ3D, did a fast render at the same resolution (not required, but I also tried i2i), loaded it, and applied it.
Do you have any ideas, people, on how to turn 3D renders into highly realistic, photo-like images, but without losing the similarity/consistency with the original face and colors?
What I use so far is just inpainting the eyes and mouth with RealVisXL, then film grain.

But maybe you have much cleverer ideas, or maybe even some magic ComfyUI configs?
Something to make the textures of skin/hair/clothes more realistic?
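
For reference, the usual low-denoise retexturing suggestion is a single i2i pass over the render; a rough sketch, where the RealVisXL hub ID, the paths and the strength value are all assumptions:

Code:
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16  # assumed hub ID
).to("cuda")

render = Image.open("daz_render.png").convert("RGB")
out = pipe(
    prompt="photo, detailed realistic skin texture, natural hair, fabric detail",
    image=render,
    strength=0.3,  # low denoise: re-textures surfaces while keeping composition, face and colors
).images[0]
out.save("retextured.png")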
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Have you tried clip interrogator?




Do you have any ideas, people, on how to turn 3D renders into highly realistic, photo-like images, but without losing the similarity/consistency with the original face and colors?
What I use so far is just inpainting the eyes and mouth with RealVisXL, then film grain.

But maybe you have much cleverer ideas, or maybe even some magic ComfyUI configs?
Something to make the textures of skin/hair/clothes more realistic?
We had a scorching debate about likeness not six months ago, I think. The eventual consensus was that likeness, face-wise, is NOT retained per se. To keep the face one needs to borderline "copy" it over. So, depending on your threshold for likeness, this can work either very well or not at all:
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12670868
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12750138
https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12669146
 

felldude

Active Member
Aug 26, 2017
572
1,695
Have you tried CLIP Interrogator?
I can just barely run BLIP-2 by itself; for natural-language captions I found it far superior to BLIP.
I don't think I could run CLIP interrogation on top of BLIP-2.

Before they took down the LAION-5B reverse image search, I used it to get exact prompting on images, assuming they were used in training, or the closest trained source.

I wish BLIP-2 gave a summary of the description terms like the WD14 tagger does.
I wish the BLIP-2 gave a summary of the description terms like the WD14 tagger does.