[Stable Diffusion] Prompt Sharing and Learning Thread

Dec 12, 2021
92
43
One approach to consistent images is to first model them in DAZ / HS2 / other software and then run image-to-image with AI to add gloss, plus good old Photoshop. I think the consensus so far is that no AI-only method gives you true consistency, although some combinations of methods can get you reasonably close.

Here is a start of one such discussion about "converting" DAZ girls into any style you want: https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-12343859
I've never used those two so I'll try to find some alternative solution for now, thank you anyways :)
 

me3

Member
Dec 31, 2016
316
708
Having posted a lot of prompt/workflow-less images/clips lately, I thought I'd do a cleaned-up one to maybe give people ideas on how to do something. While this is from ComfyUI, the image itself should make it possible to see what would be needed to set it up in A1111, I hope.
This is basically the same setup I used to create the image in this post, and it can be used to layer or create "depth" in images. It obviously depends on what image you feed to each layer and how they are handled/applied. I used a bunch of math and composite nodes to create the colored masking image; this can obviously be done in other ways and in other shapes etc. E.g. if you put each "color/mask" next to the others left to right, you could use it for panoramic views or multiple characters, a bit like regional prompts but with "image inputs".

This is not a very tidy flow; I tried to keep things compact for the screenshot while still readable. IPAdapter nodes are colored by mask channel as a visual aid. I've never installed nodes through anything other than the Manager, so any missing ones should be found there. I've included the mask image in case anyone wants it.
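For anyone who'd rather script the colored masking image than build it from math/composite nodes, here is a minimal stdlib-only sketch of the left-to-right color-band layout mentioned above. The filename, band colors, and dimensions are placeholders; it writes a PPM, which you'd convert to PNG with any image tool before loading it in your workflow.

```python
def band_mask(width, height, colors):
    """Rows of RGB pixels: equal vertical bands left to right, one color per mask region."""
    band_w = width // len(colors)
    # last band absorbs any rounding remainder
    row = [colors[min(x // band_w, len(colors) - 1)] for x in range(width)]
    return [list(row) for _ in range(height)]

def save_ppm(path, pixels):
    """Write the pixel rows as a binary PPM file."""
    h, w = len(pixels), len(pixels[0])
    with open(path, "wb") as f:
        f.write(f"P6 {w} {h} 255\n".encode())
        for r in pixels:
            f.write(bytes(v for px in r for v in px))

# three color channels -> three IPAdapter "layers" (colors are placeholders)
save_ppm("mask_bands.ppm", band_mask(1024, 512, [(255, 0, 0), (0, 255, 0), (0, 0, 255)]))
```

Swapping the band axis (top-to-bottom instead of left-to-right) is a one-line change if you want stacked layers rather than a panorama.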

workflow.png mask_0001.png

Edit: Adding an image made with the workflow, just dropping 3 images I had into it.
_gmtmt_00022_.jpg
 

me3

Member
Dec 31, 2016
316
708
Before Stable Diffusion, I always wanted my 3D models to have a painted / artistic aesthetic, as if hand drawn.
I would have expected a solution to be available through custom shaders, but I was always disappointed with the results.

AI + rotoscoping feels like the more likely technology to get there. Imagine extremely detailed hand-drawn art, animated and rendered at the speed of 3D; if that can be achieved, it's almost the best of both worlds.

--

Before SD this was impossible.

Animation in old Disney movies always had extremely detailed backgrounds but simple, flat-shaded characters/animation, because those had to be drawn frame by frame. If a single frame takes an artist three weeks, you can't possibly achieve 24 frames per second, and the likelihood of consistency drops dramatically as well.

This would be something AI could do (hopefully) that is essentially impossible today.
Your mention of hand-drawn stuff reminded me of some images I got quite a while ago. I was running prompts describing fictional places, and for some reason this is the "style" and look the AI decided to give for various descriptions of Rivendell. While maybe not hand drawn, it reminds me of old book illustrations and that kind of thing.

_00031_.jpg _00025_.jpg
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,555
3,674
Your mention of hand-drawn stuff reminded me of some images I got quite a while ago. I was running prompts describing fictional places, and for some reason this is the "style" and look the AI decided to give for various descriptions of Rivendell. While maybe not hand drawn, it reminds me of old book illustrations and that kind of thing.

View attachment 3168298 View attachment 3168299
I love hand drawn stuff too.

These are from the Doomer Boomer checkpoint, which is hot AF:


And I think here is what the checkpoint was actually meant for:

 

Sepheyer

Well-Known Member
Dec 21, 2020
1,555
3,674
Before Stable Diffusion, I always wanted my 3D models to have a painted / artistic aesthetic, as if hand drawn.
I would have expected a solution to be available through custom shaders, but I was always disappointed with the results.

AI + rotoscoping feels like the more likely technology to get there. Imagine extremely detailed hand-drawn art, animated and rendered at the speed of 3D; if that can be achieved, it's almost the best of both worlds.

--

Before SD this was impossible.

Animation in old Disney movies always had extremely detailed backgrounds but simple, flat-shaded characters/animation, because those had to be drawn frame by frame. If a single frame takes an artist three weeks, you can't possibly achieve 24 frames per second, and the likelihood of consistency drops dramatically as well.

This would be something AI could do (hopefully) that is essentially impossible today.
I remember the negative shock I had when "they" continued the Tank Police anime in a 3D-ish format instead of the traditional anime look. I went "FML, my life is over" despite still being in preschool.

Indeed, hoping for a rebirth of that aesthetic, since the hand-drawn look is starting to look really easy to knock out:
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,555
3,674
- Can you do a romantic pose?
- Sure.
- Is that... Luke Skywalker on Tatooine?
- That's right!

a_03429_.png

ControlNet-ing scenes from movies is fun. Here's the very same scene, but with an elf:

a_03307_.png
 
Dec 12, 2021
92
43
Can someone explain prompts to me?

1702514617569.png

Why can't I get this to look how I want to?
I want there to be a hand grabbing her ass while she's looking at it embarrassed. I don't understand why she's half naked when I specified against it. Here are my prompts.

<lora:uraraka-10:1.15> <lora:grabbing_anothers_ass_v.2.3:1.15> grabbing another's ass, anime (style:1.2), (anime:1.3)

male hand grabbing woman's ass
the woman is wearing a (white_shirt:1.1), (pantyhose_over_panties:1.3), black_panties, dark blue skirt, brown pantythose, (fully_clothed:1.5)
the woman has brown eyes, brown hair, lips, medium breasts, fit ass, (clothes:1.2),
the woman looks surprised, embarrased, (looking_at_hand:1.2)
office background, desks, computers

Negatives: (badhandv4:1.2) (easynegative:1.2) , visible man, (nude:1.2), bad anatomy, naked_ass, naked, naked_butt

Can someone please explain this to me? I'm going insane lmao
 
Dec 12, 2021
92
43
Is it because the LoRAs are clashing with each other?
 

namhoang909

Newbie
Apr 22, 2017
87
48
This is my SD UI
1702532538319.png
This is the UI of a guy on YouTube; his video was recorded 7 months ago. Why don't I have those symbols under the Generate button? I really need that save-prompt button :unsure:
1702532594904.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,799
Is it because the LORA's are clashing with each other?
Stable Diffusion can only do what the models have been trained to do. For something this specific, you either need to train your own LoRA or find one someone else has trained. This is the point of sites like civitai or tensor.art. You can always experiment with ControlNet and see what you can get from it. Maybe you can feint or fake it this way. Perhaps an approximation will be close enough and you can do the rest with dialogue.
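As background on the prompt itself: in a1111, `(token:1.2)` multiplies the attention given to that token (1.0 is neutral), and `<lora:name:1.15>` loads a LoRA at that strength. Here is a toy sketch (my own regex for sanity-checking a prompt, not a1111's actual parser) that pulls out the weighted tokens so you can see at a glance what you're emphasizing; it skips plain `(token)` emphasis and LoRA tags:

```python
import re

def extract_weights(prompt):
    """List (token, weight) pairs from a1111-style '(token:1.2)' emphasis syntax."""
    return [(tok.strip(), float(w))
            for tok, w in re.findall(r"\(([^():]+):([\d.]+)\)", prompt)]

print(extract_weights("(white_shirt:1.1), black_panties, (fully_clothed:1.5)"))
# [('white_shirt', 1.1), ('fully_clothed', 1.5)]
```

In my experience, pushing a weight as high as 1.5 often distorts the image rather than enforcing the concept; roughly 1.1-1.3 tends to be a safer range.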
 

me3

Member
Dec 31, 2016
316
708
Using a similar IPAdapter setup to the one I posted here, together with the split-head image used to create this clip, can create some interesting effects.
Depending on which "layer" you put the image in, and on the weights etc., you can create "twins", where one reflects more of the inner split image and the other takes the outer skin. Or you can create more of a "two-face", where its face is a half of each.

Some poor examples from me just testing; I've noticed I have a horrible lack of background image options, so I probably need to spend some time creating images more tailored to each "layer".

_ComfyUI_temp_okygp_00001_.jpg _ComfyUI_temp_okygp_00022_.jpg

More images in the thumbnails, so the post isn't a scrolling nightmare.
_ComfyUI_temp_okygp_00001_.jpg _ComfyUI_temp_okygp_00002_.jpg _ComfyUI_temp_okygp_00022_.jpg _ComfyUI_temp_okygp_00035_.jpg _ComfyUI_temp_okygp_00041_.jpg
 

DreamingAway

Member
Aug 24, 2022
246
629
Use this button here to save prompts/styles

View attachment 3172218

It's way easier to just save your images with the metadata included, then use them as prompt lookups. There are extensions that let you build entire prompt libraries with image previews.

Generated images make for a much better prompt library than that drop-down, IMO.

--

If you wanna quickly swap between saving metadata and purging it, you can add the "Save text information about generation parameters as chunks to png files" setting to your main page and click it on and off between generations to quickly toggle metadata.

(Its element name is "enable_pnginfo".)

--

In case it's not obvious: you can copy any image into the PNG Info tab and then hit "Send to txt2img" to quickly load the identical prompt/settings from an image.
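To illustrate what enable_pnginfo actually writes: a1111 stores the generation parameters in a PNG tEXt chunk under the keyword "parameters", which is what the PNG Info tab reads back. A stdlib-only sketch of reading those chunks yourself (assumes uncompressed tEXt; compressed zTXt/iTXt chunks are skipped, and the filename in any usage is a placeholder):

```python
import struct

def png_text_chunks(path):
    """Return a dict of keyword -> text from a PNG's tEXt chunks."""
    out = {}
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I4s", head)
            data = f.read(length)
            f.read(4)  # skip CRC
            if ctype == b"tEXt":
                key, _, text = data.partition(b"\x00")
                out[key.decode("latin-1")] = text.decode("latin-1")
            if ctype == b"IEND":
                break
    return out
```

Usage would be something like `png_text_chunks("some_render.png").get("parameters")` to recover the prompt and settings from a saved image.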
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,555
3,674
The Rodent bro dropped a video showcasing a denoiser node for ComfyUI.

I tried it; the thing is rather inconsistent. But when it works, it fucking rocks:
workflow (3).png
 

me3

Member
Dec 31, 2016
316
708
The Rodent bro dropped a video showcasing a denoiser node for ComfyUI.

I tried it; the thing is rather inconsistent. But when it works, it fucking rocks:
View attachment 3184444
I tried "unsampling" to try to get consistency when making animations (I think I briefly mentioned planning to try it in an earlier post), but as you said, it can be a bit hit and miss, especially if you're doing hundreds of images and can't really tweak things for each one. It's still worth looking into; it definitely has its uses.
There's a controlnet-lllite model (Kohya's "controlnet" version) that might be worth looking at if you're using images as "input" too; it's a bit misleadingly called "blur". I've seen some very good results from it, even with multiple passes on extremely "blurred" images.

Has anyone tried HyperTile?
So far, in my limited testing, it seems to fare rather badly at what you'd normally think of as tiling. With a 512x512 tile it used more VRAM per tile than it took to make the whole 1024x1024 image...