[Stable Diffusion] Prompt Sharing and Learning Thread

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
Your mentioning hand-drawn stuff reminded me of some images I got quite a while ago. I was running prompts describing fictional places, and for some reason this is the style and look the AI decided on for various descriptions of Rivendell. While maybe not hand drawn, it reminds me of old book illustrations and that kind of thing.

I love hand drawn stuff too.

These are Doomer Boomer checkpoint which is hot AF:


And I think here is what the checkpoint was actually meant for:

 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
Before Stable Diffusion, I always wanted my 3D models to have a painted/artistic aesthetic to them, as if hand drawn.
I would have expected a solution to this to be available through custom shaders, but I was always disappointed with the results.

AI + rotoscoping feels like the more likely technology to get there. Imagine extremely detailed hand-drawn art, animated and rendered at the speed of 3D - if that can be achieved, it's almost the best of both worlds.

--

Before SD this was impossible.

Animation in old Disney movies always had extremely detailed backgrounds and then simple, flat-shaded characters and animation, because they had to draw them frame by frame. If a single frame takes an artist three weeks, you can't possibly achieve 24 frames per second, and the likelihood of consistency drops dramatically as well.

This would be something AI could do (hopefully) that is essentially impossible today.
I remember the negative shock I had when "they" continued the Tank Police anime in a 3D-ish format instead of the traditional anime look. I went "FML, my life is over" despite still being in pre-school.

Indeed, hoping for a rebirth of those aesthetics, since the hand-drawn look is starting to look really easy to knock out:
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
- Can you do a romantic pose?
- Sure.
- Is that... Luke Skywalker on Tatooine?
- That's right!


Control-net'ing scenes from movies is fun. Here the very same scene but with an elf:

 
Dec 12, 2021
108
58
Can someone explain prompts to me?


Why can't I get this to look how I want to?
I want there to be a hand grabbing her ass while she looks at it, embarrassed. I don't understand why she's half naked when I specified against it. Here are my prompts.

<lora:uraraka-10:1.15> <lora:grabbing_anothers_ass_v.2.3:1.15> grabbing another's ass, anime (style:1.2), (anime:1.3)

male hand grabbing woman's ass
the woman is wearing a (white_shirt:1.1), (pantyhose_over_panties:1.3), black_panties, dark blue skirt, brown pantyhose, (fully_clothed:1.5)
the woman has brown eyes, brown hair, lips, medium breasts, fit ass, (clothes:1.2),
the woman looks surprised, embarrassed, (looking_at_hand:1.2)
office background, desks, computers

Negatives: (badhandv4:1.2) (easynegative:1.2) , visible man, (nude:1.2), bad anatomy, naked_ass, naked, naked_butt

Can someone please explain this to me? I'm going insane lmao
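For what it's worth, the `(token:number)` pieces in a prompt like that are attention weights, not natural-language instructions - A1111-style UIs multiply the token's emphasis by the number. Here is a minimal sketch of that parsing (a deliberate simplification, not the actual webui parser):

```python
import re

# Simplified sketch (an assumption, not the real A1111 parser) of how
# "(token:1.2)" emphasis syntax is commonly interpreted: each weighted
# group becomes (text, weight); plain text defaults to weight 1.0.
WEIGHT_RE = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt: str):
    """Return a list of (token, weight) pairs from a prompt string."""
    out = []
    pos = 0
    for m in WEIGHT_RE.finditer(prompt):
        plain = prompt[pos:m.start()].strip(" ,")
        if plain:
            out.append((plain, 1.0))
        out.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        out.append((tail, 1.0))
    return out

print(parse_weights("(white_shirt:1.1), black_panties, (fully_clothed:1.5)"))
# [('white_shirt', 1.1), ('black_panties', 1.0), ('fully_clothed', 1.5)]
```

One plausible (hedged) reading of the half-naked results: even a strong positive like (fully_clothed:1.5) can lose to a LoRA that was trained mostly on nude images, since the LoRA shifts the whole model rather than one token.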
 
Dec 12, 2021
108
58
Is it because the LoRAs are clashing with each other?
 

namhoang909

Newbie
Apr 22, 2017
87
48
This is my SD UI.
This is the UI of a guy on YouTube; his video was recorded 7 months ago. Why don't I have those symbols under the Generate button? I really need that save prompt button :unsure:
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Is it because the LoRAs are clashing with each other?
Stable Diffusion can only do what the models have been trained to do. For something this specific, you either need to train your own LoRA or find one that someone else has trained. That is the point of sites like civitai or tensor.art. You can always experiment with ControlNet and see what you can get from it. Maybe you can feint or fake it that way. Perhaps an approximation will be close enough and you can do the rest with dialogue.
 

me3

Member
Dec 31, 2016
316
708
Using an IPAdapter setup similar to the one I posted here, together with the split-head image used to create this clip, can create some interesting effects.
Depending on which "layer" you put the image on, and with different weights etc., you can create "twins" where one reflects more of the inner split image and the other takes the outer skin. Or you can create more of a "two-face" where the face is half of each.

Some poor examples from me just testing; I've noticed I have a horrible lack of background image options, so I probably need to spend some time creating images more tailored for each "layer".


More images in the thumbnails to not make the post a scrolling nightmare.
 

DreamingAway

Member
Aug 24, 2022
248
640
Use this button here to save prompts/styles


It's way easier to just save your images with the metadata included, then use them as prompt lookups. There are extensions that let you build entire prompt libraries with image previews.

Generated images make for a much better prompt library than that drop-down, IMO.

--

If you want to quickly swap between saving metadata and purging it, you can add "Save text information about generation parameters as chunks to png files" to your main page and click it off and on between generations to quickly toggle metadata.

(Its element name is "enable_pnginfo".)

--

In case it's not obvious: you can copy any image into the PNG Info tab and then hit "Send to txt2img" to quickly load the identical prompt/settings off an image.
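That "parameters" metadata is just an ordinary PNG tEXt chunk, which is why any tool can read it back. A stdlib-only sketch (the "parameters" keyword is an assumption based on the webui's convention) of reading those chunks out of a PNG:

```python
import struct
import zlib

# A1111's enable_pnginfo stores the prompt/settings as a PNG tEXt chunk,
# conventionally keyed "parameters".  This reads every tEXt chunk back
# out of a PNG byte string, without any image library.

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_chunk(ctype: bytes, body: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, body, CRC over type+body."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

def read_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert data.startswith(PNG_SIG), "not a PNG"
    out, pos = {}, len(PNG_SIG)
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, text = data[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + body + 4 CRC
    return out

# Build a tiny stand-in PNG (header only, no image data) carrying a prompt.
demo = (PNG_SIG
        + png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
        + png_chunk(b"tEXt", b"parameters\x00masterpiece, 1girl, office")
        + png_chunk(b"IEND", b""))
print(read_text_chunks(demo))  # {'parameters': 'masterpiece, 1girl, office'}
```

Pointing `read_text_chunks(open("a_03429_.png", "rb").read())` at a real webui output should surface the same text the PNG Info tab shows, assuming metadata saving was left on.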
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
The Rodent bro dropped a video showcasing a denoiser node for ComfyUI:

I tried it - the thing is rather inconsistent, but when it works it fucking rocks:
 

me3

Member
Dec 31, 2016
316
708
I tried "unsampling" to try to get consistency when making animations - I think I briefly mentioned planning to try it in an earlier post - but as you said, it can be a bit hit-and-miss, especially if you're doing hundreds of images and can't really tweak things for each one. It's worth looking into for people though; it definitely has its uses.
There's a controlnet-lllite model (Kohya's "controlnet" version) that might be worth looking at if you're using images as "input" too; it's a bit misleadingly called "blur". I've seen some very good results with it, even with multiple passes on extremely "blurred" images.

Has anyone tried HyperTile?
So far in my limited testing it seems to fare rather badly at what you'd normally think of as tiling. With a 512x512 tile it used more VRAM per tile than it took to make the 1024x1024 image...
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
Can you post the HyperTile workflow? There are a few nodes that show up as HyperTile for me, so yeah.
 

me3

Member
Dec 31, 2016
316
708
Can you post the HyperTile workflow? There are a few nodes that show up as HyperTile for me, so yeah.
It's not much of a workflow thing really; it's just a simple node you put "between" model connections, like with LoRAs. I think it's a base node.

It should show up if you just start typing the name into the node search.
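That description - a model-patch node dropped between the checkpoint loader and the sampler, the same way a LoRA loader sits - can be sketched as a ComfyUI API-format graph. The HyperTile class and field names below are recalled from the stock node and should be treated as assumptions, not checked against any particular ComfyUI version:

```python
# Sketch of a ComfyUI API-format graph with HyperTile patched between the
# checkpoint loader and the KSampler.  Node/parameter names are assumptions
# from memory of the built-in node, not a verified workflow.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},
    "2": {"class_type": "HyperTile",          # the "simple node" in question
          "inputs": {"model": ["1", 0],       # MODEL output of the loader
                     "tile_size": 256, "swap_size": 2,
                     "max_depth": 0, "scale_depth": False}},
    "3": {"class_type": "KSampler",           # sampler sees the patched model
          "inputs": {"model": ["2", 0]}},     # (other KSampler inputs omitted)
}
print(graph["3"]["inputs"]["model"])  # ['2', 0] -> patched model feeds the sampler
```

The point is only the wiring: the sampler's `model` input comes from the HyperTile node's output rather than straight from the loader, exactly like chaining a LoraLoader.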
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,766
q_q

IDK.

If you come across a workflow, please ping me. I really have no idea how the devs mean for the HyperTile node to be used. It's like giving me a carrot for the snowman but forgetting to tell me it has to be the nose - left to my own devices I'd stick the carrot elsewhere.

Here is what I get from merely plugging it in - there is a defect where black stripes appear on the face. So, yeah, I wish there were a manual for how and for what this is intended to be used.
 

felldude

Active Member
Aug 26, 2017
572
1,693
So I tested the Adam 32-bit training - I have those CUDA DLLs installed, so I may as well try them out.
(Using libbitsandbytes_cuda118.dll)
Cosine schedule with the exact same learning rate as the SD 1.5 model, 4800 steps over 4 epochs.

Which do you think is BF16 and which is FP32? (Same seed and LoRA value.)

Learning Concept is Tanlines
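The two knobs being compared above can be sketched numerically. The schedule below is the standard cosine decay over the 4800 steps mentioned in the post (the base LR value is a placeholder), and the BF16 conversion is shown as plain bit truncation for illustration - real converters round to nearest:

```python
import math
import struct

# Standard cosine learning-rate decay over the post's 4800 total steps.
# base_lr is a placeholder, not the actual rate used in the training run.
def cosine_lr(step: int, total_steps: int = 4800, base_lr: float = 1e-4) -> float:
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

# BF16 keeps FP32's 8-bit exponent but only 7 mantissa bits: dropping the
# low 16 bits of the FP32 pattern shows the precision gap (truncation only;
# real conversions round to nearest).
def to_bf16(x: float) -> float:
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(cosine_lr(0))      # full base LR at the start
print(cosine_lr(4800))   # 0.0 at the end of the schedule
print(to_bf16(3.14159))  # 3.140625 -- the nearest-below BF16 value
```

The gap between `3.14159` and `3.140625` is the kind of per-weight precision loss a BF16 run accumulates versus FP32, which is why the two sample sets can look nearly, but not exactly, identical on the same seed.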


