[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
My apologies if this has been covered already (I did a search but could not find it). I started messing around with Latent Couple.
So first, install the two extensions as per the below (make sure to select the one shown; there's another one, but it does not have the ability to add a sketch).
View attachment 2777548

Once done, you may need to restart the GUI.
In my case, I just used Paint and created a frame like this View attachment 2777553 making sure that each 'square' (you can use whatever shapes you want) is coloured in a different colour (important for Latent Couple to identify the different sections later).
Once that's done, in txt2img, enable both Composable LoRA and Latent Couple and upload your sketch like so: View attachment 2777552

Once done, click on 'I've finished my sketch' and you will see an area for the general prompt and for each sub-prompt relating to each coloured section you defined in your sketch. Fill them all in with the required info as per the below: View attachment 2777551

Once done, click on 'Prompt Info Update'.
This will populate your positive prompt and you're ready to go.
View attachment 2777572
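If you'd rather not fire up Paint every time, here's a minimal sketch (assuming Pillow is installed) that generates the same kind of coloured region mask; the colours and layout are arbitrary placeholders, they just need to be distinct flat colours like in the example above.

```python
# Minimal sketch: generate a region mask like the Paint example above -
# three distinct, flat colours so Latent Couple can tell the sections apart.
from PIL import Image, ImageDraw

W, H = 768, 512
img = Image.new("RGB", (W, H), "white")
draw = ImageDraw.Draw(img)

# Left / middle / right vertical bands, each a different solid colour
draw.rectangle([0, 0, W // 3, H], fill=(255, 0, 0))           # sub-prompt 1
draw.rectangle([W // 3, 0, 2 * W // 3, H], fill=(0, 255, 0))  # sub-prompt 2
draw.rectangle([2 * W // 3, 0, W, H], fill=(0, 0, 255))       # sub-prompt 3

img.save("latent_couple_regions.png")  # upload this instead of a Paint sketch
```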
I generated the below using hires fix (the PNG contains the generation info as usual). It took a while (less than 30 seconds without hires, close to 40 minutes with it), but that may be down to my prompts and selections (I'll test more to check that).

On the below (hires) I may have wanted to add some negative prompts (forgot, as I was excited to try it out...).
View attachment 2777573

Below without hires:
View attachment 2777574 View attachment 2777575

Sorry for the long post but wanted to share ;)
I found a very interesting video about this. It's not so much a tutorial, imo, but more a proof of concept. The guy is mumbling a bit too much and it's almost impossible to see the settings on his screen. It's very interesting nonetheless.
 

Synalon

Member
Jan 31, 2022
225
663
Fantasy Painting 1.jpg
I also managed to get a widescreen image following Daggoths' instructions. The waterfall on the left didn't come out as well as I wanted, but it's a start. I'm trying to get a mix between a painting and a photo; I kind of like how this looks.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
View attachment 2778864
I also managed to get a widescreen image following Daggoths' instructions. The waterfall on the left didn't come out as well as I wanted, but it's a start. I'm trying to get a mix between a painting and a photo; I kind of like how this looks.
Try switching to "My prompt is more important" and see which of the 3 control modes is best. Play with "Denoising strength": start with 1.0 and lower it in 0.05 increments, but don't go lower than 0.8. Also play with "Control Weight" and "Ending Control Step".
Don't forget to do a normal img2img generation to smooth out the "seams". Even at this stage it's a good idea to play around with denoising strength; 0.4-0.6 is recommended, but don't let that stop you from trying values outside that range.
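If you want to run that denoising sweep without clicking through the UI each time, here is a hedged sketch using the AUTOMATIC1111 web UI API (assumes the UI was launched with --api on the default 127.0.0.1:7860; the prompt, file names and seed are placeholders, not taken from the post above).

```python
# Hedged sketch: sweep denoising strength over an img2img pass via the A1111 API.
import base64, requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"
init = base64.b64encode(open("widescreen_base.png", "rb").read()).decode()

for strength in [1.0, 0.95, 0.9, 0.85, 0.8]:
    payload = {
        "init_images": [init],
        "prompt": "fantasy landscape, waterfall, painting mixed with photo",  # placeholder
        "denoising_strength": strength,
        "seed": 12345,   # fixed seed so only the strength changes between runs
        "steps": 30,
    }
    r = requests.post(URL, json=payload).json()
    with open(f"denoise_{strength:.2f}.png", "wb") as f:
        f.write(base64.b64decode(r["images"][0]))
```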
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Sebastian Kamph tutorial on the new ControlNet 1.1:
(not gun related but it's a start).
I'm thinking that if you can pose the subject and add a gun in the prompt, hopefully SD can connect them together in the image.
There are also things like regional prompting and Latent Couple; maybe this is needed to give SD some help. These are extensions that tell SD where in the image each part of the prompt applies, so potentially you could place the gun in the hand of the subject this way.
On Civitai there are ready-made poses for the OpenPose editor. This is also something to look into; I would search for action poses.
There are also depth maps, I believe it's called, that can be used. It's like those colouring books that have the outlines in black and you then fill in the colour. It means that you can make a "sketch" that you will then use with ControlNet. Sebastian Kamph has used this technique, I believe, in his tutorials about ControlNet. Aitrepreneur has similar videos.

Example:
1689642750935.png 1689642865915.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I followed the instructions from this video about ControlNet 1.1 by Sebastian Kamph.

I created a quick pose with the OpenPose editor and used it with ControlNet OpenPose.
This is only an example or proof of concept:
00080-394538661.png
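For anyone who wants to script the same proof of concept, a hedged sketch over the web UI API is below. It assumes the sd-webui-controlnet extension is installed and the UI runs with --api; the exact argument names have changed between extension versions, so treat the keys as an approximation rather than the definitive API, and the prompt and model name are placeholders.

```python
# Hedged sketch: pose image + ControlNet OpenPose through the A1111 API.
import base64, requests

pose = base64.b64encode(open("quick_pose.png", "rb").read()).decode()

payload = {
    "prompt": "woman aiming a handgun, action pose, photo",  # placeholder prompt
    "steps": 25,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": pose,
                "module": "none",                       # the pose image is already a skeleton
                "model": "control_v11p_sd15_openpose",  # must match your installed model name
                "weight": 1.0,
                "guidance_end": 0.8,                    # roughly "Ending Control Step"
            }]
        }
    },
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload).json()
open("pose_test.png", "wb").write(base64.b64decode(r["images"][0]))
```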
 

me3

Member
Dec 31, 2016
316
708
Just something I remembered that I thought might be worth pointing out, since it might not be logical/obvious.
It seems that when you use the word "realistic" or variations of it, the AI doesn't interpret it the same way we might.
In most cases it's applied in the sense that you'd have a "fake" image/scene (basically CGI, drawn, painted etc.) that's meant to have a sense of a "real" feel to it. However, it's NOT the detail/quality "likeness" of what a photograph would be.
So if you're trying to make something that's meant to be a photo, or things of that nature with colour depth, quality, detail etc., you should probably NOT include "realistic" in your positive prompt; it might be better off in the negative.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Just something I remembered that I thought might be worth pointing out, since it might not be logical/obvious.
It seems that when you use the word "realistic" or variations of it, the AI doesn't interpret it the same way we might.
In most cases it's applied in the sense that you'd have a "fake" image/scene (basically CGI, drawn, painted etc.) that's meant to have a sense of a "real" feel to it. However, it's NOT the detail/quality "likeness" of what a photograph would be.
So if you're trying to make something that's meant to be a photo, or things of that nature with colour depth, quality, detail etc., you should probably NOT include "realistic" in your positive prompt; it might be better off in the negative.
Yes, you're spot on. I have also noticed this. In order to get something lifelike or based in reality, use phrases like "photography" and specify what type it is, for example glamour photo, artistic photography, professional photo etc., plus time of day and light conditions.
Also use camera specs and descriptive terms used in photography, terms describing composition etc. If you use descriptive terms from rendering or video game engines, you will get visuals leaning more towards 3D, CGI or renders. The same goes for animation and cartoons etc. If that is what you want, use the appropriate terms.
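To make that concrete, here's a made-up illustration (the tags are placeholders, not taken from anyone's prompt in this thread): something like "professional photography, glamour photo of a woman at golden hour, 85mm lens, shallow depth of field, film grain" leans the output towards a photo, while "3d render, unreal engine, octane render, ray tracing, ultra realistic CGI character" leans it towards a render, even if both describe the same subject.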
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
Yes, you're spot on. I have also noticed this. In order to get something lifelike or based in reality, use phrases like "photography" and specify what type it is, for example glamour photo, artistic photography, professional photo etc., plus time of day and light conditions.
Also use camera specs and descriptive terms used in photography, terms describing composition etc. If you use descriptive terms from rendering or video game engines, you will get visuals leaning more towards 3D, CGI or renders. The same goes for animation and cartoons etc. If that is what you want, use the appropriate terms.
Let's do an empirical test. The left one has "ultrarealistic", a bunch of camera terms and DoF in it; the right one doesn't. Same seed, prompts otherwise identical. I'd say they look like they are from the same batch - it is entirely conceivable. I just haven't run the right image's prompt a statistically significant number of times, but I know very well that it does fall within what I would get with the left prompt. So, that's a field test.

a_12867_.png a_13138_.png

You don't have permission to view the spoiler content. Log in or register now.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Let's do an empirical test. The left one has "ultrarealistic", a bunch of camera terms and DoF in it; the right one doesn't. Same seed, prompts otherwise identical. I'd say they look like they are from the same batch - it is entirely conceivable. I just haven't run the right image's prompt a statistically significant number of times, but I know very well that it does fall within what I would get with the left prompt. So, that's a field test.

View attachment 2782721 View attachment 2782751

You don't have permission to view the spoiler content. Log in or register now.
I agree with you. How the model has been trained dictates the rest. The prompt is still the most powerful tool we have, but it can't do what the model has not been trained to do. Mine and me3's speculations and generalizations are still valid, but it depends on the model being used, how it has been trained and thus how it responds to the prompt. Using terms like "ultra realistic" might not give you photo quality; depending on the model it might mean a realistic render instead.
Conclusion: no. 1 the checkpoint model, no. 2 the prompt in relation to the model, no. 3 extensions in relation to the model and prompt. Don't throw everything including the kitchen sink at the prompt; it might be that you need to switch models first. Once you have the appropriate model, see how it responds to simple descriptive phrases and proceed accordingly. If the appropriate prompt with the appropriate model doesn't finish the job, add extensions and LoRAs or TIs. This AI text-to-image thing is still an experimental working prototype and will probably remain experimental for a long time. Since there are so many different people and teams working on it and its tools and extensions, it's a miracle that it works as well as it does. There will be bugs and "teething issues" because of it. We just have to hold on to the railing and try to roll with it.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
I agree with you. How the model has been trained dictates the rest. The prompt is still the most powerful tool we have, but it can't do what the model has not been trained to do. Mine and me3's speculations and generalizations are still valid, but it depends on the model being used, how it has been trained and thus how it responds to the prompt. Using terms like "ultra realistic" might not give you photo quality; depending on the model it might mean a realistic render instead.
Conclusion: no. 1 the checkpoint model, no. 2 the prompt in relation to the model, no. 3 extensions in relation to the model and prompt. Don't throw everything including the kitchen sink at the prompt; it might be that you need to switch models first. Once you have the appropriate model, see how it responds to simple descriptive phrases and proceed accordingly. If the appropriate prompt with the appropriate model doesn't finish the job, add extensions and LoRAs or TIs. This AI text-to-image thing is still an experimental working prototype and will probably remain experimental for a long time. Since there are so many different people and teams working on it and its tools and extensions, it's a miracle that it works as well as it does. There will be bugs and "teething issues" because of it. We just have to hold on to the railing and try to roll with it.
Having said all that about how this doesn't matter and that doesn't matter, I keep including "ultrarealistic" and "200mm lens" in my prompts as a lucky charm :)

Probably, even when not understood by SD, this has the effect of a unique look tied to a particular string, a unique ID that, say, only your images will have. Kinda the same as a signature ID of "MrFoxAwesomeAIArtiste" would give you. Anyway, back to the 200mm lens:

a_13179_.png
You don't have permission to view the spoiler content. Log in or register now.
 

me3

Member
Dec 31, 2016
316
708
Models are meant to understand lens "stuff", as that is the kind of thing images often come with in the data itself, so it should be an easy thing to extract and include in training. Whether it understands it correctly is a totally different question, though.

Has anyone found or figured out any way to create a known "shape/construct", but have it be made up of a different "material" than it normally would be? E.g. like a human, but made of an element, like the Human Torch, Iceman or Emma Frost's diamond form.
Or even more fantasy-based things like elemental Atronachs or animals.
 

me3

Member
Dec 31, 2016
316
708
I tried to include a better version of the grid, but after cutting it into sections and compressing it down to a low-quality JPG I still couldn't upload it, so I figured the "low quality generated" grid couldn't be all that much worse.

Based upon this "test", I'd say the "word" seems to have much the same effect as a lot of other random words have in many cases: it changes the generation simply because there's another word in there.
The second thing I noticed is that either my SD 1.5 is really screwed up or it's struggling badly...
(If anyone wants any of the specific images uploaded, let me know, but with the prompt it should be an easy thing to generate.)
xyz_grid-0003-2856367958.jpg

You don't have permission to view the spoiler content. Log in or register now.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Models are meant to understand lens "stuff", as that is the kind of thing images often come with in the data itself, so it should be an easy thing to extract and include in training. Whether it understands it correctly is a totally different question, though.

Has anyone found or figured out any way to create a known "shape/construct", but have it be made up of a different "material" than it normally would be? E.g. like a human, but made of an element, like the Human Torch, Iceman or Emma Frost's diamond form.
Or even more fantasy-based things like elemental Atronachs or animals.
There are both checkpoints and LoRAs, TIs etc. that do this: checkpoints for furries, statues of various materials, fantasy stuff for fire and ice etc. AI doesn't "understand" anything. There is no intelligence here; the term "AI" is overused. This is more akin to machine learning and algorithms. The training consists of images with a prompt or description for each image, and the engine "learns" to associate the images with their corresponding prompt or description. If a keyword or phrase is used in every prompt because there is a consistent theme, this word or phrase becomes a trigger. For example, my LoRA has the unintended trigger word "headband". The reason is that in the Daz3D renders I used for the training, the subject has a headband in almost all images. All prompts also have the subject's name "kendra", so that is also a "trigger word" of course. I think that most fantasy-based checkpoint models can do what you are talking about to an extent. Whether the result is good or not is another question.
If you are after something specific and a particular look, I think you would need to do some training of your own. Should that be the case, we will help any way we can. I would start by watching the basic videos by Sebastian Kamph and Aitrepreneur, and then go to the LoRA training rentry I have linked to many times and read up, even if you are going to create a TI, because it has so much good info on the prep work, not only the training.
 

devilkkw

Member
Mar 17, 2021
323
1,093
Models are meant to understand lens "stuff", as that is the kind of thing images often come with in the data itself, so it should be an easy thing to extract and include in training. Whether it understands it correctly is a totally different question, though.

Has anyone found or figured out any way to create a known "shape/construct", but have it be made up of a different "material" than it normally would be? E.g. like a human, but made of an element, like the Human Torch, Iceman or Emma Frost's diamond form.
Or even more fantasy-based things like elemental Atronachs or animals.
Did you mean something like this?
You don't have permission to view the spoiler content. Log in or register now.

If yes, this is the prompt I used:
You don't have permission to view the spoiler content. Log in or register now.

No negative.

#mat# = the material you want
(check the PNG info in the image)
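A minimal sketch of that #mat# placeholder idea in Python (the template below is just an illustration, not the actual spoilered prompt):

```python
# Substitute each material into the same prompt template so only that one
# token changes between generations.
TEMPLATE = "a woman made entirely of #mat#, full body, fantasy concept art"  # placeholder
MATERIALS = ["fire", "ice", "diamond", "smoke", "molten lava", "wood"]

prompts = [TEMPLATE.replace("#mat#", m) for m in MATERIALS]
for p in prompts:
    print(p)  # paste into the UI, or feed to the txt2img API in a loop
```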

Why no negative?
Sometimes the negative wastes some of the concept, especially on high-fantasy concept images.
A good way (it's also how I test checkpoints) is to start with a simple prompt and check how good the checkpoint you are using is. Just change the CFG and see how the image changes. Find a good CFG that gives you a good result without wasting your prompt, then start building on it and see which terms push your prompt out of concept.
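For reference, a hedged sketch of that CFG test scripted against the AUTOMATIC1111 API (assumes --api on 127.0.0.1:7860; prompt and seed are placeholders). The built-in X/Y/Z plot script does the same thing from the UI.

```python
# Hedged sketch: fixed seed + simple prompt, only the CFG scale changes.
import base64, requests

for cfg in [4.5, 6, 8, 11, 15, 20]:
    payload = {
        "prompt": "portrait photo of a woman",  # kept simple on purpose
        "seed": 1234,                           # fixed seed isolates the CFG effect
        "cfg_scale": cfg,
        "steps": 25,
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload).json()
    open(f"cfg_{cfg}.png", "wb").write(base64.b64decode(r["images"][0]))
```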

For example, I read a post about "realism": if you use terms like "photorealistic" or "ultra realistic" you can't reach "realism", because those terms are associated with a rendering engine. Changing them to "photo" or "photography" gives a better result for "realism".
This is what I've understood from many tries; correct me if I'm wrong. Also, I'm speaking about using only the prompt, without any negative, TI or LoRA.
In the same way, the negative prompt acts on the image and some terms can push your concept out.
So I think a better way is to start with a really simple prompt and add terms step by step.

It also helps you understand whether the checkpoint you are using is good or not for what you want, with a simple test like this.
I know there are too many checkpoints (I'm speaking about Civitai), and downloading and trying them all is a lot of work.
I have a simple way to decide which model to try: check the sample images and the CFG.
What do I mean? Simply: if the sample images have no generation data, I don't download the model.
And if I see good sample images but the CFG is low, I don't try the model. This is because most models that give good samples at CFG 4.5-6 give bad results at CFG 11-20 (chromatic aberration in most cases).
This is my method, based on my experience; maybe I'm wrong about that.
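A small sketch of that check in Python (assuming Pillow): AUTOMATIC1111 writes the generation settings into a "parameters" text chunk of the PNG, so you can read a downloaded sample before deciding whether the model is worth trying. The file name is a placeholder.

```python
# Read the A1111 generation data (prompt, CFG, steps, sampler, seed) from a sample image.
from PIL import Image

img = Image.open("sample_from_civitai.png")   # placeholder file name
params = img.info.get("parameters", "")       # empty if the data was stripped
print(params if params else "No generation data - skip this model.")
```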
 

me3

Member
Dec 31, 2016
316
708
In theory yes, at least in how we interpret that prompt. Unfortunately the AI doesn't really agree.
It's very much hit and miss: only a few "materials" work, and even then there are quite a few misses; the rest it mainly adds to the background/scenery or uses in some kind of "covering/clothing" way.

Since I'd already tested this but was lacking a large "grid" for reference etc., I thought I'd make one.
So I picked some "materials" and about 15 models, and added age to the prompt to avoid the pitfall of not including certain negatives.
Unfortunately I won't post any of them, and it's not just because the 300+ MB grid image is too large: despite the age in the prompt, almost all models generated very young faces, with the exception of wood/branches, where they got much older (probably because of the wrinkly look). So that was 5 hours and >1000 images to instantly delete...
Running it again now with an "older" prompt and negatives, hoping something can be shared. I've already had to restart multiple times, as some seeds seem to do weird shit.
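For anyone who wants to script a materials-times-models grid like that instead of using the X/Y/Z plot script, here is a hedged sketch via the AUTOMATIC1111 API (assumes --api; the checkpoint titles must match what /sdapi/v1/sd-models reports, and the model names, prompt and seed are placeholders).

```python
# Hedged sketch: loop checkpoints and materials, fixed seed, one image per cell.
import base64, requests

MODELS = ["modelA.safetensors", "modelB.safetensors"]  # placeholder checkpoint titles
MATERIALS = ["fire", "ice", "diamond", "wood"]

for model in MODELS:
    for mat in MATERIALS:
        payload = {
            "prompt": f"older woman made of {mat}, full body",  # note the age term
            "seed": 1234,
            "steps": 25,
            "override_settings": {"sd_model_checkpoint": model},
        }
        r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload).json()
        name = f"{model.split('.')[0]}_{mat}.png"
        open(name, "wb").write(base64.b64decode(r["images"][0]))
```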
 

me3

Member
Dec 31, 2016
316
708
Stealing ppl's sht:

View attachment 2785561

Credit:

Oh, so there is this model I never heard of: . Interesting.
The model looks interesting, and Juggernaut sounds worth trying too if it really is a base model; there are more than enough merges ripping off others, so it can be nice to have something that might behave differently.