[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
Grid done... finally. Can't really find anywhere to host the 350 MB full version, so I've had to split the jpg one.
I think it should be possible to tell what clearly hasn't worked and what has at least partially done so. There does seem to be a pattern in which "material" has better success than others, and which models clearly don't.

The radiant vibes one has me rather concerned. I guess it illustrates what I mentioned in the previous post. All the other models seem pretty consistent in following the age in the prompt, so wtf happened there I have no idea. I guess the only upside is that it's faces and fully clothed; still, I don't like that it happened.

I've included one image "in full" for the prompt and because "it didn't turn out all that bad". Let me know if there are any others I should upload.

split (1).jpg
split (2).jpg
split (3).jpg


00276-99743560.png
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
Model looks interesting, and Juggernaut sounds worth trying too if it really is a base model; there are more than enough merges ripping off others, so it can be nice to have something that might behave differently.
Indeed. What caught my attention and made me look at the Juggernaut is the girl's angle in the image above. Most of the models I love do struggle with that very angle. So, even if the model is so-so in every other aspect, at least it works as a dedicated tool for the rear shots.

Also, I went through Elegance -> Deliberate -> Clarity -> Zovya's Photoreal because each had an incremental improvement in some aspects that resonated with me. Say I like chonky milfs, so I saw incremental improvements there. But your mileage might vary if you are into something else or if your ideas of CMs are different.

I will be trying out Juggernaut for the next few weeks, fingers crossed it comes through all right. Here I'm testing Juggernaut with the OpenPose ControlNet; the results are pretty pleasing, except the woman ain't blonde, her hair ain't short, the pants are not jean shorts, and the top doesn't have underboob.

a_13356_.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Grid done... finally. Can't really find anywhere to host the 350 MB full version, so I've had to split the jpg one. …
Yeah, what's up with that vibrant checkpoint?.. Creepy to think about why it might give that result. You mentioned seeds; I have been curious about whether there is any rhyme or reason to which seeds we use. Whether, let's say, a higher number has any relevance to the outcome, or whether it's all just random. I have read people say this or that about it: lower is better for cartoon or anime, higher is better for photorealism, etc. I have no idea if it's just people imagining things or if there is something to it.
Awesome work on this huge test.:)(y)
 

me3

Member
Dec 31, 2016
316
708
Indeed. What caught my attention and made me look at the Juggernaut is the girl's angle in the image above. …
Is this the sort of thing you're after? Tried to keep the grid at a fairly decent size for viewing pleasure...
xyz_grid.jpg

Images for prompt and ...

Title: Stalking successful
01049-3309730544.png

Title: Stalking failed, abort abort abort.....RUN!
00997-3345252563.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I have been getting a private 1-on-1 lesson with instructor Kendra..
00023-2715484978.png

Finally I got into ControlNet and openpose after procrastinating for a long time. I just thought it looked busy or a bit involved, so I was sticking to what I knew and focused on other aspects of SD. In the pursuit of generating widescreen images, I learned that ControlNet and its complementary extensions were probably the answer.

I first learned the method of "outpainting": first generate a normal upright portrait-ratio image, then, with SD upscale and the "resize and fill" option selected, "outpaint" the rest with ControlNet inpaint. This did the trick but was hit and miss. It was difficult to get it to blend well with the original; you always get a seam between the two. I learned from Sebastian Kamph to then do a normal img2img generation. This blends the two together, and then you can upscale it.

During my research, however, I came across a different method that removes the need for any "outpainting". You instead use the Latent Couple extension in txt2img. With it you can assign a part of the prompt to a specific region of the image.
If you want a normal 16:9 widescreen image, this division and these settings (see example) have been working best for me.

Latent couple.png

You separate the prompt with "AND" for each region. I write all the light and image-quality tags for the first region, the subject tags for the 2nd, and the background and/or scenery for the 3rd.
Here's what a prompt can look like:

Widescreen prompt.png
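In case the screenshot doesn't load, here is a made-up example of a prompt in that three-region format; the tags are purely illustrative, not taken from the image:

Code:
(masterpiece, best quality), soft window light, sharp focus
AND 1girl, blonde, ponytail, sports bra, smiling, standing, full body
AND modern gym interior, rows of treadmills, large windows, morning light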

If you are going to use a Lora like me, you also need the "Composable Lora" extension.
You can also assign the negative prompt to each region in the same way by separating it with "AND", though it's not always necessary. Use the same value for "end at this step" as your sampling steps.
You can move the subject within the image by changing the position value for the 2nd region, 0:0.7 for example.
This will shift it off center in the image. Then press "visualize" to apply the new setting.

Latent couple2.png
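For reference, the extension's fields for a three-region 16:9 setup would look roughly like this. The divisions and weights are my own guesses rather than the values in the screenshot, so treat them as a starting point to tweak:

Code:
Divisions: 1:1,1:3,1:1
Positions: 0:0,0:1,0:0
Weights:   0.2,0.8,0.5
end at this step: 30  (match your sampling steps)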

Set the resolution of the entire image in txt2img, for example 960x540; write your prompt and separate the regions with "AND", and do the same with the negative prompt if needed.
Select your sampler, steps, CFG etc. as normal, set up the Latent Couple and Composable Lora settings, then generate.
To take it even further, you can also use openpose to control the pose of the subject, and to bump up the quality you can either use hires fix with the primary generation or the SD upscale script in img2img.
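Pulling the steps together, one plausible combination of settings (the sampler and hires values here are just my own habits, not from the post) would be:

Code:
txt2img size: 960x540
Sampler: DPM++ 2M Karras, 30 steps, CFG 7
Latent Couple: enabled, end at this step = 30
Composable Lora: enabled
Quality pass: hires fix at 2x, or the SD upscale script in img2img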

Source tutorial:
 
Last edited:

FallingDown90

Member
Aug 24, 2018
123
38
Hi, sorry, English isn't my first language and it's hard to find the answer if you've already written it.
What is the way to generate specific anime characters? (I use Stable Diffusion with LoRA)
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,570
3,767
I have been getting a private 1-on-1 lesson with instructor Kendra... Finally I got into ControlNet and openpose after procrastinating for a long time. …
Nice gym! Like, I kid you not: in my experience, generating a nice-looking gym is by far the hardest.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
00012-2355586765.png

I was experimenting with and learning the widescreen method described above when I saw the posts by Seph and me3.
This inspired me to also work more with openpose and the editor. You don't need to use a 3D editor or set the pose manually: there are ready-made pose packs on civitai, or you can use the openpose editor's awesome "detect from image" feature. Simply press "detect from image", select a prepared image of your choosing from wherever, fine-adjust it if needed, then click "send to txt2img". Now you can replicate that image with your prompt or change it into something else. Combine this with the widescreen method described above and you have achieved god-like powers.
To get good results I recommend creating a widescreen image in an image editor like Photoshop with the correct ratio, positioning the subject within it, and then using it for detection in the openpose editor.
Detect From Image.png
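If you ever want to do the "detect from image" step outside the webui, a minimal Python sketch with the controlnet_aux package does much the same thing. The file names are placeholders, and this assumes you have the controlnet-aux and Pillow packages installed:

Code:
# pip install controlnet-aux pillow
from PIL import Image
from controlnet_aux import OpenposeDetector

# Downloads the OpenPose annotator weights from the Hugging Face hub.
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# "reference.jpg" is a placeholder for the image you want to pose-match.
reference = Image.open("reference.jpg").convert("RGB")

# Returns a PIL image of the detected skeleton, ready for ControlNet's openpose model.
pose = detector(reference)
pose.save("pose.png")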

 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Hi, sorry, English isn't my first language and it's hard to find the answer if you've already written it.
What is the way to generate specific anime characters? (I use Stable Diffusion with LoRA)
We need a lot more info than this in order to help properly. In general you need to work with the prompt: use the name of the character, specify the style, and describe the scene of the image you want. Pick an appropriate checkpoint model; you can find tons on civitai. Also check whether there is already a Lora or TI etc. for this character. Use openpose for posing the character.
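As a concrete sketch, prompts for specific characters on anime checkpoints usually use booru-style tags, with the series name in escaped parentheses; the character tags and Lora filename below are placeholders:

Code:
masterpiece, best quality, 1girl, character_name \(series_name\), short hair,
school uniform, city street, night <lora:character_lora:0.8>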
 
Last edited:
  • Like
Reactions: FallingDown90

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Nice gym! Like, I kid you not: in my experience, generating a nice-looking gym is by far the hardest.
I'm toying with the idea of training a Lora for this. My Kendra Lora has a gym image in it; perhaps this is why I got a decent result from the simple tags that I used.
 
  • Like
Reactions: Sepheyer and DD3DD

FallingDown90

Member
Aug 24, 2018
123
38
I apologize for the inconvenience, but I need some help.

I installed the LoRA extension following the guide, and then from civitai I downloaded the models, which I put in the path "C:\stable-diffusion\stable-diffusion-webui\models\Lora" (obviously the path was set as indicated in the guide).

Additional Networks is enabled and set as in the prompts.

To create the image I only worked in txt2img.

When I try to generate a specific character (in this case Bea from Pokemon), Stable Diffusion creates something different.

Where am I going wrong? If you need more details, or I need to post any settings, extensions or anything else, let me know.

PS: As I said, I'm not a native English speaker, so I apologize if you have already answered this in the past; unfortunately I could not find it.

 
Last edited:
  • Like
Reactions: Mr-Fox

fustylugss

New Member
Apr 14, 2021
3
7
72 hrs of SD.
Poses: ControlNet, openpose.
Only the 1536x3072 one is tile-rendered.
Please do suggest improvements, recommendations or feedback.

00066-2028377397.png

00065-2028377397.png

00000-1471273565.0.jpg
 
Last edited:

onyx

Member
Aug 6, 2016
128
220
I apologize for the inconvenience, but I need some help. I installed the LoRA extension following the guide... When I try to generate a specific character (in this case Bea from Pokemon), Stable Diffusion creates something different. …
From what I gather, the base model.ckpt isn't that great. Check out some of the other checkpoints on civitai. If you look at the bottom of the Bea Lora page you'll see a bunch of renders using that model. If you click on a picture, a lot of the time it will list which checkpoint (model) they used. Find one you like and try rendering the image using that. The checkpoints go in models\Stable-diffusion.

Also, if the image doesn't list the information on civitai, you can save a copy of the picture and upload it to the PNG Info tab. That should also list the prompt/seed/model used.

1690343500916.png
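If you'd rather script it, the same data can usually be read straight from the file, since the webui writes the generation parameters into a PNG text chunk. A couple of lines of Python with Pillow is enough, assuming the site you got the image from didn't strip the metadata:

Code:
from PIL import Image

# Any PNG saved by the A1111 webui; this filename is just an example from this thread.
img = Image.open("00066-2028377397.png")

# Prompt, negative prompt, seed, sampler, model hash etc. live in this chunk.
print(img.info.get("parameters", "no generation data found"))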
 
Last edited:

FallingDown90

Member
Aug 24, 2018
123
38
From what I gather, the base model.ckpt isn't that great. Check out some of the other checkpoints on civitai. …
Nothing... I keep trying, but it seems that all models give me this error:
loaded: <All keys matched successfully> setting (or sd model) changed. new networks created.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I apologize for the inconvenience, but I need some help. I installed the LoRA extension following the guide... When I try to generate a specific character (in this case Bea from Pokemon), Stable Diffusion creates something different. …
You are trying to do too many things at the same time. Focus on one "concept" at a time. For example the spread image.
Then do the masturbation image, then the feet image..
Onyx gave you good advice; listen to him. Try the checkpoint model from the pokemon Lora page. If you find an image you like on civitai, most of the time the generation data is included and you can press the "copy" button, then paste it into a txt document and save it. You will need to copy the positive and negative prompts manually and also copy the settings manually; that's how it works on civitai. Most of the time the image itself doesn't include this data. In this thread we post the png file with the parameters included. Simply go to the PNG Info tab in SD, load the image, and then click "send to txt2img" to try out the prompt and settings of an image.
You'll find your own generated images here: Stable-Diffusion\stable-diffusion-webui\outputs\txt2img-images. The images are sorted into folders by the date they were generated.
It's very helpful that you post screenshots in order to receive help; the png file of a specific image that you want help with would be great as well. That way anyone helping can conveniently see the prompt and settings.
Btw, most people here are not native English speakers, no need to apologize for this.. ;) If you are unsure of the meaning of an expression or a technical word, use Google Translate or ask us here and we'll be happy to answer.

Nothing... I keep trying, but it seems that all models give me this error:
loaded: <All keys matched successfully> setting (or sd model) changed. new networks created.
This is not an error; it's simply confirming that the keys for the Lora matched and that new networks were created.
You don't need to use the Additional Networks extension for Loras, though. SD supports Loras natively.
Press the "show/hide extra networks" button under "Generate", go to the Lora tab, select the one you want and then set the weight. 1 is the default; you can set it lower, but I don't recommend going higher.

Extra Networks tab.png Lora tab.png
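Clicking a Lora card just inserts a tag like the one below into your prompt, so you can also type or edit it by hand. The filename is a placeholder; the number after the second colon is the weight:

Code:
photo of a woman deadlifting in a gym <lora:my_lora_file:0.7>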
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
72 hrs of SD. Poses: ControlNet, openpose. …
This is great. Welcome to the gang. :) 72 hours isn't much though, if you mean that you just started; it's a good start, though. Continue to experiment, this is how you learn. Read guides and watch tutorials. On the first page of this thread you can find links to some of the guides; some of the info might be old, incorrect or redundant at this point, though. You can also use the search bar for this thread and filter for members: use my name Mr-Fox, or Sepheyer, devilKKW, Jimwalrus, Schlonborn, Dag00th, me3, Sharlotte. These are some of the most active members in this thread (though I might be forgetting someone) and we have all made guides or posted tips. You can learn from all of us. There are also some very popular and active SD posters on YouTube; they might not be the most knowledgeable or most correct, but they are great at introducing you to a new tool or technique and showing you the basics.
 

FallingDown90

Member
Aug 24, 2018
123
38
You are trying to do too many things at the same time. Focus on one "concept" at a time. …
Thank you very much, you are very kind.
I have one last question for the moment (I hope).
When you say to focus on one concept at a time, do you mean that I have to reorder the prompt by categories (e.g. quality / appearance / action and pose / body part / related body part), or that I should generate an image first and then update it with a new prompt? (In the latter case, how should I do that?)
 
  • Like
Reactions: Mr-Fox