[Stable Diffusion] Prompt Sharing and Learning Thread

Sepheyer

Well-Known Member
Dec 21, 2020
1,528
3,598
I have been getting private one-on-one lessons with instructor Kendra..
View attachment 2799037

Finally I got into ControlNet and OpenPose after procrastinating for a long time. I just thought it looked busy and a bit involved, so I was sticking to what I knew and focusing on other aspects of SD. In the pursuit of generating widescreen images, I learned that ControlNet and its complementary extensions were probably the answer. I first learned the "outpainting" method: generate a normal upright portrait-ratio image, then, with SD upscale and the "Resize and fill" option selected, "outpaint" the rest with ControlNet inpaint. This did the trick but was hit and miss. It was difficult to get the new area to blend well with the original; you always get a seam between the two. I learned from Sebastian Kamph to then do a normal img2img generation, which blends the two together, and then you can upscale it.

During my research, however, I came across a different method that removes the need for any "outpainting": the Latent Couple extension in txt2img. With it you can assign a part of the prompt to a specific region of the image.
If you want a normal 16:9 widescreen image, the division and settings below (see the example) have been working best for me.

View attachment 2799048
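In case the attachment doesn't load for you, here is a rough sketch of what the Latent Couple fields can look like for a three-region setup. The concrete numbers are only an illustration of the syntax (they happen to be close to the extension's defaults), not the exact values from my screenshot:

Code:
Divisions: 1:1,1:2,1:2     (each entry is rows:columns for a region)
Positions: 0:0,0:0,0:1     (each entry is the row:column where the region sits)
Weights:   0.2,0.8,0.8     (how strongly each region's sub-prompt applies)
end at this step: 30       (set equal to your sampling steps)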

You separate the prompt with "AND" for each region. I write all the light and image-quality tags for the first region, the subject tags for the second, and the background and/or scenery for the third.
Here's what a prompt can look like:

View attachment 2798990
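In case that attachment doesn't load either, a prompt along those lines could look like this. All the tags and the LoRA name are made-up placeholders, just following the region order described above:

Code:
masterpiece, best quality, cinematic lighting, sharp focus
AND 1girl, athletic, ponytail, sports bra, smiling, <lora:exampleGirl_v1:0.8>
AND modern gym interior, exercise equipment, large windows, sunlight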

If you are going to use a LoRA like I do, you also need the "Composable Lora" extension.
You can also assign the negative prompt to each region in the same way, by separating with "AND", though it's not always necessary. Use the same value for "end at this step" as your sampling steps.
You can move the subject within the image by changing the position value for the 2nd region, to 0:0.7 for example.
This will shift it off-center in the image. Then press "visualize" to apply the new setting.

View attachment 2799051
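As a hypothetical illustration of both points (the values and tags here are mine, not from the screenshot), shifting the subject region off-center and splitting the negative prompt per region might look like this:

Code:
Positions: 0:0,0:0.7,0:1    (2nd region moved off-center, as described above)
Negative prompt: lowres, jpeg artifacts AND bad anatomy, extra limbs AND watermark, text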

Set the resolution of the entire image in txt2img, for example 960x540; write your prompt and separate the regions with "AND", and do the same for the negative prompt if needed.
Select your sampler, steps, CFG, etc. as normal, set up the Latent Couple and Composable Lora extensions, then generate.
To take it even further, you can also use OpenPose to control the pose of the subject, and to bump up the quality you can either use hires fix with the primary generation or the SD upscale script in img2img.
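To pull the whole recipe together, here is a sketch of what such a txt2img setup can look like. The sampler, step count and CFG below are arbitrary examples of my own, not prescriptions:

Code:
Width x Height  : 960 x 540   (16:9, as in the example above)
Sampler / Steps : DPM++ 2M Karras / 30   (any sampler works)
CFG scale       : 7
Latent Couple   : enabled, "end at this step" = 30 (same as the step count)
Composable Lora : enabled (only needed if a LoRA is used)
Hires fix       : optional, for the quality bump mentioned above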

Source tutorial:
Nice gym! Like, I kid you not, but in my experience generating a nice-looking gym is by far the hardest.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
00012-2355586765.png

I was experimenting with and learning the widescreen method described above when I saw the posts by Seph and me3.
This inspired me to also work more with OpenPose and the editor. You don't need to use a 3D editor or set the pose manually. There are ready-made pose packs that you can find on civitai, or you can use the OpenPose editor's awesome "detect from image" feature. Simply press "detect from image", select a prepared image of your choosing from wherever, fine-adjust it if needed, then click "send to txt2img". Now you can replicate that image with your prompt or change it into something else. Combine this with the widescreen method described above and you have achieved god-like powers.
To get good results, I recommend creating a widescreen canvas in an image editor like Photoshop with the correct ratio, positioning the subject within it, and then using it for detection in the OpenPose editor.
Detect From Image.png

 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Hi, sorry, I'm not a native English speaker and it's hard to find the answer if you've already written it.
What is the way to generate specific anime characters? (I use Stable Diffusion with LoRA)
We need a lot more info than this in order to help properly. In general, you need to work with the prompt: use the name of the character and specify the style. Describe the scene of the image you want. Pick an appropriate checkpoint model; you can find tons on civitai. Also check whether there is already a LoRA or TI etc. for this character. Use OpenPose for posing the character.
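For example, a character prompt could be structured like the sketch below. The tags and the LoRA filename are invented placeholders; check the LoRA's civitai page for its actual trigger words:

Code:
masterpiece, best quality, anime style, 1girl, bea \(pokemon\), short hair, gym uniform, fighting stance
<lora:beaPokemon_v1:0.8>
Negative prompt: lowres, bad anatomy, extra fingers, watermark, text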
 
  • Like
Reactions: FallingDown90

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Nice gym! Like, I kid you not, but in my experience generating a nice-looking gym is by far the hardest.
I'm toying with the idea of training a LoRA for this. My Kendra LoRA has a gym image in it; perhaps this is why I got a decent result from the simple tags I used.
 
  • Like
Reactions: Sepheyer and DD3DD

FallingDown90

Member
Aug 24, 2018
113
38
I apologize for the inconvenience, but I need some help.

I installed LoRA following the guide, and then from civitai I downloaded the models, which I placed in the path "C:\stable-diffusion\stable-diffusion-webui\models\Lora" (obviously the path was saved as indicated in the guide).

Additional Networks is enabled and set as in the prompts.

To create the image I only worked in txt2img.

When I try to generate a specific character (in this case Bea from Pokémon), Stable Diffusion creates something different.

Where am I going wrong? If you need more details, or if I need to post any settings, extensions or anything else, let me know.

PS: As I said, I'm not a native English speaker, so I apologize if you have already answered this in the past; unfortunately I could not find it.

 
  • Like
Reactions: Mr-Fox

fustylugss

New Member
Apr 14, 2021
3
7
72 hours of SD.
Poses: ControlNet, OpenPose.
Only the 1536x3072 image is tile rendered.
Please suggest improvements, recommendations or feedback.

00066-2028377397.png

00065-2028377397.png

00000-1471273565.0.jpg
 

onyx

Member
Aug 6, 2016
128
218
FallingDown90 said: I installed LoRA following the guide and placed the models from civitai in models\Lora, but when I try to generate a specific character (Bea from Pokémon), Stable Diffusion creates something different. Where am I going wrong? (Quoted in full above.)
From what I gather, the base model.ckpt isn't that great. Check out some of the other checkpoints on civitai. If you look at the bottom of the Bea LoRA page you'll see a bunch of renders using that model. If you click on a picture, a lot of the time it will list which checkpoint (model) was used. Find one you like and try rendering the image using that. The checkpoints go in models\Stable-diffusion.
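For reference, the relevant folders in a default webui install are laid out roughly like this (a sketch; exact casing and layout can vary between installs):

Code:
stable-diffusion-webui\
├── models\
│   ├── Stable-diffusion\   <- checkpoint files (.ckpt / .safetensors) go here
│   └── Lora\               <- LoRA files go here
└── outputs\
    └── txt2img-images\     <- generated images, sorted into folders by date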

Also, if an image doesn't list the information on civitai, you can save a copy of the picture and upload it to the PNG Info tab. That should also list the prompt/seed/model used.

1690343500916.png
 

FallingDown90

Member
Aug 24, 2018
113
38
onyx said: From what I gather, the base model.ckpt isn't that great; try one of the other checkpoints on civitai, or check the PNG Info tab. (Quoted in full above.)
Nothing... I keep trying, but it seems that all models give me this error:
loaded: <All keys matched successfully> setting (or sd model) changed. new networks created.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
FallingDown90 said: I installed LoRA following the guide, but when I try to generate a specific character (Bea from Pokémon), Stable Diffusion creates something different. Where am I going wrong? (Quoted in full above.)
You are trying to do too many things at the same time. Focus on one "concept" at a time, for example a spread image.
Then do the masturbation image, then the feet image.
Onyx gave you good advice; listen to him. Try the checkpoint model listed on the Pokémon LoRA's page. If you find an image you like on civitai, most of the time the generation data is included and you can simply press the "copy" button, paste it into a txt document and save it. You will need to copy-paste the positive and negative prompts manually and also copy the settings manually; that's how it works on civitai. Most of the time the image itself doesn't include this data. In this thread we post the png file with the parameters included: simply go to the PNG Info tab in SD, load the image, and then click "send to txt2img" to try out the prompt and settings of an image.
You can find your own generated images here: Stable-Diffusion\stable-diffusion-webui\outputs\txt2img-images. The images are sorted into folders by the date they were generated.
It's very helpful that you post screenshots in order to receive help; the png file of a specific image you want help with would be great as well. This way anyone helping can conveniently see the prompt and settings.
Btw, most people here are not native English speakers, so no need to apologize for that. ;) If you are unsure of the meaning of an expression or a technical word, use Google Translate or ask us here and we'll be happy to answer.

Nothing... I keep trying, but it seems that all models give me this error:
loaded: <All keys matched successfully> setting (or sd model) changed. new networks created.
This is not an error; it's simply confirming that the keys for the LoRA matched and that new networks were created.
You don't need to use the Additional Networks extension for LoRAs, though. SD supports LoRAs natively.
Press the "show/hide extra networks" button under "Generate", go to the Lora tab, select the one you want, and then set the weight. 1 is the default; you can set it lower, but I don't recommend going higher.

Extra Networks tab.png Lora tab.png
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
fustylugss said: 72 hours of SD. Poses: ControlNet, OpenPose. Only the 1536x3072 image is tile rendered. Please suggest improvements, recommendations or feedback. (Images quoted above.)
This is great. Welcome to the gang. :) 72 hours isn't much, if you mean you just started, but it's a good start. Continue to experiment; this is how you learn. Read guides and watch tutorials. On the first page of this thread you can find links to some of the guides, though some of the info might be old, incorrect or redundant at this point. You can also simply use the search bar for this thread and filter by member; use my name Mr-Fox, or Sepheyer, devilKKW, Jimwalrus, Schlonborn, Dag00th, me3, Sharlotte. These are some of the most active members in this thread (though I might be forgetting someone) and we have all made guides or posted tips. You can learn from all of us. … and … are some of the most popular and active posters on YouTube. They might not be the most knowledgeable or the most correct, but they are great at introducing you to a new tool or technique and showing you the basics.
 

FallingDown90

Member
Aug 24, 2018
113
38
Mr-Fox said: You are trying to do too many things at the same time; focus on one "concept" at a time. Onyx gave you good advice. You don't need the Additional Networks extension; SD supports LoRAs natively. (Quoted in full above.)
Thank you very much, you are very kind.
I have one last question for the moment (I hope).
When you say to focus on one concept at a time, do you mean that I have to reorder the prompt by categories (e.g. quality / appearance / action and pose / body part / related body part), or that I have to generate an image first and then update it with a new prompt? (In the latter case, how should I do that?)
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
FallingDown90 said: When you say to focus on one concept at a time, do you mean reordering the prompt by categories, or generating an image first and then updating it with a new prompt? (Quoted in full above.)
I meant focus on generating images with only one concept to begin with. After you get good results with one, go to the next concept and focus on that. After you have started to get consistent results with one concept per image, you can start to try more than one concept per image. To learn and to see what affects what, you need to exclude as many variables as possible. Start by trying to get a good image of the legs spread; when you are getting good results, go on to the next concept. One concept is difficult enough even for an experienced SD user; no need to make it more difficult than it needs to be. Also read the advice I gave to fustylugss.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
General tips for basic prompting: keep it simple. Avoid a ton of different tags saying the same thing; it doesn't help anything.
Don't use too many additions like LoRAs or TIs; getting one to work correctly is hard enough. Use simple, clear phrases and descriptions. SD "understands" some technical language, but most of the time assume it doesn't and describe things instead. It's not real AI; there is no intelligence here, no "singularity". This is machine learning (deep learning) and algorithms. SD is language-to-image generation built on machine-learning language models. For SD to "know" anything, someone must have trained it with an image and an accompanying prompt or description that it then "learns" to associate with the concept of that image.

General tips in general... :D No. 1: get a decent checkpoint model that is good at the style you wish to achieve; it can't do anything it hasn't been trained to do. No. 2: the prompt is the most powerful tool we have, the negative prompt in particular. No. 3: if the checkpoint and prompt don't achieve the desired result, now is the time to add a LoRA or TI. First try with only the checkpoint and prompt, though.
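As a minimal sketch of what "keep it simple" can mean in practice (the tags are hypothetical), a clean starting point can be as short as this:

Code:
photo of a woman lifting weights in a gym, natural light, detailed face
Negative prompt: lowres, blurry, bad anatomy, watermark, text

Start from something like this with only a checkpoint, and add a LoRA or TI only if the checkpoint and prompt alone don't get you there.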
 

FallingDown90

Member
Aug 24, 2018
113
38
You have been really clear and very helpful, thank you very much. Leaving aside the perverse part, with your advice I think I will be able to make the most of SD, including creating references for my work. Let's hope so... In the meantime, if one day I manage to fix Bea, I'll show you the results.
 
  • Red Heart
  • Like
Reactions: Sepheyer and Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
FallingDown90 said: You have been really clear and very helpful, thank you very much... if one day I manage to fix Bea, I'll show you the results. (Quoted in full above.)
You're welcome. :) Sounds great, I'm looking forward to it. (y)
 
  • Like
Reactions: Sepheyer

FallingDown90

Member
Aug 24, 2018
113
38
A stupid question... From the render previews it looks like it achieved the goal, but I think it crashed and didn't save the image. Is there a way to figure out whether I am wasting time waiting for something to move? Or is there something I can do to save what can be saved?