[Stable Diffusion] Prompt Sharing and Learning Thread

Fuchsschweif

And the NSFW restrictions that I face with e.g. Midjourney or DALL-E 2 aren't present when using SD locally?

Edit: What's the difference between something like "Stable Diffusion v1-5 Model Card" and the models that I find on Civitai?

Is the first one more of an all-rounder model, while the models on Civitai are for specific things only?
 

me3

v1-5 is the base model, so it's a basic bit-of-everything model. It can create very good images, but models from Civitai will generally be better, as long as you choose one that's meant for what you want to create. I.e. a pure anime model is unlikely to create good photorealistic images.

As for NSFW, most models you find on that site can probably create it in some way. There are more specific ones for it too, often including some kind of reference to it in their name... i.e. porn...
 

Fuchsschweif

But can I just pick any random (character) model that I like and generate NSFW pictures with it? I'm just wondering where SD gets the "info" from. So let's say I have only installed the base model v1-5 and then download a model of famous anime character X (not porn specifically).

Now I want to use that model to put that character nude into scenario Y. Does that simply work, and if so, where does SD gather the information on how to create that picture - from the base model? Or do I need to get some kind of porn-database model for SD to be able to pull that off?


*Electricity bills aside - they shouldn't be too much unless you have a crazy multi-GPU setup.
Is there a way to limit the GPU usage to be on the safe side, or is a single-GPU setup not worth tweaking in terms of expenses?
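[One practical way to play it safe, assuming an NVIDIA card: cap the card's power draw with nvidia-smi, e.g. "nvidia-smi -pl 120" to limit the board to 120 W (needs admin rights). Generation gets slightly slower, but consumption stays bounded; for a single-GPU setup the difference on the bill is usually small.]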
 

me3

The model/checkpoint is what contains most of the data/"knowledge". So if you want to make anything nude or more sexually oriented, you need a model that has that knowledge. The base Stable Diffusion 1.5 isn't best suited for that; it should work, but I'm not sure how good the end result would be.
When you're specifically talking about a "model of an anime character", you're probably thinking of a LoRA or an embedding. These are more like addons that provide specific instructions to the model - sort of like a specialist coming in to guide the end result.
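If it helps to picture the relationship, here is a minimal sketch using the diffusers Python library (the LoRA file name anime_character_x.safetensors is a made-up placeholder):

Code:
# pip install diffusers transformers accelerate torch safetensors
import torch
from diffusers import StableDiffusionPipeline

# The checkpoint is the full model and carries the bulk of the "knowledge"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The LoRA is a small add-on that steers that knowledge toward one character/style
pipe.load_lora_weights(".", weight_name="anime_character_x.safetensors")  # placeholder file

image = pipe("anime character x, city street at night").images[0]
image.save("out.png")

The LoRA on its own would be useless; it only modifies weights that the checkpoint provides.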
 

Fuchsschweif

Okay, but are models siloed, or can my Stable Diffusion access all the models I have installed?

E.g. if I install a model specialized in nude stuff and then use the anime character X model for nude generations, does it combine the data from both?
 

Fuchsschweif

Ok something is terribly off :WaitWhat:

So I downloaded this model:

It's installed in the "Lora" folder under models. I picked it in my SD UI and just went with a simple prompt to see the output. This is the result:

[attached image: my generation result]

This doesn't even look remotely like the model :LOL:

Here are my settings. What did I do wrong?

[attached screenshot: my settings]
 

me3

Fuchsschweif said:
Edit: Or did it use the basic SD model? I think it might, because I moved the downloaded model from the model folder to the Lora folder.
But even with the same settings as before, when it was in the model folder and I specifically picked the downloaded model, I got weird results that didn't even remotely look like anime.

Is the LoRA called just "ina"? To easily use LoRAs you can go into the Lora tab and click it there. Also, LoRAs very often have trigger words or combinations of words. If you look at the right side of the LoRA's page on Civitai, you'll see it lists multiple trigger combinations; you need to use one of those in your prompt.

As a general warning: be VERY careful with the word "girl" in your prompt. It has a high chance of giving you an underage character, which would be very bad in the case of NSFW images.
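A typical LoRA prompt then looks something like this (the trigger words here are placeholders; take the real ones from the model's Civitai page):

Code:
masterpiece, best quality, 1woman, <trigger words from the model page>, <lora:ina:0.8>

The <lora:name:weight> tag loads the LoRA file; the trigger words tell it which of its trained concepts to draw.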
 

Fuchsschweif

It's called "ninomae ina'nis 5 outfits.safetensors". Your reply overlapped with my edit; if you look at my post again now, you can see my settings. I tried the exact suggested combination of words. That's the only file that's downloadable from that page. :unsure:

But shouldn't it work with any input? Isn't that the main idea behind the whole thing, that we can create whatever we want?
 

me3

The LoRA you downloaded seems to be made to create one character with different "outfits", so it's very specific in what it wants to do.
You might be able to trigger just the character if you remove the outfit parts from the prompt, but I doubt it'll work fully.
There might be other LoRAs for the same character that are less restricted.
 

Fuchsschweif

But I am not getting anything even remotely close, not even with the full suggested prompt copied from the page. It looks more like the SD base model than the downloaded model.

This is all I have to do, right?

[attached screenshot from the guide]

(Taken from here: )

By the way, I don't seem to have a "show extra networks" button under the Generate button:

[attached screenshot: UI under the Generate button]
 

Mr-Fox

Fuchsschweif said:
I somehow thought SD is DALL-E 2, and since they charge a premium, that there is no free version. Or is DALL-E just another big model that does external rendering, making use of SD, and therefore charges you?

I've got a GTX 1070, does it make sense to generate pictures with that or will it take an eternity to get pictures generated?

Thanks for all the info :)
I have a GTX 1070. You can check out my posts by searching my name to see what you can do with it. This card is enough for most AI generating; it's only if you go very advanced or want to train an SDXL Lora or something that this card will limit you.
 

me3

Yes, you put LoRAs into that directory. HOWEVER, you don't select them in the dropdown for models at the top left of the page.
That dropdown is for models; the LoRA you are trying to use shouldn't even show up in that list unless you've put the file in the wrong place.
In that dropdown, select the 1-5 model which you previously had selected there.
LoRAs (rather small files, from 20-120 MB) are NOT models. They are (simply put) a very small part of a model; they do not work on their own, they add to models (large files, generally 2-6 GB).

Also, that "guide" is wrong on at least one point: a weight of :1 should usually NOT be too high. If it needs to be lowered in the case of things like characters, that LoRA has been overtrained/overfitted and as such is badly made.
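The weight is the number after the last colon in the prompt tag, so dialing a LoRA down looks like this (file name taken from the post above):

Code:
<lora:ninomae ina'nis 5 outfits:0.6>

1 means full strength; a well-made character LoRA should work fine at 1.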
 

Mr-Fox

I would suggest that you start out much simpler to get familiar with using SD, meaning only a checkpoint model and a prompt. The main "skill" to learn is prompting. You need to get proficient at creating prompts to be able to create your images. Only after you have become more familiar with SD and prompting is it time to explore Loras etc.
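In that spirit, a bare-bones starting point is just a checkpoint plus generation settings like these (the prompt text is only an illustration; steps, sampler, CFG and size are A1111's stock defaults):

Code:
Prompt: photo of a woman in a garden, sunlight, detailed face, high quality
Negative prompt: blurry, deformed, extra limbs
Steps: 20, Sampler: Euler a, CFG scale: 7, Size: 512x512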
 

Fuchsschweif

I am good at prompting; I've been using MJ since it came out and get pretty high-quality results with a lot of control over details. But that's only the prompting. My main issue now is getting SD to output somewhat usable pictures in specific styles.

The default SD style is pretty ugly, which is why I wanted to bring in a specific model that aesthetically matches what I envision. But I can't get those models to run; no matter what I try, I get only default SD results.
 

Fuchsschweif

Look, I downloaded this one:

Put it into the Lora folder. Then in Stable Diffusion I pick it from the Lora subtab, and it puts "<lora:hari:1>" into the prompt box.

I add the whole trigger-word thing (just for the sake of demonstration):

"1girl, solo, brown hair, medium hair large breast, brown eyes,"

let's add: in front of a window, at night, interior.

[attached screenshot: my prompt and settings]

Now that's what I get out of the generation:

[attached image: my generation result]


Where's the whole style of the downloaded model? Isn't it supposed to look like this? It's not even night!

[attached image: sample from the model's Civitai page]
 

Mr-Fox

If you take a look at the generation data, you can see that they used this checkpoint: AnyLoRA_noVae_fp16-pruned

They also used a negative Lora, easynegative, and the negative embedding deepnegative:



Embeddings are placed in Stable-Diffusion\stable-diffusion-webui\embeddings.

They also generated this image with hires fix and the upscaler 4x-UltraSharp.

Place it in Stable-Diffusion\stable-diffusion-webui\models\ESRGAN.
They are using clip skip 2; you can find it in Settings.
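To map out where everything from this post goes (the checkpoint and Lora locations are the standard A1111 folders, listed here for completeness):

Code:
stable-diffusion-webui\
    embeddings\              <- easynegative, deepnegative
    models\
        Stable-diffusion\    <- checkpoints, e.g. AnyLoRA_noVae_fp16-pruned
        Lora\                <- LoRA files
        ESRGAN\              <- upscalers, e.g. 4x-UltraSharp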

[attached images: 00013-4188456918.png, 00014-1782139720.png]
Enlarge the images in your browser and download them. Go to the PNG Info tab in SD, load an image, and press "send to txt2img". This will fill in all the data for you, because the PNG files contain the generation data (prompt etc.). Now you only need to make sure that you have all the models in the right folders, and then you can start generating. Don't forget to set the seed to -1 if you don't want the exact same image as me.
 

Fuchsschweif

Thanks, I am downloading the checkpoint right now and will then proceed with your instructions!

Can you explain what the difference is between a checkpoint and a model? Checkpoints seem to be more like a basic database, and models are very specific modules?

And the negative embeddings are just for excluding / specifying what one doesn't want? In MJ you would just add a "-" before the term, e.g. "-hands" would make MJ try to avoid showing hands.

Why are there two different embeddings used for negatives?

And I assume that with the merge tool I can combine two checkpoints or models and get both to contribute to the generations?

Edit:

Did everything now except the "PNG Info -> send to txt2img" step you described at the end.
I just took the generation data from the Civitai site (the prompts and negative prompts) and brought the sampling steps up to 30. It's way better now, but the style still looks different - any idea why? (I also activated the restore-faces function, as suggested on the first page of this thread.)


[attached image: the original from Civitai]

This is what it looks like on my side:

[attached image: my result]
 

Mr-Fox

Stable Diffusion is the AI platform. Stable Diffusion 1.5 is the base model or generation (as in "version"). The checkpoint models are custom models that have been trained on top of SD 1.5. You can think of it as a checkpoint in a race: the SD 1.5 model has been custom-trained on a certain type of dataset material, so it will have a certain style, certain characters, or some other quality. It's called a checkpoint because this is the state that this particular SD 1.5 model has been trained to; it has reached this checkpoint. It's like a snapshot of the state the model is in at that point in its training.
So when you download Super Duper Anime Checkpoint, it is the state of an SD 1.5 model that you are downloading.
That is the convention behind the name "checkpoint". I hope I'm being clear; it's an unnecessarily complicated naming convention and naturally very confusing to most. A Lora is an added "mini model" that guides SD towards a specific result. It can be a concept, a character, a style. Embeddings are similar to Loras, just a different implementation. They are essentially added input models on top of the checkpoint model.
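In code terms the layering looks roughly like this - a sketch with the diffusers Python library, where both file names stand in for whatever you downloaded from Civitai:

Code:
import torch
from diffusers import StableDiffusionPipeline

# A Civitai checkpoint is a complete SD 1.5 model saved as a single file
pipe = StableDiffusionPipeline.from_single_file(
    "SuperDuperAnimeCheckpoint.safetensors", torch_dtype=torch.float16
).to("cuda")

# An embedding (textual inversion) is a tiny add-on loaded on top of it
pipe.load_textual_inversion("easynegative.safetensors", token="easynegative")

image = pipe(
    "anime girl in a garden",
    negative_prompt="easynegative",  # the embedding is activated by its token
).images[0]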
 

Fuchsschweif

So the checkpoints are basically addons, and the Loras are something like "prompt collections"?

I think your reply overlapped with the edit on my post above yours. I tried it first with only the prompts carried over from Civitai, but got different results. Now, with the data I extracted from your PNG, the result is way closer (though it takes way longer).

How did you know what to add? I see you had the upscaler, denoising, refiner and all of that in your settings.
And this: [attached screenshot]
Or, to put my question differently: how would I gather this information from the original model page on Civitai? Just the same way, by downloading the picture and extracting the settings?

Funny thing: in my own version (as above), when I added "nipples" they didn't show. Now, with the extracted settings from the original, they do. Any idea why?

I am asking because I of course want to understand the mechanisms behind it, so that I can create more freely and not just copy and paste prompts all the time. For instance, I have to understand what changed with the extracted settings that made SD listen more closely to my input.

Aaaaand a last question: can I basically turn off the upscaler and only upscale pictures I like later, on the Extras page, in order to save time?

As I am typing this, the generation with the extracted settings from your PNG is done. It's quite close now, but the colors are still faded!

[attached image: my result with your settings]
 

Mr-Fox

They used a negative Lora and a negative embedding. Both are negative inputs, meaning things you don't want. The reason for using both, I assume, is that they don't do the same thing. Most of the time I don't use these negative models personally, since I don't have direct control over them. Don't worry about merging checkpoints until you are really familiar and proficient with SD; instead, download other people's merged checkpoints from Civitai for now. The difference between a trained checkpoint and a merged checkpoint is that a merged checkpoint combines two or more checkpoints into one. This can result in very flexible and good checkpoints if you know what you are doing, or it can create a completely useless mess. Unfortunately some people think that the more you throw at the wall, the more will stick, so many merged checkpoints on Civitai are simply not worth your time. We can only find out if one is good by testing it ourselves and/or listening to the recommendations of people whose judgment we trust.
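For what it's worth, the simplest merge mode in the A1111 checkpoint merger ("weighted sum") is just an interpolation of the two models' weights, with M as the multiplier slider:

Code:
merged = (1 - M) * model_A + M * model_B    # M = 0.5 gives an even 50/50 blend

So a merge doesn't add new knowledge; it blends what the two checkpoints already know, for better or worse.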
 