[Stable Diffusion] Prompt Sharing and Learning Thread

jarman Kell

New Member
Oct 11, 2020
9
8
And to add to what devikkw said, it's also down to the images the checkpoint has been trained on. It looks great regardless, good job. :) (y)
I'm using everything v3. I've tried everything v4 and the lighting seems off. After a while I will try removing the armor prompt and generating again. Thanks again.

Also, on an off note, could I post an AI-generated image here and ask what checkpoint was used?

Basically, I'm asking whether the people of this forum recognize most of the checkpoints people use for AI generation. Thanks again.
 

jarman Kell

New Member
Oct 11, 2020
9
8
There are models on Civitai. I've had a go with both (in conjunction with the SunsetRiders LoRA) and they both seem OK.
Hello, I'm relatively new to Stable Diffusion and I've seen a lot of posts mentioning LoRAs. What are these? Are they checkpoints?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
I'm using everything v3. I've tried everything v4 and the lighting seems off. After a while I will try removing the armor prompt and generating again. Thanks again.

Also, on an off note, could I post an AI-generated image here and ask what checkpoint was used?

Basically, I'm asking whether the people of this forum recognize most of the checkpoints people use for AI generation. Thanks again.
Now that there are so many different checkpoints, the only way to know is to ask the person who generated the image.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Hello, I'm relatively new to Stable Diffusion and I've seen a lot of posts mentioning LoRAs. What are these? Are they checkpoints?
It's like a mini checkpoint that is injected into the image generation for a more controlled result. It can be of a style, a character, or some other concept.

" - LoRA. Low-Rank Adaptation. 9-400MB in size. An advanced type of embedding that used to require crazy amounts of vRAM on your GPU, but now works OK with as little as 8GB. If you want to generate a very specific type of thing, such as a specific model of car or a celebrity, you might use a LoRA. The advantage is they can be trained on a small number of images (as few as 3!). The disadvantage is that they often take over and don't always play well with other LoRAs. They have .ckpt, .pt or .safetensors file extensions. Make sure you don't put them in the \models\stable-diffusion folder; they go in the models\Lora folder. "

You can find more information in this awesome glossary by none other than the eminent Jimwalrus.
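Since the glossary entry above mentions how small LoRA files are compared to checkpoints, here is a toy numpy sketch (my own illustration, not Stable Diffusion code) of the idea behind low-rank adaptation: the trained update to a frozen weight matrix is stored as two small low-rank factors instead of a full-size matrix.

```python
import numpy as np

# Toy illustration of why LoRA files are small: instead of storing a full
# d x k weight delta, a LoRA stores two low-rank factors B (d x r) and
# A (r x k), with rank r much smaller than d and k.
d, k, r = 1024, 1024, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen base checkpoint weight
B = rng.standard_normal((d, r)) * 0.01   # trained low-rank factor
A = rng.standard_normal((r, k)) * 0.01   # trained low-rank factor
scale = 0.8                              # the "LoRA weight" slider in the UI

W_adapted = W + scale * (B @ A)          # weight actually used at generation time

full_params = d * k
lora_params = d * r + r * k
print(f"full delta: {full_params:,} params, LoRA factors: {lora_params:,} params")
```

This is also why a too-strong `scale` can "take over" the output: the low-rank update dominates the base weights.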
 

Kaseijin

Active Member
Jul 9, 2022
590
1,000
Hi!
I haven't done anything yet because my PC died. In the meantime, while I get a new one, I have a question for you guys.
There is a page that "transforms" images of HS2 into nearly real-looking ones.
Take a look:
[spoiler image]

My question is:
Is there a way to do the opposite? I mean, input a real portrait image and get an HS/AIS-style image?
 
  • Like
Reactions: Mr-Fox and Sepheyer

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
Hi!
I haven't done anything yet because my PC died. In the meantime, while I get a new one, I have a question for you guys.
There is a page that "transforms" images of HS2 into nearly real-looking ones.
Take a look:
[spoiler image]

My question is:
Is there a way to do the opposite? I mean, input a real portrait image and get an HS/AIS-style image?
Sure, in fact if you look at the sample images on that page, many of them take a real photo and convert it.
I don't actually know what "HS/AIS" means, but I'm assuming it's an art style similar to that of the first image above, in which case there should be quite a few checkpoints you can use as the base.
Put the original into img2img, set a medium 'Denoising strength' and the right checkpoint, add a prompt, and away you go.
Once your PC is up and running you can give it a try; the experimentation is part of the fun of this.
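To make the img2img advice above concrete, here is a heavily simplified numpy sketch (an assumed illustration, not the actual Stable Diffusion pipeline) of what 'Denoising strength' controls: how much noise gets mixed into the source image before the model denoises it back under your prompt.

```python
import numpy as np

# Conceptual sketch (assumed simplification): img2img partially noises the
# source image according to denoising strength, then the model denoises it
# while following the prompt and checkpoint.
rng = np.random.default_rng(42)
source = rng.uniform(0.0, 1.0, size=(64, 64))   # stand-in for the photo's latent

def img2img_start(latent, strength, rng):
    """Blend the source latent with noise: strength=0 keeps the photo
    untouched, strength=1 is pure noise (effectively txt2img)."""
    noise = rng.standard_normal(latent.shape)
    return (1.0 - strength) * latent + strength * noise

mild = img2img_start(source, 0.3, rng)    # keeps composition, restyles it
heavy = img2img_start(source, 0.9, rng)   # mostly ignores the photo
```

The "medium denoising strength" advice lands between the two extremes: enough noise for the checkpoint to restyle the portrait, not so much that the composition is lost.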
 

jarman Kell

New Member
Oct 11, 2020
9
8
Does anyone know where upscalers go in the Stable Diffusion folder, so that they appear in the dropdown menu for upscalers?
 

Kalpod42

New Member
Jul 28, 2020
3
1
I'm so damn confused by SD. I'm just getting started with it, and I've been able to make some pretty good images, or at least ones I'm reasonably happy with. That said, sometimes it seems to get "stuck" in a pose or idea, and no matter what I change in the prompt or what image I use in img2img, it can't let go.

For example, I'm trying to get an anal missionary view. I'm using this image as the start:
[spoiler image]

These are the sorts of images I'm getting:
[spoiler image]
The first is straight up unholy, and while the second one isn't nearly as bad, it's not what I'm trying to generate, which is very explicit in the prompt. Is there something I'm missing here?

FWIW, I've tried messing with the seed, though my understanding is that it shouldn't matter as much in img2img?
 

jarman Kell

New Member
Oct 11, 2020
9
8
Not only there; if the upscaler is ESRGAN-based it goes there, but there are other formats, like SwinIR, and those go in the SwinIR folder.
Is there a way to tell what they are based on? Most of the ones I download from Civitai are .safetensors files.
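For reference, here is a sketch of the relevant models folder layout in a standard AUTOMATIC1111 webui install (folder names are from memory and may differ between versions):

```
stable-diffusion-webui/
└── models/
    ├── Stable-diffusion/   # checkpoints (.ckpt / .safetensors)
    ├── Lora/               # LoRA files
    ├── ESRGAN/             # ESRGAN-based upscalers (usually .pth)
    └── SwinIR/             # SwinIR upscalers
```

If an upscaler's Civitai or GitHub page doesn't say which architecture it uses, the model card or filename usually hints at it (e.g. "4x-UltraSharp" is ESRGAN-based).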
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Sure, in fact if you look at the sample images on that page, many of them take a real photo and convert it.
I don't actually know what "HS/AIS" means, but I'm assuming it's an art style similar to that of the first image above, in which case there should be quite a few checkpoints you can use as the base.
Put the original into img2img, set a medium 'Denoising strength' and the right checkpoint, add a prompt, and away you go.
Once your PC is up and running you can give it a try; the experimentation is part of the fun of this.
HS (Honey Select), AIS (AI Girl).
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
I did put them in that folder and restarted the webui, but they're not appearing, which is kind of weird. I will have to do some more digging. Thanks again.
Did you do a full restart or only a webui reload? You need to close the cmd window and restart webui-user.bat.
 
  • Like
Reactions: Jimwalrus

Sharinel

Active Member
Dec 23, 2018
519
2,151
I'm so damn confused by SD. I'm just getting started with it, and I've been able to make some pretty good images, or at least ones I'm reasonably happy with. That said, sometimes it seems to get "stuck" in a pose or idea, and no matter what I change in the prompt or what image I use in img2img, it can't let go.

The first is straight up unholy, and while the second one isn't nearly as bad, it's not what I'm trying to generate, which is very explicit in the prompt. Is there something I'm missing here?

FWIW, I've tried messing with the seed, though my understanding is that it shouldn't matter as much in img2img?
From a very quick look at your PNGs, it's absolutely positively definitely (maybe) your denoising strength.

You have yours at 0.75, which is starting to get into "Fuck this prompt, imma do my own thing" territory; anything much over 0.2 starts to melt away the original image.

Here's a plot I made of the original photo (I didn't download the LoRAs, so I used the jpg as the basis).
xyz_grid-0001-216977857.jpg

If you look at 0.5, things are already starting to go wonky, and at 0.75 it's nothing like the original. Try toning it down a bit, or do your own x/y/z plot to see how it would look if you have a pic you think you could use as a base.
And apologies if you're already aware of this, but you'll find this in the script area; x/y/z plot is one of the dropdown options. Set it up as shown. Just try not to have 4 batches like I did, as I'm a muppet :)

1683569608732.png
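A rough sketch of why higher denoising strength drifts further from the source, following the common img2img implementations (an assumed simplification, not the exact webui code): the strength decides how many of the scheduled denoising steps actually run, on a correspondingly noisier version of your image.

```python
# Higher strength => the scheduler starts from a noisier latent and runs
# more of the denoising steps, giving the model more room to reinvent
# the image instead of preserving it.
def steps_actually_run(num_inference_steps: int, strength: float) -> int:
    """Number of denoising steps applied to the source image."""
    return min(int(num_inference_steps * strength), num_inference_steps)

for s in (0.2, 0.5, 0.75):
    print(f"strength {s}: {steps_actually_run(20, s)} of 20 denoising steps")
```

At 0.75 most of the schedule runs, which matches the grid above: the result keeps little more than a rough color layout from the original.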
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
I'm so damn confused by SD. I'm just getting started with it, and I've been able to make some pretty good images, or at least ones I'm reasonably happy with. That said, sometimes it seems to get "stuck" in a pose or idea, and no matter what I change in the prompt or what image I use in img2img, it can't let go.

For example, I'm trying to get an anal missionary view. I'm using this image as the start:
[spoiler image]

These are the sorts of images I'm getting:
[spoiler image]
The first is straight up unholy, and while the second one isn't nearly as bad, it's not what I'm trying to generate, which is very explicit in the prompt. Is there something I'm missing here?

FWIW, I've tried messing with the seed, though my understanding is that it shouldn't matter as much in img2img?
It will only get "stuck" if you are using the same seed over and over, or if you are using a dominant embedding, hypernetwork, LoRA etc. Did you try lowering the denoising strength? If you don't get the desired result despite the prompt, increase the CFG scale. Keep in mind that the quality of the source image can affect the quality of the generated image.
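A minimal stdlib illustration of the seed point above (using Python's random module as a stand-in for the sampler's noise source, which is an assumption for illustration only): the same seed always reproduces the same starting noise, which is why reusing a seed keeps pulling generations toward the same result, while a fresh seed gives a fresh starting point.

```python
import random

def starting_noise(seed: int, n: int = 4):
    """Deterministic pseudo-noise: the same seed yields the same values."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

same_a = starting_noise(1234)
same_b = starting_noise(1234)   # identical: "stuck" on the same starting point
fresh = starting_noise(9999)    # different noise, different composition
```

In img2img the source image constrains the result as well, which is why the seed matters less there than in txt2img.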

I reworked the prompt a little
00038-1583820884.png