[Stable Diffusion] Prompt Sharing and Learning Thread

Jimwalrus

Well-Known Member
Sep 15, 2021
1,053
4,035
Is it true that prompts (negative and positive) are limited in tokens? Like I have a 1000-word negative prompt xD
Nope. Back in the very early days it was limited to 75 tokens; now it's effectively unlimited, the prompt just gets split into 75-token chunks.
Which 75-token chunk a given term lands in may have some effect on the image, but that's very hard to separate from more general prompt-order effects.
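If you're curious how a long prompt gets chunked, here's a rough sketch using the Hugging Face CLIP tokenizer that SD 1.x models are built on (A1111 has its own wrapper around the same tokenizer, so treat this as an approximation of what the webui does):

```python
# Rough sketch: count the CLIP tokens in a prompt and see how it would split
# into 75-token chunks (SD 1.x text encoder; A1111 wraps this same tokenizer).
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "masterpiece, best quality, " + ", ".join(["extremely detailed"] * 60)

# add_special_tokens=False so we count only prompt tokens, not the BOS/EOS
# markers that pad each chunk out to CLIP's 77-token window.
ids = tokenizer(prompt, add_special_tokens=False).input_ids
chunks = [ids[i:i + 75] for i in range(0, len(ids), 75)]
print(f"{len(ids)} tokens -> {len(chunks)} chunk(s) of up to 75 tokens")
```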

Some testing has been done on this and it certainly seems that most LoRAs have the strongest effect when put at the start of the prompt (there are negative LoRAs now, so the same applies to negative prompts).

The full version of SD lets you play around with prompt order via the Scripts option at the bottom (X/Y/Z Plot, Prompt order). It will then produce an image for every possible ordering of the prompt terms you tell it to move around. With a long prompt the number of permutations explodes (see the quick sketch below), so limit it to only a handful of terms that get shuffled within the huge block of text!
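For a sense of why shuffling everything isn't practical, the image count is the factorial of the number of shuffled terms; a quick check in plain Python:

```python
# Quick sanity check: the "Prompt order" script makes one image per
# permutation of the shuffled terms, i.e. n! images for n terms.
import math

for n in (3, 5, 8):
    print(f"{n} shuffled terms -> {math.factorial(n)} images")
# 3 -> 6, 5 -> 120, 8 -> 40320
```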
 

Nano999

Member
Jun 4, 2022
172
73
Btw, for AUTOMATIC1111 should I install Python 3.10.6 as it says, or would Python 3.10.10 (Feb. 8, 2023) be OK?
The guy from the video (Sebastian) installed 3.10.7
 

Nano999

Member
Jun 4, 2022
172
73
Could you explain the logic behind Inference Steps?
When I compare results between 20, 30, 40 and so on there isn't much difference.

Like 12 steps is the same as 50.
10 steps is like no guy, but only a penis xD

25 and 30 steps could be totally different images.

What are these steps doing exactly?
 

fr34ky

Active Member
Oct 29, 2017
812
2,191
Has anyone found any lighting tricks you want to share? I've been looking at some civitai pictures and the most striking thing about some of them is the lighting. Many of the posters fake or cut parts of their prompts though, so it's very hard to get any good data from there.

No hard feelings if you don't want to share your secrets :ROFLMAO:
 
  • Like
Reactions: Jimwalrus

Jimwalrus

Well-Known Member
Sep 15, 2021
1,053
4,035
Could you explain the logic behind Inference Steps?
When I compare results between 20, 30, 40 and so on there isn't much difference.

Like 12 steps is the same as 50.
10 steps is like no guy, but only a penis xD

25 and 30 steps could be totally different images.

What are these steps doing exactly?
They're basically how many times SD runs the denoiser over the initial noise (the "static"), guided by your prompt. 10 would probably be an absolute minimum, and diminishing returns kick in quite quickly above 25. However, with a decent GPU they're fast to do, much quicker than upscaling steps.
Use that X/Y/Z plotter I mentioned earlier to try it out: set it to Steps and enter something like 1-60. It will then produce 60 images with the same seed, prompts and parameters, but each one with one more step than the last. The first few will be a blurry mess; the rest will have some differences, sometimes surprising ones.
The effects also vary a lot depending on the Sampler you use. Euler_a is very susceptible to the number of steps, with lots of variation, and DDIM can also go a bit weird depending on the step count.

Generally speaking, a higher CFG and more steps should adhere more closely to your prompt. Except when they don't and out of nowhere give you something that looks like Cthulhu swallowed Barney the Dinosaur!
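If you want to poke at this outside the webui, here's a minimal sketch with the diffusers library that renders the same seed at a few step counts (the model name, prompt and sampler defaults are just assumptions for illustration):

```python
# Minimal sketch: same prompt and seed, different numbers of inference steps.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait photo of a woman, soft window light"
for steps in (10, 25, 50):
    generator = torch.Generator("cuda").manual_seed(1234)  # fixed seed each time
    image = pipe(prompt, num_inference_steps=steps,
                 guidance_scale=7.0, generator=generator).images[0]
    image.save(f"steps_{steps}.png")
```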
 
  • Like
Reactions: Nano999 and Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
Has anyone found any lighting tricks you want to share? I've been looking at some civitai pictures and the most striking thing about some of them is the lighting. Many of the posters fake or cut parts of their prompts though, so it's very hard to get any good data from there.

No hard feelings if you don't want to share your secrets :ROFLMAO:
Try these:



Other things that affect light are the choice of upscaler, the number of hires steps, the choice of checkpoint, etc. In other words, everything can have an effect on light.
1. Choice of a good checkpoint that is consistent.
2. A prompt that sufficiently describes what you wish to achieve.
3. An addition such as a LoRA, hypernetwork or embedding to "boost" the light or anything else you want more of.
4. Editing in Photoshop for fine-tuning and getting everything you envision.

Sebastian Kamph also has a video about controlling light in SD, using ControlNet.
 
Feb 17, 2023
17
56
Finally found a quick way to fix hands in txt2img generations without messing around with inpainting. However, it does require ControlNet.

00035-480550801.png

To quickly get a depth map of a hand you can use this addon:
Use the addon to create a depth map with the hand in the pose and size you want.

controlnet settings.png

Then just throw your depth map into ControlNet, enable it, set the preprocessor to none and the model to depth, and set the weight to a very low value of 0.1 or 0.15 (so the new image doesn't change much). You also need to keep exactly the same settings as the original generation, including the seed.

00036-480550801.png

Boom, it's fixed and the image only changed a little bit! It doesn't always work, but when it does it's magical and you barely have to do any work to get a good result. The ControlNet (T) guidance controls can also be tweaked to get even more consistent results, but I couldn't get them to work properly.
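For anyone doing this outside the webui, a rough diffusers equivalent of those settings might look like the sketch below (the depth ControlNet checkpoint and base model names are assumptions; the important bits are the very low conditioning weight and reusing the original seed):

```python
# Rough sketch: re-run the original generation with a hand depth map
# attached via ControlNet at very low weight, reusing the original seed.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = Image.open("hand_depth.png")  # depth map exported from the posing addon
generator = torch.Generator("cuda").manual_seed(480550801)  # same seed as before

image = pipe(
    "your original prompt here",
    image=depth_map,                      # preprocessor "none": feed the map directly
    controlnet_conditioning_scale=0.1,    # the "weight 0.1" from the post
    num_inference_steps=25,
    generator=generator,
).images[0]
image.save("fixed_hands.png")
```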
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,776
Has anyone found any lighting tricks you want to share? I've been looking at some civitai pictures and the most striking thing about some of them is the lighting. Many of the posters fake or cut parts of their prompts though, so it's very hard to get any good data from there.

No hard feelings if you don't want to share your secrets :ROFLMAO:
Do throw in light modifiers and the results might come out a tad better than usual, e.g. fireflies, fog, rain, lights, fire, dust, sparkles, etc.

When I want unmotivated light I go for fireflies:

00008-4014905313-(realistic photo of gorgeous Playboy playmate model).__anfas, distance._pose ...png
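Just to make the idea concrete, it's nothing fancier than tacking the modifiers onto whatever prompt you already have (the base prompt and modifier list below are placeholders, not a recipe):

```python
# Sketch: append light-modifier keywords to an existing prompt.
base_prompt = "realistic photo of a woman standing in a forest clearing at dusk"
light_modifiers = ["fireflies", "volumetric fog", "rim lighting", "dust particles"]

prompt = base_prompt + ", " + ", ".join(light_modifiers)
print(prompt)  # paste into txt2img, or pass to a diffusers pipeline as usual
```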
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,053
4,035
Any bestiality-trained models available xD?
Not the sort of thing that's likely to be openly available! There's plenty of Furry stuff on Civitai which may be adaptable, but that's likely to be it.
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,053
4,035
Also, I would recommend being very wary when downloading either a .pt or a .ckpt file from an untrusted source. These are pickled Python objects, and loading them can execute arbitrary code, so they can easily be modified to contain something malicious. Probably not full ransomware or anything, but it could mean a reinstall of SD or your private info being exposed.
Anything with a .safetensors extension is fine - it's a plain data format and nothing gets executed when it loads.
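If you ever load checkpoints from your own scripts, the difference looks roughly like this (a sketch; the file names are placeholders and weights_only needs a reasonably recent PyTorch):

```python
# Sketch: .ckpt/.pt go through Python's pickle machinery, which can execute
# embedded code; .safetensors is plain tensor data and never runs anything.
import torch
from safetensors.torch import load_file

# weights_only=True tells newer PyTorch to refuse anything but tensors/containers.
ckpt_state = torch.load("model.ckpt", map_location="cpu", weights_only=True)

# safetensors load: no pickle involved at all.
safe_state = load_file("model.safetensors", device="cpu")
```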
 
  • Like
Reactions: Sepheyer

Nano999

Member
Jun 4, 2022
172
73
Also, I would recommend being very wary when downloading either a .pt or a .ckpt file from an untrusted source. These are pickled Python objects, and loading them can execute arbitrary code, so they can easily be modified to contain something malicious. Probably not full ransomware or anything, but it could mean a reinstall of SD or your private info being exposed.
Anything with a .safetensors extension is fine - it's a plain data format and nothing gets executed when it loads.
.pt files from should be fine, right?

What about this list? Is it trustworthy?


But anyway, I'm not sure a virus scan would identify anything - the files aren't even executables?
 
  • Like
Reactions: Jimwalrus

fr34ky

Active Member
Oct 29, 2017
812
2,191
Pro tip: most prompts on Civitai LoRA pages are BS; people completely cut their prompts and post only the part that references the LoRA. Don't waste your time. Just take the file and the trigger word and run.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
If you wish to try a different upscaler, you can do so easily by downloading one from this wiki page and placing it in the models folder, inside the subfolder for its architecture, such as ESRGAN (the sketch below shows the layout). The database is neatly organized into categories by the kind of material each upscaler is intended for.

Upscaler Database.png
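For reference, the folder it ends up in is roughly this (the install path and upscaler file name below are just examples):

```python
# Sketch: drop a downloaded ESRGAN-architecture upscaler into A1111's models folder.
import shutil
from pathlib import Path

webui = Path(r"C:\stable-diffusion-webui")     # example install location
dest = webui / "models" / "ESRGAN"             # one subfolder per architecture
dest.mkdir(parents=True, exist_ok=True)

shutil.copy("4x_example_upscaler.pth", dest)   # the file you downloaded
```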
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
In my post about light I forgot to mention the sampling method. Of course the sampling method can have a big effect on the end result, including the quality of the light. Try out all of them, though I have never had any good results with LMS or PLMS.
My favorites are Heun and DPM++ SDE; DDIM can also be useful sometimes.
Hires fix is another thing that gives big gains. It's not obvious to me that you gain anything with upscaling in "Extras"; I have noticed more the contrary, that you lose detail and light quality.
So how do we run Hires fix with more than 2x upscaling?
By doing what the error message says when we get a CUDA out-of-memory error because we ran out of VRAM: setting the max split size.

Paste this into webui-user.bat: "set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:64".
Fair warning!
This will increase the time for each generation significantly, but you will be able to use more than 2x in Hires fix, or at least a higher final resolution.
You can always run SD as normal and then use this little "fix" to re-run a specific image at a higher resolution.
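If you run SD from your own Python scripts rather than through the .bat, the same knob can be set as an environment variable before torch initializes; a minimal sketch with the same values:

```python
# Sketch: set the allocator config before torch's CUDA allocator initializes.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.6,max_split_size_mb:64"
)

import torch  # imported after setting the variable on purpose
print(torch.cuda.is_available())
```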

With my GTX 1070 from 2017 I managed to generate this image with 2.5x in Hires fix. Your mileage may vary depending on which GPU you have.

( prompt by Synalon ).
00012-785665333.png

And with this image, 3x Hires fix. It would seem that I'm limited to approximately 1536px.

(prompt by ).
00013-631618171.png

Or 1584px..

00014-631618171.png
 

Nano999

Member
Jun 4, 2022
172
73
How can I change the weight of a LoRA file?


I figured out to open the Lora tab and click on a LoRA file, which puts the following text in the prompt field:
<lora:samdoesartsSamYang_offsetRightFilesize:1>

but the page says to use
weight=1 for the offset version or 0.65 for the original (check "original" tab for a comparison).
PS: what is the offset version?

And some users who post their results have totally different stuff:
<lora:samdoesartsSamYang_normal:0.5
Like what is normal, and what is 0,5? When I use this prompt, cmd says there is no LoRA with such a name.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
How can I change the weight of a LoRA file?


I figured out to open the Lora tab and click on a LoRA file, which puts the following text in the prompt field:
<lora:samdoesartsSamYang_offsetRightFilesize:1>

but the page says to use
weight=1 for the offset version or 0.65 for the original (check "original" tab for a comparison).
PS: what is the offset version?

And some users who post their results have totally different stuff:
<lora:samdoesartsSamYang_normal:0.5
Like what is normal, and what is 0,5? When I use this prompt, cmd says there is no LoRA with such a name.
Ensure that you've installed the Additional Networks extension (sd-webui-additional-networks); you can do this from the Extensions tab. After installing you will need to restart AUTOMATIC1111.

Installation:
  1. Open the "Extensions" tab.
  2. Open the "Install from URL" tab.
  3. Enter the URL of the repo into "URL for extension's git repository".
    This -->
  4. Press the "Install" button.
  5. Restart the Web UI.
  6. Put the LoRA models (*.pt, *.ckpt or *.safetensors) inside the sd-webui-additional-networks/models/LoRA folder.
    Alternatively inside: Stable-Diffusion\stable-diffusion-webui\models\Lora
  7. Go to Settings > "Additional Networks" and paste the path to your LoRA folder,
    ex. G:\Downloads\Stable-Diffusion\stable-diffusion-webui\models\Lora
    then click "Apply settings".
  8. In txt2img or img2img, expand the "Additional Networks" menu near the bottom, select "Enable", then select a LoRA for "Model 1" and adjust "Weight 1"; 0.8 is a good starting point. You can combine several LoRAs, but the results may vary. Some LoRAs have specific keywords or phrases for the prompt, e.g. "shirtlift", "braless", etc.
  9. Create awesome images.
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,053
4,035
How can I change the weight of a LoRA file?


I figured out to open the Lora tab and click on a LoRA file, which puts the following text in the prompt field:
<lora:samdoesartsSamYang_offsetRightFilesize:1>

but the page says to use
weight=1 for the offset version or 0.65 for the original (check "original" tab for a comparison).
PS: what is the offset version?

And some users who post their results have totally different stuff:
<lora:samdoesartsSamYang_normal:0.5
Like what is normal, and what is 0,5? When I use this prompt, cmd says there is no LoRA with such a name.
<lora:samdoesartsSamYang_offsetRightFilesize:1> in the prompt 'activates' the LoRA named 'samdoesartsSamYang_offsetRightFilesize' (provided you've downloaded it, put it in the models\Lora folder and refreshed the LoRAs within the webui). The ":1" part sets it to strength 1.
The syntax is simply <lora:Name_of_LoRA:weight>, where the weight is the strength you want, written as a decimal if you want fine tuning, e.g. <lora:samdoesartsSamYang_normal:0.5> for half strength.

N.B. SD uses the UK/US convention of a period for the decimal point, so write 0.5, not 0,5.