[Stable Diffusion] Prompt Sharing and Learning Thread

picobyte

Active Member
Oct 20, 2017
639
689
If by softer you mean blurry, then use "blurry" or "bokeh", and add weight like (blurry:1.5) to make it stronger. Lighting makes a difference, e.g. rim lighting. You can use certain artist styles for a strong effect, and there are checkpoints and LoRAs that have this effect. Avoid certain words like "ultra detailed"; there are several more. Install the wildcards extension and look through the wildcard lists to get an impression of which tokens have this effect, and which ones you should avoid.
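For reference, the `(token:weight)` syntax multiplies the attention given to that token. A rough sketch of pulling such weights out of a prompt string (a hypothetical helper for illustration, not A1111's actual parser, which also handles nesting, bare parentheses as 1.1x, and square brackets):

```python
import re

def extract_weighted_tokens(prompt: str) -> dict[str, float]:
    """Collect (token:weight) spans from a prompt string.

    Simplified illustration of A1111-style attention syntax only.
    """
    weights = {}
    for token, weight in re.findall(r"\(([^():]+):([\d.]+)\)", prompt):
        weights[token.strip()] = float(weight)
    return weights

print(extract_weighted_tokens("a portrait, (blurry:1.5), (bokeh:1.2), rim lighting"))
# → {'blurry': 1.5, 'bokeh': 1.2}
```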
 

pazhentaigame

New Member
Jun 16, 2020
13
3
If by softer you mean blurry, then use "blurry" or "bokeh", and add weight like (blurry:1.5) to make it stronger. Lighting makes a difference, e.g. rim lighting. You can use certain artist styles for a strong effect, and there are checkpoints and LoRAs that have this effect. Avoid certain words like "ultra detailed"; there are several more. Install the wildcards extension and look through the wildcard lists to get an impression of which tokens have this effect, and which ones you should avoid.
Thanks. I asked on a page I follow; it turns out he edits them in Adobe Lightroom to get that effect.
 

picobyte

Active Member
Oct 20, 2017
639
689
Heh, that would work too
00061-1196463584.png 00059-1196463582.png 00049-1196463572.png 01586-303991053.png 01584-303991053.png

Edit: The first three images had a typo in their prompts, a double ':' before a LoRA weight. There was no clear error, and consequently none of the LoRA models actually loaded, so the images were good only due to the dreamshaperPixelart model in use (and the prompt). When corrected, the simultaneous loading of LoRAs caused images to get garbled. I had to use the block weights extension to fix this. Now I again get some usable images, in a completely different style. The last two images are the result of this; not perfect, but I was trying to generate some corruption/sex toy 'inventory' screen.
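That failure mode (a stray extra ':' making the LoRA tags silent no-ops) can be caught with a quick sanity check before rendering. A hypothetical lint sketch, not part of A1111:

```python
import re

# Well-formed tag: <lora:name:weight> with exactly two ':' separators.
VALID_LORA = re.compile(r"<lora:[^:<>]+:[\d.]+>")

def find_bad_lora_tags(prompt: str) -> list[str]:
    """Return any <lora...> tags that don't match the expected shape,
    e.g. the double-colon typo <lora::name:0.8> described above."""
    tags = re.findall(r"<lora[^>]*>", prompt)
    return [t for t in tags if not VALID_LORA.fullmatch(t)]

print(find_bad_lora_tags("pixel art <lora::pixel:0.8> and <lora:toys:0.6>"))
# → ['<lora::pixel:0.8>']
```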
 
Last edited:
  • Like
Reactions: pazhentaigame

picobyte

Active Member
Oct 20, 2017
639
689
If you want to adjust a single image, just edit it in GIMP or the like. If you want to affect all generated images, you'll have to adjust the prompt, e.g. an artist style. Personally I don't really use styles that often. BTW, the negative prompt matters too. E.g. any of: style of Paul Barson, style of Oleg Oprisco, style of Brandon Woelfel, style of John Atkinson Grimshaw, style of Johan Hendrik Weissenbruch
Or you can drop the image in the extension (of which I am the maintainer, actually) to see what tokens it produces.
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
What prompt can make the overall image softer? I tried something like less contrast or a soft theme; there's no difference.
Use the word "soft" in different descriptions such as "soft light", "soft shadow", "soft edge". Also use "bokeh" and "film grain". Try different samplers. Try "clarity" in the negative prompt; you'll probably need to adjust its weight. "Diffused light" has a softening effect. If you are using hires fix or upscaling, use UltraSharp for softer details; NMKD gives crisper edges.
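Those suggestions can be rolled into a throwaway helper that appends softening terms and builds a weighted negative. The function name, token list, and starting weights here are all made up for illustration:

```python
def soften(prompt: str, strength: float = 1.2) -> tuple[str, str]:
    """Append softening tokens to a base prompt and build a negative
    prompt that down-weights crispness. Purely illustrative; treat the
    weights as a starting point to tune per image."""
    soft_terms = ["soft light", "soft shadow", "bokeh", "film grain", "diffused light"]
    positive = prompt + ", " + ", ".join(soft_terms)
    negative = f"(clarity:{strength}), sharp focus"
    return positive, negative

pos, neg = soften("portrait of a woman")
print(pos)
print(neg)  # → (clarity:1.2), sharp focus
```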
 
Last edited:
  • Like
Reactions: Jimwalrus

Sharinel

Active Member
Dec 23, 2018
508
2,103
Heh, that would work too

Edit: The first three images had a typo in their prompts, a double ':' before a LoRA weight. There was no clear error, and consequently none of the LoRA models actually loaded, so the images were good only due to the dreamshaperPixelart model in use (and the prompt). When corrected, the simultaneous loading of LoRAs caused images to get garbled. I had to use the block weights extension to fix this. Now I again get some usable images, in a completely different style. The last two images are the result of this; not perfect, but I was trying to generate some corruption/sex toy 'inventory' screen.
Did a plot of the first image. I don't have any of the LoRAs or the VAE, so I took them all out and was left with this:

xyz_grid-0000-1196463584.jpg
 
  • Like
Reactions: Mr-Fox

picobyte

Active Member
Oct 20, 2017
639
689
You are using a different checkpoint from the one I was using. Also, the seed is on CPU; you can set that in the settings. I have an AMD GPU, and CPU is more platform independent. You shouldn't need the LoRAs, because I made a mistake and none actually loaded :D Your images are fairly OK. Reproducing my last two images requires the extension, and those did use the LoRAs. Steps beyond 20 are usually fairly stable. A parameter to play with, besides the seed, is the CFG scale. Also, if you have multiple checkpoints, you could use them as the Z axis.
Don't use too many values for each axis, or your matrix will end up too big (another setting allows creating larger images, but it's better to just run several queries in a row). The Agent Scheduler is also a nice extension to have.
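The "matrix too big" warning is easy to quantify: an X/Y/Z plot renders every combination of values, so the image count (and the grid's pixel size) grows multiplicatively. A back-of-envelope sketch (hypothetical helper; labels and margins ignored):

```python
def grid_cost(x_vals: int, y_vals: int, z_vals: int = 1,
              width: int = 512, height: int = 512) -> dict:
    """Estimate the size of an X/Y/Z plot. Each Z value produces
    its own X-by-Y grid of rendered images."""
    return {
        "images": x_vals * y_vals * z_vals,
        "grid_px": (x_vals * width, y_vals * height),  # per Z slice
        "grids": z_vals,
    }

# e.g. 6 CFG values x 4 seeds x 3 checkpoints = 72 renders
print(grid_cost(6, 4, 3))
# → {'images': 72, 'grid_px': (3072, 2048), 'grids': 3}
```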
00008-180509750.png
 
Last edited:
  • Like
Reactions: Mr-Fox

Sharinel

Active Member
Dec 23, 2018
508
2,103
You are using a different checkpoint from the one I was using. Also, the seed is on CPU; you can set that in the settings. I have an AMD GPU, and CPU is more platform independent. You shouldn't need the LoRAs, because I made a mistake and none actually loaded :D Your images are fairly OK. Reproducing my last two images requires the extension, and those did use the LoRAs. Steps beyond 20 are usually fairly stable. A parameter to play with, besides the seed, is the CFG scale. Also, if you have multiple checkpoints, you could use them as the Z axis.
Don't use too many values for each axis, or your matrix will end up too big (another setting allows creating larger images, but it's better to just run several queries in a row). The Agent Scheduler is also a nice extension to have.
You misunderstand me; I wasn't trying to replicate your picture, just showing people the difference the checkpoint can make. I'll disagree with you on steps though, especially if you use a lot of LoRAs or ADetailer. I regularly get artifacts at lower steps; I find 40+ to be where those disappear.
 
  • Like
Reactions: Mr-Fox

picobyte

Active Member
Oct 20, 2017
639
689
OK, indeed I misunderstood, and you are probably right about using more steps:
if you use a lot of LoRAs or ADetailer I regularly get artifacts at lower steps; I find 40+ to be where those disappear.
I run on CPU, though, so for me 40 steps is not really an option anyway. If LoRAs have issues at 20 steps, I use the block weights extension, other tricks, or drop them.

In case you need one, here's a guide. Maybe not the one you were looking for.

00041-180509854.png 00011-180509992.png 00012-180509987.png 00017-180509956.png 00019-180509950.png 00027-180509907.png 00030-180509893.png 00032-180509882.png 00034-180509877.png 00038-180509862.png 00042-180509853.png 00057-180509613.png 00080-180509687.png 00084-180509697.png
 
Last edited:
  • Like
Reactions: Mr-Fox

daddyCzapo

Member
Mar 26, 2019
241
1,492
AI army, I need your help. I'm trying to get along with ComfyUI, but it turns out it's hard as fuck for me for some reason.. Does anyone have a written tutorial that is detailed and explains everything, or at least most of the functionality and "what does what"? In A1111 you just punch in some words and numbers and the image pops up; with the nodes and shit "i am confusion" :HideThePain:

Edit: Alright, I have managed to deduce why my images looked like garbage; it turns out that denoise in the KSampler node works a little differently than denoising in A1111.
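On that denoise difference: in both UIs the denoise value roughly controls what fraction of the sampler's steps actually modify your latent, which is why the same number can behave differently depending on the step count. A hedged sketch of the common approximation (effective steps ≈ steps × denoise), not the exact scheduler math of either UI:

```python
def effective_steps(steps: int, denoise: float) -> int:
    """Approximate how many sampling steps actually change the image
    for a given denoise value (0.0 = untouched, 1.0 = full resample).
    Both A1111 img2img and ComfyUI's KSampler follow this idea, though
    their step scheduling differs in detail."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    return round(steps * denoise)

print(effective_steps(20, 0.55))  # → 11
```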
 
Last edited:

picobyte

Active Member
Oct 20, 2017
639
689
Have you tried ? I've run it a few times, but I find Automatic easier; maybe it's something for me later.
 
  • Like
Reactions: daddyCzapo

daddyCzapo

Member
Mar 26, 2019
241
1,492
Those are only custom nodes, workflows and templates; I need a description of what each node does. I am reverse engineering some templates right now to figure things out, but it will take a looot of time.
 

Sharinel

Active Member
Dec 23, 2018
508
2,103
AI army, I need your help. I'm trying to get along with ComfyUI, but it turns out it's hard as fuck for me for some reason.. Does anyone have a written tutorial that is detailed and explains everything, or at least most of the functionality and "what does what"? In A1111 you just punch in some words and numbers and the image pops up; with the nodes and shit "i am confusion" :HideThePain:

Edit: Alright, I have managed to deduce why my images looked like garbage; it turns out that denoise in the KSampler node works a little differently than denoising in A1111.
I quite liked this video guide, especially the first 5-6 minutes.

 

daddyCzapo

Member
Mar 26, 2019
241
1,492
You can load a PNG file like a .json workflow file.
It will load the full pipeline if it was generated in Comfy. Never tried it on an Auto1111 PNG.
Yup, that's what I've been doing to learn. It's much easier to see what is connected where when you can move nodes, mess with links, etc.
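The reason this works is that ComfyUI embeds the workflow as JSON in PNG text chunks (under keys like `workflow` and `prompt`), so dropping the file on the canvas restores the whole node graph. A minimal stdlib-only sketch that walks a PNG's chunks and pulls out `tEXt` entries (simplified: real files may also use compressed `zTXt`/`iTXt` chunks, which this skips):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_text_chunks(data: bytes) -> dict[str, str]:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == PNG_SIG, "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4-byte length + 4-byte type + data + 4-byte CRC
    return out
```

On a ComfyUI output you could then feed `chunks["workflow"]` to `json.loads` to inspect the graph.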