[Stable Diffusion] Prompt Sharing and Learning Thread

Sharinel

Active Member
Dec 23, 2018
508
2,103
Re-running the exact same thing with the "Upscale by" slider increased from 1 to 2 (I think the previous one was blurry because 1 meant the same resolution?), I got this:

View attachment 3002841


So already way better in terms of sharpness, although I am not 100% satisfied with the result.



But it's funny that it did not recreate the same picture 1:1.

Although the seed is still the one from the previous picture.

Why does SD not recreate from the seed? I don't get it :WaitWhat:
I really like this pic. Here's a slightly amended version in Juggernaut XL

00331.png


And here's how I got it

1697214906032.png

Main changes were the sampling method (I like Adaptive samplers as more steps = better imo), the CFG and the upscaler.

Shows the difference using a different checkpoint can make.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
Why would it not show me the seed of the exact picture I put in?
And where did you get the correct seed then? :unsure:
There's an "Extra" section where you can use a variation seed to find variations of the same image.
The original seed: 2549670335. The variation seed: 822547809.
Variation seed.png
Out of curiosity I used the variation seed as the image seed, meaning "Extra" unselected and no variation, just 822547809 as the main seed. Hope I'm being clear..

Without hiresfix:
00079-822547809.png
With hiresfix:
00080-822547809.png
I like the perspective but the face isn't great.
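For context on why that experiment makes sense: A1111 builds the starting noise for a variation by spherically interpolating (slerp) between the main seed's noise and the variation seed's noise, so variation strength 1 is essentially the variation seed used directly. A rough NumPy sketch of the idea, using toy 16-dimensional vectors instead of real latents:

```python
import numpy as np

def slerp(t, a, b):
    """Spherical interpolation between two flattened noise vectors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * a + t * b  # nearly parallel: fall back to lerp
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# Initial noise drawn from the two seeds discussed above
main = np.random.default_rng(2549670335).standard_normal(16)
variation = np.random.default_rng(822547809).standard_normal(16)

noise_at_0 = slerp(0.0, main, variation)  # strength 0 -> pure main-seed noise
noise_at_1 = slerp(1.0, main, variation)  # strength 1 -> pure variation-seed noise
```

So using the variation seed as the main seed should land close to a variation-strength-1 image, which matches what the posts above observed.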
 
Last edited:

Fuchsschweif

Active Member
Sep 24, 2019
961
1,515
Main changes were the sampling method (I like Adaptive samplers as more steps = better imo)
What's the difference between my default one and the one you picked? Aren't the steps just determined by the step slider? Or do you mean that it works better with higher step numbers?


There's an extra section you can use a variation seed to find variations with the same image.
The original seed :2549670335 . The variations seed: 822547809.
Neither works for me, see my post above! Really weird. https://f95zone.to/threads/stable-diffusion-prompt-sharing-and-learning-thread.146036/post-11948620


Out of curiosity I used the variation seed as the image seed, meaning "Extra" unselected and no variation, just 822547809 as the main seed. Hope I'm being clear..
No hiresfix:
You even get a closer result with the variation seed than I do with the OG seed.. (although mine made use of hiresfix)
 
Last edited:
  • Like
Reactions: Mr-Fox

Sepheyer

Well-Known Member
Dec 21, 2020
1,528
3,598
Any recommendations for Open Pose editors?

I use this one but if y'all got better ones, do let me know. Thanks!
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
View attachment 3003002

I have an Nvidia card, should I switch to NV?

I think this isn't the root issue though. SD has no problem re-creating the same picture over and over again, it's just not the one from the seed..
It recreates it for YOU, yes, but if you share images intending for others to recreate them, or you try to recreate other people's images, two of those options come with very specific conditions.
GPU will only give the same results IF both machines have the same GPU.
NV should work for anyone with an Nvidia card, but it's not very commonly used.
CPU is the most "stable" option and the easiest to share.
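The RNG-source point boils down to where the initial noise is drawn from. A CPU generator seeded with the same number produces bit-identical noise on any machine, whereas GPU sampling depends on the hardware. An illustration of the concept using NumPy as a stand-in for PyTorch's generators (not A1111's actual code):

```python
import numpy as np

SEED = 2549670335  # the seed being shared in this thread

# Two generators built from the same seed yield identical noise every time.
# This is what the "CPU" RNG source guarantees across different machines.
a = np.random.default_rng(SEED).standard_normal(4)
b = np.random.default_rng(SEED).standard_normal(4)
print(bool((a == b).all()))  # True

# With the "GPU" source the noise comes from the card's own RNG, so the
# same seed only reproduces exactly on the same GPU model.
```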
 
  • Like
Reactions: Mr-Fox

Fuchsschweif

Active Member
Sep 24, 2019
961
1,515
It recreates it for YOU, yes, but if you share images intending for others to recreate them, or you try to recreate other people's images, two of those options come with very specific conditions.
GPU will only give the same results IF both machines have the same GPU.
NV should work for anyone with an Nvidia card, but it's not very commonly used.
CPU is the most "stable" option and the easiest to share.
Okay, but apparently everyone here, while set to "GPU", is able to recreate from my seed except me :LOL:
 
  • Sad
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
It recreates it for YOU, yes, but if you share images intending for others to recreate them, or you try to recreate other people's images, two of those options come with very specific conditions.
GPU will only give the same results IF both machines have the same GPU.
NV should work for anyone with an Nvidia card, but it's not very commonly used.
CPU is the most "stable" option and the easiest to share.
Mine is set to GPU... I don't remember if I changed it or if it's the default. :unsure:
I'm not at all convinced this is the issue though.
 

Fuchsschweif

Active Member
Sep 24, 2019
961
1,515
So, nobody knows what's going on?
I pick the exact same seed as you guys, use the same settings, tick the same boxes, but get wildly new creations. Or do you see any mistakes?

1697216081529.png
 

me3

Member
Dec 31, 2016
316
708
Mine is set to GPU... I don't remember if I changed it or if it's default.:unsure:
I'm not at all convinced this is the issue.
I think the two of you just happen to have the same GPU; I seem to remember both of you having a 1070, unless I'm mixing up some details
 
  • Like
Reactions: Mr-Fox

Fuchsschweif

Active Member
Sep 24, 2019
961
1,515
CFG Scale is twice as high
HMMMMMMMMM. At 8 it worked. But when I played around with it earlier and tried extremely high and low values, it didn't.

But 16 is from the original seed. Why would I need to change the CFG scale if it's set to the value that produced the OG seed / picture in the first place?


So let's say I create a brand new picture I like with CFG set to 16. Then I take that seed to generate it again, shouldn't the CFG stay at 16 to have the same generation settings as before? I mean, I want the same pic.
 
  • Thinking Face
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
So, nobody knows what's going on?
I pick the exact same seed as you guys, use the same settings, tick the same boxes, but get wildly new creations. Or do you see any mistakes?

View attachment 3003033
I use a different upscaler; you don't have UltraSharp. Not so sure this is the issue though. If you can't find UltraSharp, you need to make sure you have these selected in settings/upscaling:
Upscalers visible.png
 

Fuchsschweif

Active Member
Sep 24, 2019
961
1,515
HMMMMMMMMM. At 8 it worked. But when I played around with it earlier and tried extremely high and low values, it didn't.
So here's my result with CFG at 8. Now that looks good!

1697216919036.png

I assume the following: When you create a picture, the CFG determines how much SD should follow your prompt and how much freedom you give it to deviate from it.

When you however try to recreate from a seed, the CFG determines now how much SD should follow your prompt vs. the seed. What was before SD's "freedom" (= how much not to follow the prompt) is now SD's precision (= how much now to follow the prompt).

Does that make sense?

But as close as this is, let's say I only want an almost exact copy of the OG, with the same angles, proportions, and buildings. I just want the details to be finished (because I initially generated pictures in bulk at low res to find one I like). Is that possible? Or do we always have to sacrifice some of the original?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
So here's my result with CFG at 8. Now that looks good!

View attachment 3003072

I assume the following: When you create a picture, the CFG determines how much SD should follow your prompt and how much freedom you give it to deviate from it.

When you however try to recreate from a seed, the CFG determines now how much SD should follow your prompt vs. the seed. What was before SD's "freedom" (= how much not to follow the prompt) is now SD's precision (= how much now to follow the prompt).

Does that make sense?

But as close as this is, let's say I only want an almost exact copy of the OG, with the same angles, proportions, and buildings. I just want the details to be finished (because I initially generated pictures in bulk at low res to find one I like). Is that possible? Or do we always have to sacrifice some of the original?
You almost always get a little variation in the details such as buildings in the background etc. There are a few options though.
You can use that image as input in controlnet lineart (realistc) or you can work in the img2img tab.
 

me3

Member
Dec 31, 2016
316
708
So here's my result with CFG at 8. Now that looks good!

View attachment 3003072

I assume the following: When you create a picture, the CFG determines how much SD should follow your prompt and how much freedom you give it to deviate from it.

When you however try to recreate from a seed, the CFG determines now how much SD should follow your prompt vs. the seed. What was before SD's "freedom" (= how much not to follow the prompt) is now SD's precision (= how much now to follow the prompt).

Does that make sense?
No, CFG is always how strictly it should follow the prompt. The seed doesn't factor into how strictly the prompt is followed.
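For reference, CFG acts on each denoising step's noise prediction, not on the seed; the seed only supplies the starting noise. The standard classifier-free guidance combination, sketched here with toy numbers in place of real noise tensors:

```python
import numpy as np

def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: extrapolate from the unconditional noise
    prediction toward the prompt-conditioned one, at every denoising step."""
    return uncond + scale * (cond - uncond)

# Toy 2-element "noise predictions" standing in for real latents
uncond = np.array([0.0, 0.0])
cond = np.array([1.0, -1.0])

weak = cfg_combine(uncond, cond, 1.0)    # scale 1: exactly the conditioned prediction
strong = cfg_combine(uncond, cond, 8.0)  # higher scale exaggerates the prompt direction
```

A very high scale pushes every step far along the prompt direction, which is why extreme CFG values tend to produce oversaturated or distorted results regardless of the seed.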
 

Fuchsschweif

Active Member
Sep 24, 2019
961
1,515
No, CFG is always how strictly it should follow the prompt. Seed doesn't factor into how strict the prompt is followed.
Yeah, but maybe in this case following the prompt more strictly equals deviating further from the seed, because it tries to add more new stuff based on the prompt instead of drawing from the seed. At least that would explain why CFG 16 doesn't work but 8 does.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
That's what I originally did, when you guys said I should instead take the seed and use txt2image because it would work better :LOL:
It was a different image and a different scenario. If you want to keep an image and only refine it, in other words upscale it, do this in the img2img tab with the "SD Upscale" script; it works the same as hires fix but in the img2img tab instead. That way you keep the image just as you want it. You can also play a bit with the denoising strength to tease out a bit more detail, but if you use too much it will make the image worse instead.
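On that denoising strength knob: it effectively controls how many of the sampler's steps are re-run on top of your input image, which is why too high a value redraws the picture instead of refining it. A simplified sketch, assuming the mapping used by diffusers' img2img pipeline (A1111 behaves similarly):

```python
def img2img_steps(num_steps: int, strength: float) -> int:
    """Number of sampler steps actually run in img2img.
    Strength decides how far back into the noise schedule
    the input image is pushed before denoising resumes."""
    return min(int(num_steps * strength), num_steps)

# Low strength barely touches the image; high strength redraws most of it.
for s in (0.2, 0.5, 0.75):
    print(s, img2img_steps(30, s))
```

At strength 0.2 only a handful of the 30 steps run, so the composition survives; at 0.75 most of the schedule is re-done and details start to drift.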
 

Fuchsschweif

Active Member
Sep 24, 2019
961
1,515
It was a different image and a different scenario. If you want to keep an image and only refine it, in other words upscale it, do this in the img2img tab with the "SD Upscale" script; it works the same as hires fix but in the img2img tab instead. That way you keep the image just as you want it. You can also play a bit with the denoising strength to tease out a bit more detail, but if you use too much it will make the image worse instead.
It was the same scenario IIRC, I just wanted to upscale a low-res picture that I liked, staying as close to the original as possible. So in that case img2img is the way to go..