
[Stable Diffusion] Prompt Sharing and Learning Thread

Microtom

Well-Known Member
Sep 5, 2017
1,051
3,595
Anyone else getting annoyed with Civitai now? This mess is throughout the pages: on the side, at the bottom, at the top, a pop-up, mixed in with the model results, etc. etc. Gotta keep blocking with uBlock. Annoying.

What they are doing is extremely expensive. The computing power to train. The storage of checkpoints. The bandwidth.

It's an incredibly useful site and it's all really well done. I gave 5 bucks yesterday, not because I needed to purchase anything, but just as a contribution.
 

modine2021

Member
May 20, 2021
329
1,048
What they are doing is extremely expensive. The computing power to train. The storage of checkpoints. The bandwidth.

It's an incredibly useful site and it's all really well done. I gave 5 bucks yesterday, not because I needed to purchase anything, but just as a contribution.
I understand all that. It's still annoying. Oh well, I got it fixed :cool:
 
  • Like
Reactions: Mr-Fox

felldude

Member
Aug 26, 2017
467
1,430
Anyone else getting annoyed with Civitai now? This mess is throughout the pages: on the side, at the bottom, at the top, a pop-up, mixed in with the model results, etc. etc. Gotta keep blocking with uBlock. Annoying.

Tensorart has contacted me a few times. I guess it's good to have an alternative if Civitai changes too much.
(I assume they will get sued if they start to monetize too much.)

They (Tensorart) didn't impress me with the "we train on a 4090" line; I would have figured any of these sites should be running an NVIDIA RTX 6000 Ada workstation. (Assuming the person wasn't lying about being an operator for Tensorart.)

I'm guessing the sites are just paying for computing power from Google Colab or something.
 
Last edited:

Delambo

Newbie
Jan 10, 2018
99
84
Anyone else getting annoyed with Civitai now? This mess is throughout the pages: on the side, at the bottom, at the top, a pop-up, mixed in with the model results, etc. etc. Gotta keep blocking with uBlock. Annoying.

Nope. I mean, look around this site. I appreciate ad-supported content; subscribing to every website I use would be a PITA.
 
  • Like
Reactions: modine2021

felldude

Member
Aug 26, 2017
467
1,430
For those who are training with Kohya: hopefully they update it to use the Cosine Annealing scheduler.

It decreases the learning rate as the epochs increase, preventing the famous spike in loss at certain points.

This was posted in the discussions as a workaround:


--lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=1" "eta_min=1e-4" "verbose=True"
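For reference, the schedule those arguments select follows the standard cosine-annealing formula from the PyTorch docs. A minimal pure-Python sketch of the math (not Kohya's actual code; parameter names mirror the arguments above):

```python
import math

def cosine_annealing_lr(step, t_max, eta_max, eta_min=0.0):
    """Cosine-annealed learning rate, per PyTorch's CosineAnnealingLR docs:
    eta_t = eta_min + (eta_max - eta_min) * (1 + cos(pi * step / t_max)) / 2
    """
    return eta_min + (eta_max - eta_min) * (1 + math.cos(math.pi * step / t_max)) / 2

# The LR decays smoothly from eta_max down to eta_min over t_max steps,
# instead of dropping abruptly:
for step in range(11):
    lr = cosine_annealing_lr(step, t_max=10, eta_max=1e-3, eta_min=1e-4)
    print(f"step {step:2d}: lr = {lr:.6f}")
```

With `T_max=1` as in the workaround above, the whole run is one annealing cycle ending at `eta_min`.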
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,791
A1111 Guide: Using the xyz plot script for comparison tests.

Get in the habit of using the xyz plot script to run comparison tests and see the results from different checkpoint models, samplers, CFG scale, sample steps etc., in order to find what you're after faster.

Work on the prompt first with a ckpt model you often use.
Use general standard settings.
When you have a prompt that is near finished (only minor tweaks left), run xyz plot script for your tests with a good static seed.
You find xyz plot in the script drop down menu.

Plot Scripts.png
The xyz plot will run your prompt against, for example, all the ckpt models you choose and then create an image grid.
You can run a test with more than one variable of course, but I recommend doing only one at a time.

In this order:
1. Checkpoint Model.
2. Sampler.
3. CFG Scale.
4. Sample Steps.

xyz_grid-0001-3533565512.png

I choose Photon_v1 and proceed with the Sampler test.
xyz_grid-0002-3533565512.png

I often come back to good old Euler a.
Next up is CFG Scale.
You can either type out each value like this: 4,5,6, etc., but who has the time or patience..
Instead, set a range and the number of images in square brackets like this: "4-12 [5]" (results in increments of 2: 4, 6, 8, 10, 12).
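The bracket syntax just spaces the requested number of values evenly across the range, endpoints included. A quick sketch of the arithmetic (my own helper for illustration, not A1111's internal code):

```python
def xyz_range(start, stop, count):
    """Evenly space `count` values from start to stop, inclusive,
    mirroring the xyz plot "start-stop [count]" syntax."""
    if count == 1:
        return [start]
    step = (stop - start) / (count - 1)  # increment between adjacent values
    return [start + i * step for i in range(count)]

print(xyz_range(4, 12, 5))   # "4-12 [5]" -> [4.0, 6.0, 8.0, 10.0, 12.0]
print(xyz_range(8, 10, 5))   # "8-10 [5]" -> [8.0, 8.5, 9.0, 9.5, 10.0]
```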
xyz_grid-0003-3533565512.png

You can now run a finer test with a smaller range, "8-10 [5]" (results in increments of 0.5).
xyz_grid-0004-3533565512.png

Let's go with CFG Scale 9.5.
Next test is the sample steps. How many is it gonna be? "20-60 [5]" (will result in increments of 10).
xyz_grid-0005-3533565512.png

Ok let's test a smaller range now, "25-40 [4]" (will result in increments of 5).
xyz_grid-0006-3533565512.png

Decisions, decisions..
We can of course continue with even finer tests and smaller ranges. Yeah, let's make it easier for ourselves and avoid guessing.
"26-34 [5]" (will result in increments of 2).
xyz_grid-0007-3533565512.png

Ok, 28 it is.
Now we can make some last fine adjustments to the prompt before upscaling.
Let's get rid of that giraffe neck and put some color on those cheeks while we're at it.
Updates for the prompt: (blush). Neg: (long neck).
I'm inpainting her eyes in img2img with After Detailer and then using the SD Upscale script to upscale.
I pick 4x_NMKD-Siax_175k because it will give back a little warmth to the image compared to 4x_NMKD-Siax_200k.
I could use the xyz plot script again for the upscaler, but this post is long enough already.:geek:
Also, let's give her a fitting name. How's Ruby?
00014-3533565512.png 00005-3533565512.png

*Edit
Updated upscaling process and hires images.
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,791
It turns out I had an error somewhere along the way while making the guide in my last post. I have no idea when it happened. I will try to go back through the steps and see if I get a different result now after the reboot.
Here's hoping that I don't have to remake this whole thing..:FacePalm:
1706468261175.png
 

theMickey_

Engaged Member
Mar 19, 2020
2,077
2,610
It turns out I had an error somewhere along the way while making the guide in my last post.
To be honest -- even if you had made a mistake along the way, it's still a great guide and I think everyone gets the steps you're doing to get the optimal result! Great guide, bookmarked! :-D

And btw: if you know a decent "Custom Node" that would let me do something like that in ComfyUI (like selecting multiple checkpoints, running the same prompt and getting a comparison picture as a result; same for different samplers and CFG values etc.), that would be awesome. It's something I've always wanted to do more easily. Right now I'm doing this kind of manually, creating those images by pasting single images into a larger picture one by one -- which is OK to get an overview, but not great if you want to do this for a new prompt.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,791
To be honest -- even if you had made a mistake along the way, it's still a great guide and I think everyone gets the steps you're doing to get the optimal result! Great guide, bookmarked! :-D

And btw: if you know a decent "Custom Node" that would let me do something like that in ComfyUI (like selecting multiple checkpoints, running the same prompt and getting a comparison picture as a result; same for different samplers and CFG values etc.), that would be awesome. It's something I've always wanted to do more easily. Right now I'm doing this kind of manually, creating those images by pasting single images into a larger picture one by one -- which is OK to get an overview, but not great if you want to do this for a new prompt.
No, unfortunately I'm a stubborn, boneheaded type who has so far refused to give in to the spaghetti lord almighty.
Sepheyer, however, is well deep into that particular "sect" and can probably help you out; me3 might be the grand master's holy prophet himself and can probably provide some guidance on the path to the pasta kingdom too.. :D
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,519
3,581
To be honest -- even if you had made a mistake along the way, it's still a great guide and I think everyone gets the steps you're doing to get the optimal result! Great guide, bookmarked! :-D

And btw: if you know a decent "Custom Node" that would let me do something like that in ComfyUI (like selecting multiple checkpoints, running the same prompt and getting a comparison picture as a result; same for different samplers and CFG values etc.), that would be awesome. It's something I've always wanted to do more easily. Right now I'm doing this kind of manually, creating those images by pasting single images into a larger picture one by one -- which is OK to get an overview, but not great if you want to do this for a new prompt.
I trust you know there are a few grid nodes, but indeed they are all pretty crappy for a variety of reasons. If you don't know that there are grid nodes at all, let us know and we'll hook you up.
 

felldude

Member
Aug 26, 2017
467
1,430
1.jpg

Testing the arguments above. I'm not sure I'll get the .01 loss I'm looking for, but we'll see.


No, unfortunately I'm a stubborn, boneheaded type who has so far refused to give in to the spaghetti lord almighty.
Sepheyer, however, is well deep into that particular "sect" and can probably help you out; me3 might be the grand master's holy prophet himself and can probably provide some guidance on the path to the pasta kingdom too.. :D
Weren't you doing training with the command line and writing .py and yaml? You should have given in to the spaghetti lord a long time ago :)
 
Last edited:
  • Haha
Reactions: DD3DD and Mr-Fox

devilkkw

Member
Mar 17, 2021
282
961
Some considerations after 2 weeks of experimenting with CUI, for those who want to try it after using A1111.

First of all, understanding how CUI works is the point. The approach is different from A1111 and some people get "scared" by the node connections, but here and online you can find good workflows to start from.

Another important thing: don't assume that putting a prompt into CUI with the same embeddings and LoRAs gives the same results as A1111. NO!

From what I've seen, CUI is really sensitive (for worse, or better -- your choice) to position within the prompt, both positive and negative.
Moving an embedding, a LoRA, or even a single word is a game changer, so if your prompt works in A1111, that doesn't mean it works the same way in CUI.

Also, balancing embeddings and LoRAs is a pain in the ass; you really need to experiment and find the correct position and weight for every embedding/LoRA you push in.

I know there are many users here who have been using CUI for a long time and surely have better knowledge of it; these are just some considerations I found while using CUI.

A little trick for consistent prompting:
Enclose your whole prompt and weight it slightly; it gives better results in most cases!
For example, this prompt:

beautiful blonde (afro:1.2) woman bathing in a river, cinematic shot, (embedding:horror), other embedding

transformed becomes:
((beautiful blonde (afro:1.2) woman bathing in a river, cinematic shot, (embedding:horror)):1.01), other embedding

Note the double "((" at the start and the enclosing "):1.01)" at the end. Also, embedding:horror is a style embedding I want applied to the image, so I enclose it, while other embeddings for composition (like a detailer) stay outside of it.

With this trick I get consistent results without washing out the full composition when loading embeddings and LoRAs.

Maybe you already know it, but it's good to keep in mind.
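The wrapping is mechanical enough to script. A tiny illustrative helper (my own, hypothetical; just string formatting, not part of ComfyUI or A1111):

```python
def emphasize(style_part, rest="", weight=1.01):
    """Wrap the style portion of a prompt in ((...):weight), as in the
    trick above, leaving composition embeddings (e.g. a detailer) outside."""
    wrapped = f"(({style_part}):{weight})"
    return f"{wrapped}, {rest}" if rest else wrapped

print(emphasize(
    "beautiful blonde (afro:1.2) woman bathing in a river, cinematic shot, (embedding:horror)",
    "other embedding",
))
```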

And now a question: is there a way to copy group nodes from one workflow to another? I didn't find one.
 

felldude

Member
Aug 26, 2017
467
1,430
I ended up at .05 instead of .067 with the same number of epochs (Cosine vs. Cosine Annealing).

Still too many anomalies to enter it, in my opinion.

ComfyUI_01481_.png
 
  • Like
Reactions: devilkkw and Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,791
View attachment 3304621

Testing the arguments above. I'm not sure I'll get the .01 loss I'm looking for, but we'll see.




Weren't you doing training with command line and writing .py and yaml, you should have given in to spaghetti lord a long time ago :)
I had no idea I could write .py or yaml. I wish someone had told me..:LOL:
It would have been very useful many times probably..:geek:
 

felldude

Member
Aug 26, 2017
467
1,430
I had no idea I could write .py or yaml. I wish someone had told me..:LOL:
It would have been very useful many times probably..:geek:
If you read and then change a variable in the line I posted above about using CosineAnnealingLR, then you will have written .py:

--lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=1" "eta_min=1e-4" "verbose=True"

And if you are a cuck then you have probably eaten pie... also I have been drinking... a lot.

Before I blackou...sleep: if you're using Kohya you can paste the line here:

3.jpg


eta_min can be set as low as zero, per the PyTorch docs.
 
Last edited:
  • Thinking Face
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,791
If you read and then change a variable in the line I posted above about using CosineAnnealingLR, then you will have written .py:

--lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=1" "eta_min=1e-4" "verbose=True"

And if you are a cuck then you have probably eaten pie... also I have been drinking... a lot.

Before I blackou...sleep: if you're using Kohya you can paste the line here:

View attachment 3304862


eta_min can be set as low as zero, per the PyTorch docs.
Yeah I'll let this one slide since you're drunk, but don't ever call me "cuck" or anything similar again..
 
  • Haha
Reactions: felldude and DD3DD

me3

Member
Dec 31, 2016
316
708
Some considerations after 2 weeks of experimenting with CUI, for those who want to try it after using A1111.

First of all, understanding how CUI works is the point. The approach is different from A1111 and some people get "scared" by the node connections, but here and online you can find good workflows to start from.

Another important thing: don't assume that putting a prompt into CUI with the same embeddings and LoRAs gives the same results as A1111. NO!

From what I've seen, CUI is really sensitive (for worse, or better -- your choice) to position within the prompt, both positive and negative.
Moving an embedding, a LoRA, or even a single word is a game changer, so if your prompt works in A1111, that doesn't mean it works the same way in CUI.

Also, balancing embeddings and LoRAs is a pain in the ass; you really need to experiment and find the correct position and weight for every embedding/LoRA you push in.

I know there are many users here who have been using CUI for a long time and surely have better knowledge of it; these are just some considerations I found while using CUI.

A little trick for consistent prompting:
Enclose your whole prompt and weight it slightly; it gives better results in most cases!
For example, this prompt:

beautiful blonde (afro:1.2) woman bathing in a river, cinematic shot, (embedding:horror), other embedding

transformed becomes:
((beautiful blonde (afro:1.2) woman bathing in a river, cinematic shot, (embedding:horror)):1.01), other embedding

Note the double "((" at the start and the enclosing "):1.01)" at the end. Also, embedding:horror is a style embedding I want applied to the image, so I enclose it, while other embeddings for composition (like a detailer) stay outside of it.

With this trick I get consistent results without washing out the full composition when loading embeddings and LoRAs.

Maybe you already know it, but it's good to keep in mind.

And now a question: is there a way to copy group nodes from one workflow to another? I didn't find one.
There are different prompt interpreters in ComfyUI, one of which copies the one used in A1111.

There are also nodes that set up settings etc. to be the same as, or close to, A1111.

Another thing is that by default A1111 uses the GPU for things like seed/randomness, while Comfy's default is the CPU. You can change this in both UIs, and you can see there can be a very large difference between the two options. Comfy can do this on a per-sampler-node basis if you use the correct one.

Regarding prompting: I haven't used A1111 for XL and haven't checked the code, but I believe it still dumps the same prompt into both text encoders. There are people who argue this is the absolute way of doing it, but if you try feeding different prompting styles, or just parts of the prompt, to each encoder, you quickly see that you can use this to your advantage.
I really don't think people should lock themselves too much into one style or way of writing prompts; that will quickly turn into the mess people just keep dumping into their negatives.

Creativity is "freedom".