Anyone else getting annoyed with Civitai now? Ads are all over the pages: on the side, at the bottom, at the top, in pop-ups, mixed in with the model results, etc. I have to keep blocking them with uBlock. Annoying.
What they are doing is extremely expensive. The computing power to train. The storage of checkpoints. The bandwidth.
It's an incredibly useful site and it's all really well done. I gave 5 bucks yesterday, not because I needed to purchase anything, but just as a contribution.
Tensorart has contacted me a few times; I guess it is good to have an alternative if Civitai changes too much.
(I assume they will get sued if they start to monetize too much.)
They (Tensorart) didn't impress me with the "we train on a 4090" bit. I would have figured any of these sites should be running an NVIDIA RTX 6000 Ada workstation. (Assuming the person wasn't lying about being an operator for Tensorart.)
I'm guessing the sites are just paying for computing power from Google Colab or something.
Great detail! I did rank it down from 128 to 12.
One of the LoRAs I just worked on is good for telling it where to paint that detail. I used a 60/40 blend with your ranked-down LoRA to make this image.
A1111 Guide. Using xyz plot script for comparison tests.
Get in the habit of using the xyz plot script to run comparison tests and see the results from different checkpoint models, samplers, CFG scales, sample steps, etc. It will help you find what you are after faster.
Work on the prompt first with a ckpt model you often use.
Use general standard settings.
When you have a prompt that is near finished (only minor tweaks left), run xyz plot script for your tests with a good static seed.
xyz plot will run your prompt against all the ckpt models you choose for example and then create an image grid.
You can run a test with more than one variable of course, but I recommend only doing one at a time.
In this order:
1. Checkpoint Model.
2. Sampler.
3. CFG Scale.
4. Sample Steps.
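Conceptually, the script fixes the seed and prompt, varies one axis at a time, and collects the results into a grid. A minimal sketch of that loop (my own pseudocode-style helper, not A1111's actual implementation; `generate()` here is a stand-in):

```python
# Sketch of what the xyz plot script does conceptually: hold every setting
# fixed except one axis, run the generator once per axis value, and keep
# the (value, image) pairs for the grid.
def run_grid(generate, fixed, axis_name, axis_values):
    """generate(**settings) -> image; returns [(value, image), ...]."""
    return [(v, generate(**{**fixed, axis_name: v})) for v in axis_values]

# Usage with a stand-in generate() that just records the settings it got:
fixed = {"prompt": "portrait photo", "seed": 1234, "cfg": 7, "steps": 30}
cells = run_grid(lambda **s: s, fixed, "cfg", [4, 6, 8, 10, 12])
print([v for v, _ in cells])  # [4, 6, 8, 10, 12]
```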
I choose Photon_v1 and proceed with the Sampler test.
I often come back to good old Euler a.
Next up is CFG Scale.
You could type out each value like this: 4,5,6, etc., but who has the time or patience..
Instead set a range and the number of images in square brackets like this "4-12 [5]" (results in increments of 2: 4,6,8,10,12).
You can now run a more fine test with a smaller range, "8-10 [5]" (results in increments of 0.5).
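For the curious, the "start-end [count]" syntax expands to evenly spaced values that include both endpoints. A minimal sketch of that expansion (my own helper for positive ranges, not A1111's code):

```python
# Expand an xyz plot range spec like "4-12 [5]" into its concrete values
# (assumption: evenly spaced, both endpoints included, non-negative bounds).
def expand_range(spec: str) -> list[float]:
    """Expand e.g. "4-12 [5]" into [4.0, 6.0, 8.0, 10.0, 12.0]."""
    bounds, count = spec.split("[")
    start, end = (float(x) for x in bounds.strip().split("-"))
    n = int(count.strip(" ]"))
    step = (end - start) / (n - 1)
    return [start + i * step for i in range(n)]

print(expand_range("4-12 [5]"))  # [4.0, 6.0, 8.0, 10.0, 12.0]
print(expand_range("8-10 [5]"))  # [8.0, 8.5, 9.0, 9.5, 10.0]
```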
Let's go with CFG Scale 9.5.
Next test is the sample steps. How many is it gonna be? "20-60 [5]" (will result in increments of 10).
Ok let's test a smaller range now, "25-40 [4]" (will result in increments of 5).
Decisions decisions..
We can of course continue with even finer tests and a smaller range. Yeah, let's make it easier for ourselves and avoid guessing.
"26-34 [5]" (will result in increments of 2).
Ok 28 it is.
Now we can make some last fine adjustments to the prompt before upscaling.
Let's get rid of that giraffe neck and put some color on those cheeks while we're at it.
Updates for the prompt: (blush) Neg: (long neck).
I'm inpainting her eyes in img2img with ADetailer and then using the SD Upscale script to upscale.
I pick 4x_NMKD-Siax_175k because it will give back a little warmth to the image compared to 4x_NMKD-Siax_200k .
I could use xyz plot script again for the upscaler but this post is long enough already.
Also, let's give her a fitting name. How about Ruby?
It turns out I had an error somewhere along the way while making the guide in my last post. I have no idea when it happened. I will try to go back through the steps and see if I get a different result now after the reboot.
Here's hoping that I don't have to remake this whole thing..
To be honest -- even if you had made a mistake along the way, it's still a great guide and I think everyone gets the steps you're doing to get the optimal result! Great guide, bookmarked! :-D
And btw: if you know a decent custom node that would let me do something like that in ComfyUI (selecting multiple checkpoints, running the same prompt, getting a comparison picture as a result; same for different samplers, CFG values, etc.), that would be awesome. It's something I've always wanted to do more easily. Right now I'm doing this kind of thing manually, creating those images by pasting single images into a larger picture one by one -- which is OK to get an overview, but not great if you want to do this for a new prompt.
No, unfortunately I'm a stubborn boneheaded type that has so far refused to give in to the spaghetti lord almighty. Sepheyer is however deep into that particular "sect" and can probably help you out; me3 might be the grand master's holy prophet himself and can probably provide some guidance on the path to the pasta kingdom too..
I trust you know there are a few grid nodes, but indeed they are all pretty crappy for a variety of reasons. If you don't know that there are grid nodes at all, let us know and we'll hook you up.
Some considerations after two weeks of experimenting with ComfyUI (CUI), for anyone who wants to try it after using A1111.
First of all, understanding how CUI works is the point. The approach is different from A1111 and some people get "scared" by the node connections, but here and online you can find good workflows to start from.
Another important thing: don't assume that putting a prompt into CUI with the same embeddings and LoRAs gives the same results as A1111. It doesn't!
From what I've seen, CUI is really sensitive (for worse, or better, your choice) to positional arguments, in both the positive and the negative prompt.
Moving an embedding, a LoRA, or even a single word is a game changer, so if your prompt works in A1111, that doesn't mean it works the same way in CUI.
Also, balancing embeddings and LoRAs is a pain in the ass; you really need to experiment to find the correct position and weight for every embedding/LoRA you push in.
I know many users here have been using CUI for a long time and surely have better knowledge of it; these are just some observations from my own use.
A little trick for consistent prompting:
Enclose your whole prompt and give it a slight weight; it gives better results in most cases!
For example, this prompt:
beautiful blonde (afro:1.2) woman bathing in a river, cinematic shot, (embedding:horror), other embedding
transformed, it becomes: ((beautiful blonde (afro:1.2) woman bathing in a river, cinematic shot, (embedding:horror)):1.01), other embedding
Note the double "((" at the start and the closing "):1.01)" at the end. Also, embedding:horror is a style embedding I want applied to the image, so I enclose it, while the other embeddings for composition (like a detailer) stay outside.
With this trick I get consistent results without the embeddings and LoRAs washing out the whole composition.
Maybe you already know this, but it's good to keep in mind.
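The transformation above can be sketched as a tiny string helper (the function name and signature are mine, not a ComfyUI API):

```python
# Hypothetical helper illustrating the wrapping trick: enclose the style
# part of the prompt with an extra pair of parentheses and a slight weight,
# leaving composition embeddings outside the wrapper.
def wrap_prompt(style_part: str, rest: str, weight: float = 1.01) -> str:
    return f"(({style_part}):{weight}), {rest}"

p = wrap_prompt(
    "beautiful blonde (afro:1.2) woman bathing in a river, "
    "cinematic shot, (embedding:horror)",
    "other embedding",
)
print(p)
```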
And now a question: is there a way to copy group nodes from one workflow to another? I couldn't find one.
There are different prompt interpreters in ComfyUI, one of which replicates the one used in A1111.
There are also nodes that set up settings etc. to be the same as, or close to, A1111.
Another thing: by default A1111 uses the GPU for things like seed/randomness, while Comfy's default is the CPU. You can change this in both UIs, and you'll see there can be a very large difference between the two options. Comfy can do this on a per-sampler-node basis if you use the correct one.
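As an illustration of why the same seed can still give different images (this is not ComfyUI code): two different RNG implementations seeded identically produce different noise sequences, just as GPU and CPU noise generators do. The LCG below is my own toy example.

```python
# Two generators, same seed, different algorithms -> different sequences.
# This is the same reason GPU-noise (A1111 default) and CPU-noise
# (ComfyUI default) diverge even with identical seeds.
import random

rng_a = random.Random(42)                 # Mersenne Twister
seq_a = [rng_a.random() for _ in range(3)]

# A deliberately different generator (a simple LCG) seeded the same way.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    vals, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        vals.append(x / m)
    return vals

seq_b = lcg(42, 3)
print(seq_a != seq_b)  # True: same seed, different generator, different noise
```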
Regarding prompting, I haven't used A1111 for XL and haven't checked the code, but I believe it still dumps the same prompt into both text encoders. There are people who argue this is the absolute way of doing it, but if you try feeding different prompting styles, or just parts of the prompt, to each encoder, you quickly see that you can use this to your advantage.
I really don't think people should lock themselves too much into one style or way of writing prompts; that will quickly turn into the same mess people keep dumping into their negatives.