[Stable Diffusion] Prompt Sharing and Learning Thread

Sepheyer

Well-Known Member
Dec 21, 2020
1,527
3,597
Quick question: 2-3 days ago, when I started with ComfyUI, I also played a bit with "sd_xl_turbo", the model that can render AI images almost in "real time" (in case you haven't heard of it yet, check ). It was generating new pictures literally while I was typing. Today I wanted to do the same, and using the exact same workflow, it now takes ~3-4 seconds to render a single image. Anyone noticed this as well? And what could have changed its behavior?

Again, I'm using the same workflow and settings shown in the video. But now the "CLIP Text Encode (Prompt)" node stays active for about 1 second after changing the text, and the "SamplerCustom" then takes another 2-3 seconds to actually sample the image (I've replaced the "SamplerCustom" with a simple KSampler, still the same). I'm so confused...

I did install a couple of custom nodes (IPAdapter, ControlNet etc., just some "basic" stuff); could that have an impact on how things work? Those nodes are not part of my workflow, though. There was also an update for my NVIDIA driver, so I'm wondering if that could have any impact.

I'd love to figure that out, because I assume that if something as simple as the sd_xl_turbo workflow got like 10-20 times slower, something (anything?) else might be affected as well.

Any advice would be highly appreciated!


P.S.: I might install a "naked" ComfyUI with just the ComfyUI-Manager in a second folder and try that Turbo workflow without any additional custom nodes to see if that still works.

// EDIT: a fresh installation (even without the Manager) did fix the issue. FML, now I'm going to need to find out which of the "custom nodes" broke my other installation :cautious:
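// EDIT 2: for anyone facing the same hunt, testing nodes one by one isn't necessary; a binary search over the custom_nodes folder finds the culprit in about log2(N) restarts. A rough Python sketch of the idea (the `is_broken` check is a stand-in for "start ComfyUI with only these node folders enabled and time the Turbo workflow", the "NodeX"/"NodeY" names are made up, and it assumes a single culprit):

```python
def find_culprit(nodes, is_broken):
    """Binary-search a list of custom node folders for the single
    node that breaks the install. is_broken(subset) answers:
    'does ComfyUI misbehave with only these node folders enabled?'"""
    lo, hi = 0, len(nodes)
    if not is_broken(nodes[:hi]):
        return None  # the full set works; nothing to hunt for
    # invariant: nodes[:lo] is fine, nodes[:hi] is broken
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_broken(nodes[:mid]):
            hi = mid  # culprit is among the first mid nodes
        else:
            lo = mid  # culprit is in the second half
    return nodes[lo]

print(find_culprit(["IPAdapter", "ControlNet", "NodeX", "NodeY"],
                   lambda subset: "NodeX" in subset))  # NodeX
```

Each `is_broken` call means one restart with a subset of folders moved out of custom_nodes, so 16 nodes take about 4 restarts instead of 16.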
You probably know, the one thing that truly messes up comfy is the wrong tensor model. I did that once and spent three months in limbo.
 
  • Like
Reactions: DD3DD

theMickey_

Engaged Member
Mar 19, 2020
2,115
2,653
You probably know, the one thing that truly messes up comfy is the wrong tensor model.
I assume so, and I double-checked whether or not I had the right model/checkpoint selected ("sd_xl_turbo_1.0.safetensors" for the Turbo thingy), which I downloaded through the manager.

Other models (which are not loaded in the workflow) shouldn't interfere, right?
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,527
3,597
I assume so, and I double-checked whether or not I had the right model/checkpoint selected ("sd_xl_turbo_1.0.safetensors" for the Turbo thingy), which I downloaded through the manager.

Other models (which are not loaded in the workflow) shouldn't interfere, right?
My bad, I meant the torch installation. FML, can't even talk straight. Sorry, disregard what I wrote above. I think what happened with the custom nodes is that one of the essentials, such as torch or some such, got updated to a later version. You are lucky you rolled back alright. A messed-up comfy takes a fuckton of time to cure :(
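One habit that helps: snapshot `pip freeze` before installing a custom node and diff it afterwards, so you see immediately if an installer bumped torch. A minimal sketch; the package versions below are made up for illustration:

```python
def freeze_to_dict(freeze_text):
    """Parse 'pkg==version' lines from `pip freeze` output."""
    pkgs = {}
    for line in freeze_text.splitlines():
        if "==" in line:
            name, version = line.split("==", 1)
            pkgs[name.strip().lower()] = version.strip()
    return pkgs

def diff_snapshots(before, after):
    """Report packages that are new or whose version changed."""
    a, b = freeze_to_dict(before), freeze_to_dict(after)
    return {name: (a.get(name), b[name])
            for name in b if a.get(name) != b[name]}

before = "torch==2.1.0\nnumpy==1.26.0"
after = "torch==2.2.0\nnumpy==1.26.0\nxformers==0.0.23"
print(diff_snapshots(before, after))
# {'torch': ('2.1.0', '2.2.0'), 'xformers': (None, '0.0.23')}
```

If torch shows up in the diff after a node install, that's the moment to roll the venv back, before anything else gets rebuilt on top of it.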
 

theMickey_

Engaged Member
Mar 19, 2020
2,115
2,653
I meant the torch installation.
Thanks for clarifying that :)

What I will do for now: save all my models, checkpoints, LoRAs, workflows and whatnot outside of the ComfyUI installation folder and work with links or something like that (it seems you can also use some config files to specify paths to those things, but I doubt it'll work for everything). Same for the "input" and "output" folders. That way I will have two main folders:
  • 1st folder: ComfyUI with just "nodes" and stuff
  • 2nd folder: all the models, checkpoints etc. as well as input and output folders
Then I should be able to backup my (still working) ComfyUI base folder every once in a while before I do major updates or install new nodes.
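For reference, the config file I mentioned is ComfyUI's extra_model_paths.yaml (there's an extra_model_paths.yaml.example in the ComfyUI root to copy from). A rough sketch of the idea; the section name and paths below are illustrative, so check the key names against the example file shipped with your install:

```yaml
# extra_model_paths.yaml -- sketch only; verify the keys against
# extra_model_paths.yaml.example in your ComfyUI folder
my_models:
    base_path: D:/sd-assets/
    checkpoints: checkpoints/
    loras: loras/
    vae: vae/
    controlnet: controlnet/
```

As far as I can tell this covers the model folders but not "input"/"output"; for those, links (e.g. mklink /D on Windows) are probably still needed.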
 
  • Like
Reactions: Sepheyer and DD3DD

Vanaduke

Active Member
Oct 27, 2017
694
3,038
A warning -- running ComfyUI or its alternatives on a CPU will be slow and will probably limit you to 512x512 renders. I haven't tested this myself, but speaking as someone with a ~1650 NVIDIA card with 6GB, even such a potato card takes prohibitively long.

Now, you absolutely can run ComfyUI on a CPU. The installation and the models will require about 10GB.

Here is the link to get you started:

Note the line: "Works even if you don't have a GPU with: --cpu (slow)"
The guy who made was able to do it on his smartphone, so I was hoping that's possible as well? He informed me he only had to submit 20 or more photos of the model from all angles, and then he was able to generate said AI art. Problem is, the guy has been inactive since the forum moderators deleted his AI art posts. I've been trying to reach him ever since, to no avail.
 
  • Like
Reactions: Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,793
The guy who made was able to do it on his smartphone, so I was hoping that's possible as well? He informed me he only had to submit 20 or more photos of the model from all angles, and then he was able to generate said AI art. Problem is, the guy has been inactive since the forum moderators deleted his AI art posts. I've been trying to reach him ever since, to no avail.
He was talking about training a LoRA. It's obviously not possible to do on a smartphone, since it requires a GPU with at least 6GB of VRAM. If you don't have a PC or laptop with enough performance, the next alternative is to use one of the cloud-based services that exist. You pay per hour, and in some places you can rent a very powerful GPU. I don't know much about running SD on a cloud service, so I can't give you more than general info and links.

one of the many services that exist:
There are other alternatives as well.

A tutorial video about cloud based Stable Diffusion:
There are many more guides and tutorials.

A short list of Stable Diffusion YouTubers who create guides on the regular:




 

felldude

Active Member
Aug 26, 2017
505
1,500
The guy who made was able to do it on his smartphone, so I was hoping that's possible as well? He informed me he only had to submit 20 or more photos of the model from all angles, and then he was able to generate said AI art. Problem is, the guy has been inactive since the forum moderators deleted his AI art posts. I've been trying to reach him ever since, to no avail.
He probably submitted it to a Google Colab, as mentioned.

There are some great processor-only trainings that surpass BF16 performance, but they require a $1600 processor and are still all command line.

Civitai has bounties for training LoRAs, or even offers to train LoRAs for about 500 points
(at 50 per day for liking an image and 25 for posting, you could get the points needed in about a week).

Alternatively, you could post your dataset here as a zip and see if anyone would do it.
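The arithmetic on those quoted point rates, taking the numbers above at face value (I haven't verified Civitai's current values):

```python
import math

target = 500         # points quoted for a LoRA training
likes_per_day = 50   # daily points for liking an image (as quoted)
per_post = 25        # points per image posted (as quoted)

# liking daily and posting one image a day:
days = math.ceil(target / (likes_per_day + per_post))
print(days)  # 7 -> "about a week", matching the estimate
```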
 

rogue_69

Newbie
Nov 9, 2021
79
246
This probably isn't the right thread, but I wanted to get a conversation going with people who actually use A.I.

If you google questions like "Do gamers care if the art is A.I. generated?", you get a majority of responses from artists (or so it seems). I understand them hating the concept of A.I. art. To them, the art part of making a game is the part that should get the most effort. If the creator used A.I. art, they are lazy, even if they spent a ton of time on story, programming, etc. Using A.I. art is theft (there is a fair point here, but that will change as things advance). You can't get consistent characters (though that is rapidly becoming easier).

What I'm wondering is whether the average adult game consumer really cares how the art was created, or if that is a talking point from artists who don't want to lose gigs in the future. Players differ on what they deem the most important part of a game. For some it is art; for some it is story. I personally feel that starting next year, most people downloading these types of games won't give a rat's *ss if the art was A.I. generated (or, more likely, if A.I. was used somewhere in the process).
 
  • Like
Reactions: Sepheyer

Jimwalrus

Active Member
Sep 15, 2021
891
3,288
This probably isn't the right thread, but I wanted to get a conversation going with people who actually use A.I.

If you google questions like "Do gamers care if the art is A.I. generated?", you get a majority of responses from artists (or so it seems). I understand them hating the concept of A.I. art. To them, the art part of making a game is the part that should get the most effort. If the creator used A.I. art, they are lazy, even if they spent a ton of time on story, programming, etc. Using A.I. art is theft (there is a fair point here, but that will change as things advance). You can't get consistent characters (though that is rapidly becoming easier).

What I'm wondering is whether the average adult game consumer really cares how the art was created, or if that is a talking point from artists who don't want to lose gigs in the future. Players differ on what they deem the most important part of a game. For some it is art; for some it is story. I personally feel that starting next year, most people downloading these types of games won't give a rat's *ss if the art was A.I. generated (or, more likely, if A.I. was used somewhere in the process).
For me, the quality improvements of AI-generated imagery over DAZ/drawn images are the real clincher.
For legit AAA titles it may be a somewhat different matter, but for the cottage industry of porn games it'll be an absolute game changer as soon as someone gets it right.
It's really just another tool like DAZ, Photoshop etc and therefore quality is still not guaranteed - there's no shortage of current porn games with poor drawings or lazy 3D renders.
Give it a few years and there may be even more cookie-cutter AI generated VNs than DAZ ones!
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,527
3,597
For me, the quality improvements of AI-generated imagery over DAZ/drawn images are the real clincher.
For legit AAA titles it may be a somewhat different matter, but for the cottage industry of porn games it'll be an absolute game changer as soon as someone gets it right.
It's really just another tool like DAZ, Photoshop etc and therefore quality is still not guaranteed - there's no shortage of current porn games with poor drawings or lazy 3D renders.
Give it a few years and there may be even more cookie-cutter AI generated VNs than DAZ ones!
Yea, consistent subjects are where it's at. The moment that problem is solved, everything will turn AI overnight.
 

rogue_69

Newbie
Nov 9, 2021
79
246
Yea, consistent subjects are where it's at. The moment that problem is solved, everything will turn AI overnight.
For me, the Daz to Stable Diffusion workflow gives the best consistency. No matter what you do, Daz renders always have that "plastic" look, but with a denoising strength of about 0.35 you can take a really good render and turn it into something great while keeping things consistent. Consistency in clothing is the biggest problem, but if you render the characters nude and then use a Canvas render of the clothing, you can just overlay the clothes and keep them consistent.
 
  • Like
Reactions: Jimwalrus

hkennereth

Member
Mar 3, 2019
228
740
Yea, consistent subjects are where it's at. The moment that problem is solved, everything will turn AI overnight.
I mean, it's not that hard. You can get reasonably consistent characters with the technique I posted about before (describing a mix of multiple existing people), and if you need them more consistent, you can train a lora for that person.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,527
3,597
For me, the Daz to Stable Diffusion workflow gives the best consistency. No matter what you do, Daz renders always have that "plastic" look, but with a denoising strength of about 0.35 you can take a really good render and turn it into something great while keeping things consistent. Consistency in clothing is the biggest problem, but if you render the characters nude and then use a Canvas render of the clothing, you can just overlay the clothes and keep them consistent.
I love my i2i workflow too, but the shortcomings are deep and prohibitive to fix. Between side faces, fingers, nipples and hair, the issues are just too numerous to allow for consistent characters. Naturally, I can fix any of those flaws, but the time dropped into it makes the whole thing useless for mass production.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,527
3,597
I mean, it's not that hard. You can get reasonably consistent characters with the technique I posted about before (describing a mix of multiple existing people), and if you need them more consistent, you can train a lora for that person.
We probably understand the word "consistent" somewhat differently. I haven't seen a single consistent solution, although a lot of people describe these very same solutions as providing consistent results. My bar for "consistent" is prolly a tad higher, rendering the AI useless for ~90% of asset design.
 

theMickey_

Engaged Member
Mar 19, 2020
2,115
2,653
Quick question (another one, I know): I'm struggling to get some decent "photo-realistic" pictures with ComfyUI, and I think I'm probably still missing something in my workflow. When I look at some models and LoRAs posted on , I am able to see all the details they (apparently) used to create said picture: positive prompt, negative prompt, cfg and seed values, as well as the sampler/scheduler used.

But when I try to reproduce some of them, my results are way off from what is posted on ComfyUI.

Example: I've been trying to reproduce something like , but no matter what workflow I'm trying to create (using the same checkpoints and LoRAs as posted in the link), the images generated are blurry (especially when upscaled) and don't look realistic at all.

Are these images posted on post-processed through something like Photoshop?
What additional (essential) nodes do I need to add to my workflow to make the results more "crisp" and realistic?

This is what my current (basic) workflow looks like:
workflow.png

There's a couple of bypassed nodes which I was experimenting with, but without success.

This is what I want to achieve (all credits for this image goes to on !):
ba808960-cf03-40d0-8069-c4f30449adcf.jpeg

but this is what I get (using the exact same checkpoint, LoRA, prompts, cfg and seed values as well as the same sampler/scheduler and a square image):
ComfyUI_temp_sjhzv_00165_.png

Any help would be much appreciated!
 
  • Like
Reactions: Sepheyer

hkennereth

Member
Mar 3, 2019
228
740
We probably understand the word "consistent" somewhat differently. I haven't seen a single consistent solution, although a lot of people describe these very same solutions as providing consistent results. My bar for "consistent" is prolly a tad higher, rendering the AI useless for ~90% of asset design.
Well, I did say "reasonably" :)

But you're not wrong. The thing is that AI tends to add a lot of variation even when it knows exactly what you are asking for and you are very specific with your prompting. While image quality has been improving quickly over the last year or so, absolute consistency is still very far off, because that's simply not what the tech was designed to accomplish. Even the best LoRA and Dreambooth models I have ever seen will still change face shape a bit, suddenly modify eye colors, etc.

And as rogue_69 said, truly consistent clothing is near impossible to achieve with Stable Diffusion; again, that's not what the tech was designed to achieve. We might still be at least a couple of generations away from being able to create a character dressed in some specific clothing and have that character rendered in different poses, environments, and sizes. But one can get close "enough" for the average use case, I think.
 

me3

Member
Dec 31, 2016
316
708
This probably isn't the right thread, but I wanted to get a conversation going with people who actually use A.I.

If you google questions like "Do gamers care if the art is A.I. generated?", you get a majority of responses from artists (or so it seems). I understand them hating the concept of A.I. art. To them, the art part of making a game is the part that should get the most effort. If the creator used A.I. art, they are lazy, even if they spent a ton of time on story, programming, etc. Using A.I. art is theft (there is a fair point here, but that will change as things advance). You can't get consistent characters (though that is rapidly becoming easier).

What I'm wondering is whether the average adult game consumer really cares how the art was created, or if that is a talking point from artists who don't want to lose gigs in the future. Players differ on what they deem the most important part of a game. For some it is art; for some it is story. I personally feel that starting next year, most people downloading these types of games won't give a rat's *ss if the art was A.I. generated (or, more likely, if A.I. was used somewhere in the process).
It's a tool like many others, and as with those tools there are gonna be people making low-quality shit and others making "works of art". So using it for original content shouldn't be any different than any other "tool".
As for "artists" worried about their paychecks, just look at all the concerns there have been with mass production, or "automation". Yes, there's less need for people in massive numbers; however, those that are actually good and like doing the job in question have found ways to make quite a lot of money. Consider how much money is paid for "handcrafted" or "custom" jobs in things like woodworking or cars/bikes. If you make/do something people are actually interested in, someone will be interested in paying for it, regardless of there being a "cheap and mass-produced" option, and you'll probably be able to charge more for it (eventually).
Quality is what matters, far more than the tool. But considering the "quality" of a lot of games, movies, etc. over the past few years, I don't think AI is the biggest concern in that regard... that's a whole other issue, though.