[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
If you go to Civitai and browse the image section you can find some amazing animations. Sometimes the creators explain how they made them in the comments. This one was made with the help of EbSynth. It's standalone software, but there is apparently an extension for SD as well; you can find it in the built-in extension database.
EbSynth (or the extension) can mask the background, so you can keep it static and achieve a more stable result, according to the OP of the example video.

20231125-234344_with_snd_apo8_prob3.gif
(only a sample gif, follow the link for the full video.)

" you can mask background using ebsynth. Inside ebsynth utility, configuration / etc tab, mask mode set to normal. This is default setting in ebsynth.

Install ebsynth utility from extension tab. "

source:




------------------------------------------------------------------------------------------------------------------------------------------

Here's a video2video guide on Civitai that I forgot to include in my last post about creating animations and videos with SD.
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
I had a go at replicating the "render to real" setup I had in A1111, and tbh I think I got something very wrong with a bunch of ControlNet nodes... anyway.
This is done just with the prompt from WD14; I added a negative prompt to "disallow" renders/CGI etc., but that's it. The rest is just the AI doing a bunch of stuff. I'd hoped to make it a simple "drop an image and hit go", but unfortunately things seem far more touchy than hoped. Strengths seem to be heavily affected by the model, so atm it's going to need quite a bit of tweaking. Different models put different things on the shirt, but they all agree it has to be green for some annoying reason, and at one point the AI decided that grass was a bad thing and turned it all into concrete :p
Seems there's a long road to go still...

View attachment 3140685
You can potentially use inpainting to fix the top's print. Sebastian Kamph has a video tutorial about controlling text with the help of ControlNet inpainting. I don't see why it couldn't be used for a graphic design or a print.


If all else fails, you can edit it afterwards with Photoshop or Photopea etc. I know it's not as satisfying as having SD do it for you.
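
If you'd rather script it than click through the UI, here's a minimal inpainting sketch using the diffusers library (a generic illustration, not the ControlNet-inpaint workflow from the Sebastian Kamph video; the model id and file paths are just examples):

Code:
# minimal diffusers inpainting sketch - example model id and file paths
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",   # example inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("render.png").convert("RGB")       # the image to fix
mask = Image.open("shirt_mask.png").convert("RGB")    # white = area to repaint

result = pipe(
    prompt="green t-shirt with a clean geometric print",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("fixed.png")

Paint the mask white only over the print itself so the rest of the shirt stays untouched.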
 
Last edited:

me3

Member
Dec 31, 2016
316
708
If you go to Civitai and browse the image section you can find some amazing animations. Sometimes the creators explain how they made them in the comments. This one was made with the help of EbSynth. It's standalone software, but there is apparently an extension for SD as well; you can find it in the built-in extension database.
EbSynth (or the extension) can mask the background, so you can keep it static and achieve a more stable result, according to the OP of the example video.

View attachment 3141774
(only a sample gif, follow the link for the full video.)

" you can mask background using ebsynth. Inside ebsynth utility, configuration / etc tab, mask mode set to normal. This is default setting in ebsynth.

Install ebsynth utility from extension tab. "

source:




------------------------------------------------------------------------------------------------------------------------------------------

Here's a video2video guide on Civitai that I forgot to include in my last post about creating animations and videos with SD.
The "problem" i've found so far is that the animating stuff requires you to do most/all of the frames in a single batch. So for me there's a problem when the smallest batching you can do is 16, it needs a minimum of 8gb VRAM to work "well". So even if it's just a tiny overflow it drastically increases processing time. In some cases it hits >700 s/it...so it'll take you hours just todo one batch which can then give you 1-2 sec clip, if you're lucky.
You can't even work on your prompt with single images, because of how the things work it won't generate anything unless it's at a batch minimum (16), you just get barely altered noise. I was perfectly aware i'd be fighting an uphill battle before starting, but that shouldn't stop ppl from doing things. Time and digging will tell, i'm sure there are or will be solutions that work better eventually
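
To put that >700 s/it figure in perspective, here's a rough back-of-the-envelope estimate (the step count is a made-up example, not a measurement):

Code:
# rough time estimate for one 16-frame batch at the worst-case speed above
steps = 20                # hypothetical number of sampling steps
seconds_per_it = 700      # worst case mentioned above
hours = steps * seconds_per_it / 3600
print(f"~{hours:.1f} hours per batch, for roughly a 1-2 second clip")   # ~3.9 hours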
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
The "problem" i've found so far is that the animating stuff requires you to do most/all of the frames in a single batch. So for me there's a problem when the smallest batching you can do is 16, it needs a minimum of 8gb VRAM to work "well". So even if it's just a tiny overflow it drastically increases processing time. In some cases it hits >700 s/it...so it'll take you hours just todo one batch which can then give you 1-2 sec clip, if you're lucky.
You can't even work on your prompt with single images, because of how the things work it won't generate anything unless it's at a batch minimum (16), you just get barely altered noise. I was perfectly aware i'd be fighting an uphill battle before starting, but that shouldn't stop ppl from doing things. Time and digging will tell, i'm sure there are or will be solutions that work better eventually
I agree.

Things are moving very fast though in terms of progress. Just look how far we have come in such a short time with simple text2img. I think it's a good and sobering exercise to go back to the first pages of this thread and see the amazing progress. I'm excited for what the next year will bring.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
You're looking for something like an interrogation node. That'll analyze the image for you and create something you can use as a prompt. Double-click on the background in ComfyUI and search for wd14. You might need to install a node pack called ComfyUI WD 1.4 Tagger; it should be easy to find in the Manager.
Great call, this WD14 bro is a major time saver.
 
  • Like
Reactions: devilkkw

Vanaduke

Active Member
Oct 27, 2017
751
3,121
Hi, I've worked with DAZ to generate 3D images of the mother from Dual Family by Gumdrops (see signature). I'm now interested in generating AI art of the same model.

Basically, BloomingPrince inspired me to one day generate a nude version of his creation:

IMG_8215.png

Does this application require heavy setup? I only have my old laptop (Intel Core) with me, which I used for generating 3D images of said model. Thanks.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
Hi, I've worked with DAZ to generate 3D images of the mother from Dual Family by Gumdrops (see signature). I'm now interested in generating AI art of the same model.

Basically, BloomingPrince inspired me to one day generate a nude version of his creation:

View attachment 3145710

Does this application require heavy setup? I only have my old laptop (Intel Core) with me, which I used for generating 3D images of said model. Thanks.
A warning -- running ComfyUI or its alternatives on a CPU will be slow and will probably limit you to 512x512 renders. I haven't tested CPU-only myself, but speaking as someone with an ~1650 NVIDIA card with 6 GB, even such a potato card takes prohibitively long.

Now, you absolutely can run ComfyUI on a CPU. The installation and the models will require about 10 GB.

Here is the link to get you started:

Note the line: "Works even if you don't have a GPU with: --cpu (slow)"
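
For reference, once ComfyUI is installed, CPU-only mode is just that launch flag. A minimal sketch, assuming a manual install started from the ComfyUI folder (the Windows portable build should also ship a run_cpu.bat that does the same):

Code:
python main.py --cpu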
 

theMickey_

Engaged Member
Mar 19, 2020
2,248
2,938
Quick question: 2-3 days ago, when I started with ComfyUI, I also played a bit with "sd_xl_turbo", the model that can render AI images almost in "real time" (in case you haven't heard of it yet, check ). It was generating new pictures literally while I was typing. Today I wanted to do the same, and using the exact same workflow, it now takes ~3-4 seconds to render a single image. Anyone noticed this as well? And what could have changed its behavior?

Again, I'm using the same workflow and settings shown in the video. But now the "CLIP Text Encode (Prompt)" node stays active for about 1 second after changing the text, and the "SamplerCustom" then takes another 2-3 seconds to actually sample the image (I've replaced the "SamplerCustom" with a simple KSampler, still the same). I'm so confused...

I did install a couple of custom nodes (like IPAdapter, ControlNet etc., just some "basic" stuff); could that have an impact on how things work? Those nodes are not part of my workflow, though. There was also an update for my NVIDIA driver, so I'm wondering if that could have any impact.

I'd love to figure that out, because I assume that if something as simple as the sd_xl_turbo workflow got like 10-20 times worse, something (anything?) else might be affected as well.

Any advice would be highly appreciated!


P.S.: I might install a "naked" ComfyUI with just the ComfyUI-Manager in a second folder and try that Turbo workflow without any additional custom nodes to see if that still works.

// EDIT: a fresh installation (even without the Manager) did fix the issue. FML, now I'm going to need to find out which of the "custom nodes" broke my other installation :cautious:
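
For anyone wanting to run the same side-by-side test, a minimal sketch of a clean second install (assuming a git/pip manual install rather than the portable build; the folder name is just an example):

Code:
git clone https://github.com/comfyanonymous/ComfyUI ComfyUI-clean
cd ComfyUI-clean
pip install -r requirements.txt
python main.py

That way the clean copy has its own custom_nodes folder, so nothing from the original install can interfere.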
 
Last edited:

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
Quick question: 2-3 days ago, when I started with ComfyUI, I also played a bit with "sd_xl_turbo", the model that can render AI images almost in "real time" (in case you haven't heard of it yet, check ). It was generating new pictures literally while I was typing. Today I wanted to do the same, and using the exact same workflow, it now takes ~3-4 seconds to render a single image. Anyone noticed this as well? And what could have changed its behavior?

Again, I'm using the same workflow and settings shown in the video. But now the "CLIP Text Encode (Prompt)" node stays active for about 1 second after changing the text, and the "SamplerCustom" then takes another 2-3 seconds to actually sample the image (I've replaced the "SamplerCustom" with a simple KSampler, still the same). I'm so confused...

I did install a couple of custom nodes (like IPAdapter, ControlNet etc., just some "basic" stuff); could that have an impact on how things work? Those nodes are not part of my workflow, though. There was also an update for my NVIDIA driver, so I'm wondering if that could have any impact.

I'd love to figure that out, because I assume that if something as simple as the sd_xl_turbo workflow got like 10-20 times worse, something (anything?) else might be affected as well.

Any advice would be highly appreciated!


P.S.: I might install a "naked" ComfyUI with just the ComfyUI-Manager in a second folder and try that Turbo workflow without any additional custom nodes to see if that still works.

// EDIT: a fresh installation (even without the Manager) did fix the issue. FML, now I'm going to need to find out which of the "custom nodes" broke my other installation :cautious:
You probably know, the one thing that truly messes up Comfy is the wrong tensor model. I did that once and spent three months in limbo.
 
  • Like
Reactions: DD3DD

theMickey_

Engaged Member
Mar 19, 2020
2,248
2,938
You probably know, the one thing that truly messes up comfy is the wrong tensor model.
I assume so, and I double-checked whether or not I had the right model/checkpoint selected ("sd_xl_turbo_1.0.safetensors" for the Turbo thingy), which I downloaded through the manager.

Other models (which are not loaded in the workflow) shouldn't interfere, right?
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,778
I assume so, and I double-checked whether or not I had the right model/checkpoint selected ("sd_xl_turbo_1.0.safetensors" for the Turbo thingy), which I downloaded through the manager.

Other models (which are not loaded in the workflow) shouldn't interfere, right?
My bad, I meant the torch installation. FML, I can't even talk straight. Sorry, disregard what I wrote above. I think what happened with the custom nodes is that one of the essentials, such as torch, got updated to a later version. You are lucky you rolled back alright. A messed-up Comfy takes a fuckton of time to cure :(
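
A quick way to check which torch build your ComfyUI environment is actually running (use the same Python that launches ComfyUI; portable installs bundle their own python.exe):

Code:
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"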
 

theMickey_

Engaged Member
Mar 19, 2020
2,248
2,938
I meant the torch installation.
Thanks for clarifying that :)

What I will do for now: save all my models, checkpoints, LoRAs, workflows and whatnot outside of the ComfyUI installation folder and work with links or something like that (it seems you can also use a config file to specify paths to those things, but I doubt it'll work for everything). Same for the "input" and "output" folders. That way I will have two main folders:
  • 1st folder: ComfyUI with just "nodes" and stuff
  • 2nd folder: all the models, checkpoints etc. as well as input and output folders
Then I should be able to back up my (still working) ComfyUI base folder every once in a while before I do major updates or install new nodes.
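
For the config-file route: ComfyUI ships an extra_model_paths.yaml.example in its root folder; renamed to extra_model_paths.yaml, it lets you point the model folders at an external location. A rough sketch (the base_path and subfolders below are made-up examples; check the bundled example file for the exact keys it supports):

Code:
comfyui:
    base_path: D:/SD-assets/
    checkpoints: models/checkpoints/
    loras: models/loras/
    vae: models/vae/
    controlnet: models/controlnet/
    embeddings: models/embeddings/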
 
  • Like
Reactions: Sepheyer and DD3DD

Vanaduke

Active Member
Oct 27, 2017
751
3,121
A warning -- running ComfyUI or its alternatives on a CPU will be slow and will probably limit you to 512x512 renders. I haven't tested CPU-only myself, but speaking as someone with an ~1650 NVIDIA card with 6 GB, even such a potato card takes prohibitively long.

Now, you absolutely can run ComfyUI on a CPU. The installation and the models will require about 10 GB.

Here is the link to get you started:

Note the line: "Works even if you don't have a GPU with: --cpu (slow)"
The guy who made was able to do it on his smartphone, so I was hoping that would be possible as well. He informed me he only had to submit 20 or more photos of the model from all angles, and then he was able to generate said AI art. Problem is, the guy has been inactive since the forum moderators deleted his AI art posts. I've been trying to reach him ever since, to no avail.
 
  • Like
Reactions: Sepheyer

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
The guy who made was able to do it on his smartphone, so I was hoping that would be possible as well. He informed me he only had to submit 20 or more photos of the model from all angles, and then he was able to generate said AI art. Problem is, the guy has been inactive since the forum moderators deleted his AI art posts. I've been trying to reach him ever since, to no avail.
He was talking about training a LoRA. It's obviously not possible to do on a smartphone, since it requires a GPU with at least 6 GB of VRAM. If you don't have a PC or laptop with enough performance, the next alternative is one of the cloud-based services that exist. You pay per hour, and in some places you can rent a very powerful GPU. I don't know much about running SD on a cloud service, so I can't give you more than general info and links.

one of the many services that exist:
There are other alternatives as well.

A tutorial video about cloud based Stable Diffusion:
There are many more guides and tutorials.

A short list of Stable Diffusion YouTubers who create guides regularly:




 

felldude

Active Member
Aug 26, 2017
572
1,701
The guy who made was able to do it on his smartphone, I was hoping that is possible as well? He informed me he only had to submit 20 or more photos of the model from all angles, then he was able to generate said AI art afterwards. Problem is, the guy is now inactive after the forum moderators deleted his AI art posts. I've been trying to communicate with him ever since to no avail.
He probably submitted it to a Google Colab, as mentioned.

There are some great CPU-only training setups that surpass BF16 performance, but they require a $1600 processor and are still all command line.

Civitai has bounties for training LoRAs, and even offers to train LoRAs for about 500 points
(at 50 points per day for liking images and 25 for posting, you could get the points needed in about a week).

Alternatively, you could post your dataset here as a zip and see if anyone would do it.
 

rogue_69

Newbie
Nov 9, 2021
87
298
This probably isn't the right thread, but I wanted to get a conversation going with people who actually use A.I.

If you google questions like "Do gamers care if the art is A.I. generated?", the majority of responses seem to come from artists. I understand them hating the concept of A.I. art. To them, the art part of making a game is the part that should get the most effort; if the creator used A.I. art, they are lazy, even if they spent a ton of time on story, programming, etc. Using A.I. art is theft (there is a fair point here, but that will change as things advance). You can't get consistent characters (that is rapidly becoming easier).

What I'm wondering is whether the average adult game consumer really cares how the art was created, or whether that is a talking point from artists who don't want to lose gigs in the future. Players have different ideas about the most important part of a game. For some it is art; for some it is story. I personally feel that starting next year, most people downloading these types of games won't give a rat's *ss whether the art was A.I. generated (or, more likely, whether A.I. was used somewhere in the process).
 
  • Like
Reactions: Sepheyer