[Stable Diffusion] Prompt Sharing and Learning Thread

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
I'm going to test training SDXL on a pornographic concept, using color association to ease the formation of the neural network's associations. I don't know much about the theory, but I assume it builds associations, so it should work.

Essentially, I'll duplicate each image into two identical copies, then color specific regions in one of them. Then I'll caption what the colors are associated with. The AI already knows what the colors are, so it should associate them with the concept being learned.
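A minimal sketch of that dataset prep, assuming PIL is available; the region boxes and colors are placeholder values for illustration, not anything SDXL-specific:

```python
from PIL import Image, ImageDraw

def make_color_coded_pair(src_path, regions, out_plain, out_coded):
    """Save an untouched copy plus a copy with flat-colored regions.

    regions: list of ((x0, y0, x1, y1), color) pairs marking the
    areas / concepts to associate with each color.
    """
    img = Image.open(src_path).convert("RGB")
    img.save(out_plain)  # the original, captioned with the concept

    coded = img.copy()
    draw = ImageDraw.Draw(coded)
    for box, color in regions:
        draw.rectangle(box, fill=color)  # flood the region with a flat color
    coded.save(out_coded)  # captioned with "red = X, blue = Y" etc.
```

The matching captions ("red region = concept X" and so on) would then go in whatever caption files your trainer reads.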

Here are some example images.

View attachment 3270379

View attachment 3270385


View attachment 3270388
That's very interesting. Let us know how it turns out. :) (y)
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
Sorry for the noob question. I have an Intel card and I know that you can use A1111 with OpenVINO, but what about ComfyUI?

Also, any other recommendations for beginners with Intel cards?
The most important aspect is the amount of VRAM. The card needs an absolute minimum of 4GB for generating images, and the chip itself can't be slow as snails either. There are many settings and small things you can do if you suffer from low VRAM.
You can add the argument "--lowvram" (one word) to the "webui-user.bat" file. In the UI itself you can set the various settings to their "lightest" mode. Then keep the resolution low when generating, with a low number of steps etc, to begin with, until you know what you can achieve with your card. Start with 512x512, or in portrait mode (2:3) you can go below 512, such as 344x512. Then use the SD Upscale script in img2img to make the image larger. Since this uses "tiling" you can upscale by 4x, which gets you from 344x512 to 1376x2048. Keep to an easier style or genre such as anime or manga. With easier I only mean in terms of hardware requirements.
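For reference, this is the whole change: the stock webui-user.bat only needs its COMMANDLINE_ARGS line edited (note the flag is `--lowvram` as one word; `--medvram` is the gentler option if `--lowvram` turns out too slow):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--lowvram

call webui.bat
```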

Example:

344x512 → Upscaled to 1376x2048
00001-1940226755.png 00003-1940226755.png

If your card can't hack it, then Google Colab or "pod"-type services might be the option for you: online, server-based image generation. On some sites you can rent a high-end card by the hour, which means there is nothing stopping you from training your own models etc, as long as you are willing to pay the hourly fee.
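To see why tiling keeps memory use flat, here is a toy sketch of the idea behind the SD Upscale script, assuming PIL. It uses plain Lanczos resizing on each tile instead of a diffusion pass, so it only illustrates the memory pattern, not the quality gain:

```python
from PIL import Image

def tiled_upscale(img, factor=4, tile=128):
    """Upscale by processing one small tile at a time.

    Only one tile is ever held "in flight", which is why a 4x pass
    over a 344x512 image fits where a single 1376x2048 pass would not.
    """
    w, h = img.size
    out = Image.new(img.mode, (w * factor, h * factor))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            box = (x, y, min(x + tile, w), min(y + tile, h))
            patch = img.crop(box)           # grab one tile
            up = patch.resize((patch.width * factor, patch.height * factor),
                              Image.LANCZOS)
            out.paste(up, (x * factor, y * factor))
    return out
```

The real script additionally overlaps the tiles so the diffusion pass doesn't leave visible seams.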
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
Another example for low-VRAM people.

This is also a challenge for anyone who wants to participate.
The point of the challenge is to be more creative with the prompt and come up with new, innovative solutions within specified
limitations and without the usual toys. The basic idea is to emulate the challenge people have with an old, weak GPU.
This is why we keep the resolution low and avoid using a bunch of extensions in txt2img.
It's meant to be a learning exercise first and foremost, not a competition.
No one will lynch you if you take small liberties, but it's more fun if everyone tries to stick to the "script". :)

The limitations:

- In txt2img:
Use low res, 344x512 or 512x512.
No ControlNet, After Detailer etc, and no Roop or ReActor.
Postprocessing is allowed.
Face restore is allowed if you really want to use it.
Keep the prompt simple and under 90 tokens, and no more than 2 LoRAs or embeddings in total, preferably none.
You can choose any genre and concept, nude or SFW.

- In img2img:
You are free to use inpaint as much as you wish, and After Detailer in the interest of fixing hands or deformed details etc.
Maybe I'm wrong, but I think it's less memory-demanding when you already have an image to work with.
Keep the prompt in After Detailer somewhat simple as well.
The same limit of LoRAs and/or embeddings (2) applies in After Detailer as in txt2img.
No ControlNet, Roop or ReActor.
Use the SD Upscale script with any upscaler you want, at 2-4x, to finalize the image.

Post both the image from txt2img and the final image from img2img so we can see the prompt and process.
Give a short description outlining the process and the general concept.
Also share any thoughts or reflections about things you might have discovered and learned.

The challenge will continue as long as someone is still interested.

Remember to have fun.
-------------------------------------------------------------------------------------------------------------------------------------------

In txt2img:
I will expand the prompt a little from before and see what I can achieve within these limits. I avoid using any extensions that add to the memory demand, such as ControlNet or After Detailer etc, in txt2img. I only use GFPGAN postprocessing, as I don't think it is very demanding.


In img2img:
I use After Detailer for fixes and enhancing.
Lately I have experimented with using an alternative ckpt for After Detailer, with very interesting results.
I had to fix a tiny detail on the thumb's fingernail with Photoshop for the first image.
A little "cheating" has never hurt anyone, has it?.. :giggle:
Then I turn off GFPGAN postprocessing and all models in After Detailer, with the exception of eyes, before upscaling.
I upscale with the SD Upscale script, with UltraSharp at 4x, to finalize my image.


344x512 → Upscaled to 1376x2048
00032-2421455531.png 00072-2421455531.png
00037-2421455531.png 00074-2421455531.png
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,053
4,035

OK, does this meet the rules? Didn't go above 4.2GB of VRAM, and then only briefly.
344x512, with HiRes Fix at 1.05 (which used no extra VRAM)
00138-447803216.png

Upscaled 2x using 4xNMKDSuperscale to 720x1072
Tiny bit of extra GFPGAN (0.01)
No other post-processing.
02026.png

The VRAM could almost undoubtedly be reduced further using '--lowvram'.

Could I do better with a bit more time? Probably! But yeah, like some other things in life, it's not how big it is, it's what you do with it that counts.
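Those dimensions check out if A1111 rounds hires-fix sizes to the nearest multiple of 8 (an assumption about the exact rounding rule, but it matches the numbers in the post):

```python
def round8(x):
    """Round to the nearest multiple of 8, as latent sizes require."""
    return int(round(x / 8)) * 8

def hires_then_upscale(w, h, hires=1.05, upscale=2):
    """Apply a hires-fix scale factor, snap to 8, then upscale."""
    w2, h2 = round8(w * hires), round8(h * hires)
    return w2 * upscale, h2 * upscale
```

With the values above: 344x512 at 1.05 snaps to 360x536, and doubling that gives exactly the 720x1072 reported.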
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
Yes. Excellent. I love it.
The "rules" are not carved in stone, more like guidelines. The interesting part is to see what you guys can come up with without relying on memory-demanding extensions, keeping the resolution low, and trying to be creative and inventive. I had the idea when trying to give advice to lobotomist, the guy with an Intel card. What would it be like? And what could we achieve with those kinds of limitations?
:)(y)
 

lobotomist

Active Member
Sep 4, 2017
908
879
did you quote me by mistake?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
did you quote me by mistake?
No. I made the post for you. I generated example images so you can see what you might be able to do if you have at least 4GB of VRAM on your Intel card. If your card can't do SD, there are online sites that let you use their computers, where you can rent a high-end graphics card. "Inspired" by this, I even started a challenge to create images with your scenario in mind.
 

lobotomist

Active Member
Sep 4, 2017
908
879
Even the cheapest Intel card that nobody buys has 4GB... You couldn't even take a second to google VRAM on Intel cards before writing a huge wall of text? Thanks, I guess..
I have 8GB on my A750, which is pretty capable of doing Stable Diffusion. My question was mostly because I don't know if all those tools like ComfyUI are NVIDIA-only.

Oh, also: the other most common Intel card, the A770, has 16GB of VRAM.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
Even the cheapest Intel card that nobody buys has 4GB...
I have 8GB on my A750, which is pretty capable of doing Stable Diffusion. My question was mostly because I don't know if all those tools like ComfyUI are NVIDIA-only.
Oh, also: the other most common Intel card, the A770, has 16GB of VRAM.
Don't expect anyone to be able to read your mind. If you want to know something specific, then spit it out.

You couldn't even take a second to google VRAM on Intel cards before writing a huge wall of text? Thanks, I guess..
Why so pissy when people are just trying to be helpful? It's not anyone's job to help you, but we do it anyway.
Don't expect anyone to fall over themselves to answer you in the future if this is how you respond.
 

lobotomist

Active Member
Sep 4, 2017
908
879
Look, sorry for being so "pissy", I usually am more grateful when someone tries to help me.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,776
Here's an interesting case that I was never able to replicate.

The most striking feature here is the skin, but this was achieved incidentally: ADetailer grabbed the entire figure as a hand and retextured it.

Does anyone know how to do this consistently?

a_00315_.png
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,053
4,035
Maybe try reducing the 'Detection model confidence threshold'? Or use one of the Person options in ADetailer?
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
So far we have learned that you are not a complete beginner and that your card can do SD in A1111 at least. I don't use ComfyUI, but from what I've heard from its users, it is kinder on the GPU and better with memory. This means you can try out ComfyUI with confidence. I can't help or give you tips about the plugins or extensions; Sepheyer and me3, as well as a few others, know this stuff much better. If you ask them nicely I'm sure they will help you. Olivio Sarikas has a video tutorial series for ComfyUI beginners you can also check out.

Learn ComfyUi Part one:


Olivio Sarikas YT Channel:
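On the Intel question specifically: ComfyUI is not NVIDIA-only. Its README documents Intel GPU support, and on Windows the DirectML backend is the simple route for Arc cards. A rough setup sketch, worth double-checking against the current ComfyUI README since I haven't run an Arc card myself:

```shell
# Get ComfyUI and its dependencies
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt

# Windows + Intel Arc: use the DirectML backend
pip install torch-directml
python main.py --directml
```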
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
I love this pose. If you figure something out for the skin let us know. :) (y)
 

me3

Member
Dec 31, 2016
316
708
When I was prepping images for posting to this little "low VRAM" challenge, I noticed that the node I'd been using didn't save any of the metadata, neither prompt nor workflow. So for poor old "Gandalf" here, and a bunch of other images, I need to try and remember wtf I used as a prompt etc :/
PB-_temp_djgnc_00003_.png

So the only image I can really enter with atm is what I currently still have open.
The image itself should contain the workflow, but noise generation uses the GPU, so there will be a slight difference unless you've got the exact same card as me.
The "workflow" image is just a screengrab for those who don't use Comfy or don't want to load it up.

low_0001.png

workflow(2).jpg

I think this should comply with showing each step/part of the creation at least.
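On the missing metadata: both A1111 and ComfyUI store generation info as PNG text chunks ("parameters" for A1111; "prompt" and "workflow" JSON for ComfyUI), so you can check a file before assuming the prompt is really gone. A small sketch, assuming PIL:

```python
from PIL import Image

def generation_metadata(path):
    """Return the PNG text chunks, if any (A1111: 'parameters';
    ComfyUI: 'prompt' and 'workflow')."""
    with Image.open(path) as img:
        return {k: v for k, v in img.info.items() if isinstance(v, str)}
```

An empty dict means the save node really did strip everything, and the prompt has to be reconstructed from memory.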
 

Synalon

Member
Jan 31, 2022
225
665
I'm having a similar issue in Automatic1111. It seems to happen if I have After Detailer active, although it will also randomly save into a log folder as well.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
Love the Gandalf. Not all heroes wear a red cape.. Some have an awesome grey beard.. :love: Oh, and the lady looks very interesting. Very artistic and slightly abstract. I love the general concept and that it is consistent throughout the image. It's like she is stepping through a portal or a timewarp, yet she has a tribal look rather than a futuristic one. Or maybe it is the thought and imagination bubble of me3 she is emerging from. Nice images. :) (y)
 