
[Stable Diffusion] Prompt Sharing and Learning Thread

Falmino

New Member
Dec 27, 2022
8
1
What PC specs would I need to run Stable Diffusion? I have 16GB RAM and a GT 1030 (2GB VRAM).
 

hkennereth

Member
Mar 3, 2019
239
784
Would it still be possible for me to run Stable Diffusion? If not, are there any good alternatives?
There are a ton of online solutions for generating images with Stable Diffusion, where you can get started with the tech without needing any hardware of your own besides a browser. Some are free, some are paid, some are more limited than others, but if you're just getting started most of these should be good enough to learn the ropes:






 

sharlotte

Member
Jan 10, 2019
322
1,731
Has anyone tried it yet or even seen this in the past few days?

It claims up to 2x generation speed with AUTOMATIC1111:

Going to try this today and see if that helps.
A note of caution, however, from GitHub (screenshot omitted):



When you install, use "Install from URL" with the GitHub link above. Once done, restart the webui, then go to the new tab and follow the steps.

More info is available here on the install:


Here is how to use it with SDXL (not super helpful unless you want to join the guy's Patreon for $x a month...). I'm trying to find another way to do it; from the video, it sounds like this RT support is not fully incorporated yet, hence only 1.5 is supported in the main branch of AUTOMATIC1111.



NB: a word of caution, I'm getting CUDA issues, probably because the latest version of CUDA isn't installed. Still installing and will edit/update the steps here as I do so. It was not CUDA at all; see the workaround below.
NB2: At the moment, the install is not completing even with the newest CUDA installed. Will persevere... and then report here.
NB3: There are quite a few bugs on the GitHub page. I'd say wait a few days before trying to install unless you are feeling adventurous. Which I am... boldly going on with the install, as Captain Picard would :)

There appear to be quite a few users getting errors when trying to install (I was one). The solution that worked for me is detailed in one of the bug reports on the GitHub (screenshot of the fix omitted).
 
Last edited:

sharlotte

Member
Jan 10, 2019
322
1,731
Alright, so after a few hours I still can't get SDXL working with it, so I guess I'll wait a bit. It doesn't support ComfyUI at the moment, and it also seems not to support ControlNet and HiRes fix. So for the lovers of curvy beauties here, far from ideal.

It does run much faster on SD1.5, but the current limitations probably aren't worth it (at least for me).
 

Falmino

New Member
Dec 27, 2022
8
1
There are a ton of online solutions for generating images with Stable Diffusion, where you can get started with the tech without needing any hardware of your own besides a browser. Some are free, some are paid, some are more limited than others, but if you're just getting started most of these should be good enough to learn the ropes:






Browser AI image generation is a little limited, but is there an alternative to Stable Diffusion that I can download and use offline? And what happens if I force my potato PC to run Stable Diffusion?
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,581
3,804
Browser AI image generation is a little limited, but is there an alternative to Stable Diffusion that I can download and use offline? And what happens if I force my potato PC to run Stable Diffusion?
Welp, just do it and see what happens. I think the most optimized UI for desktop is ComfyUI. The "readme" says:

Works even if you don't have a GPU with: --cpu (slow)

So give it a try:

Just make sure to add that "--cpu" to the launch command.
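For reference, a minimal sketch of what the CPU-only setup looks like on the command line (repo URL and requirements file as per the ComfyUI readme; treat the exact steps as assumptions):

```shell
# clone ComfyUI and install its Python dependencies
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
python -m pip install -r requirements.txt

# launch in CPU-only mode; expect minutes per image rather than seconds
python main.py --cpu
```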
 

hkennereth

Member
Mar 3, 2019
239
784
And what happens if I force my potato PC to run Stable Diffusion?
With the specs you described, my guess as to what will happen is "not much", as in "it will not run". Or, if you do manage to make it run (by turning off GPU rendering and using the CPU instead, for example), at best it will be so incredibly slow that you'll give up within a couple of days; even the best CPUs take between 10 and 30 minutes to render a single image. But as Sepheyer pointed out, you are free to try.

Browser AI image generation is a little limited, but is there an alternative to Stable Diffusion that I can download and use offline?
Yeah, it's limited, but not as much as you imagine. All the alternatives I suggested ARE Stable Diffusion, just running on a server instead of on your machine; some of them even run exactly the same software you would otherwise install locally. Those are the limitations one must accept when options are limited: there is no magical way to make a hardware-hungry application like AI image generation run on an old machine.
 
Last edited:

felldude

Active Member
Aug 26, 2017
572
1,702
Browser AI image generation is a little limited, but is there an alternative to Stable Diffusion that I can download and use offline? And what happens if I force my potato PC to run Stable Diffusion?
It's entirely possible to generate images locally in 10-30 seconds (or a batch of 50 in seconds with a $10k processor).
Intel guide.

But most builds are optimized for BF16 math, and that is a no-go for CPUs.
Search for OpenVINO and IPEX, or hope someone makes a CPU-optimized build like was done for AMD with DirectML and ONNX.

on a processor (it's a $10k processor, but it was used for training)

It might be worth a shot to try ComfyUI CPU-only with this build

" python -m pip install intel_extension_for_pytorch -f "
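The quoted command's trailing `-f` flag normally points at a package-index URL that didn't survive the post, so as a hedged sketch the install and a quick sanity check would look something like this (package name per Intel's published docs; the index URL stays elided here too):

```shell
# install Intel Extension for PyTorch (CPU build)
# NB: the post's -f <index-url> argument is omitted because the URL was not captured
python -m pip install intel_extension_for_pytorch

# verify it imports and reports a version
python -c "import intel_extension_for_pytorch as ipex; print(ipex.__version__)"
```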
 

devilkkw

Member
Mar 17, 2021
330
1,118
I've got an RTX3060 12GB, so I don't have to worry as much about it hitting the vRAM limit and failing over into system RAM (and yeah, that sure as hell slows it down!). Unfortunately A1111 is so poorly optimised that it's running a couple of GB in vRAM even when idling. I don't think it's the Checkpoint as even when I've loaded a 7GB one the vRAM shows as ~2GB.
What about the driver version?
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,058
4,042
What about the driver version?
The last but one, I think, whichever that is. I had a pop-up for a new one the other day but ignored it.
It's definitely one of the newer drivers that spill over into system RAM when vRAM is full.
 

devilkkw

Member
Mar 17, 2021
330
1,118
What's slow in A1111 is pushing out the image. From what I see, generation is fast, but it slows down at the moment the image is pushed out; that's why I'm on the 532.03 driver.
But with 12GB it seems strange that you have those problems.
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,058
4,042
What's slow in A1111 is pushing out the image. From what I see, generation is fast, but it slows down at the moment the image is pushed out; that's why I'm on the 532.03 driver.
But with 12GB it seems strange that you have those problems.
"Slow" may be relative. Just over 3mins for an image at ~900x~1300 resolution, doing all the upscaling in HiRes fix.
Dunno, maybe I'm greedy and want lots of images immediately!!
 

me3

Member
Dec 31, 2016
316
708
What PC specs would I need to run Stable Diffusion? I have 16GB RAM and a GT 1030 (2GB VRAM).
Since I started out with, and occasionally still use, a 1050 2GB: yes, it's possible to run both A1111 and ComfyUI on just 2GB VRAM, but you'll need patience.
The 1030 seems to have slower memory than the 1050, so times will likely be slower for you. Also, I'm running it on Ubuntu, so I'm not sure how it would behave on Windows.
The thing is, you'd probably need a second device of some kind that can run a browser, and use that to access the UI remotely while the SD setup runs on the 1030 machine. The reason is that running a browser on the same machine eats up VRAM and will more than likely freeze things, especially during model loading. In my case, with no browser window running, PyTorch gets about 1.5-1.7GB to work with; just having a browser open drops that to almost 1.1GB.

For this, both A1111 and ComfyUI need --listen as a launch argument so they can be accessed over the network.
A1111 takes about 45-55s to generate a 512x512 image at 20 steps; ComfyUI takes about 35-40s. (Using a tiled VAE node with the minimum tile size, ComfyUI can at least reach width+height of about 1500px without OOM, possibly higher, at 1m 20s for 20 steps.)
A1111 really doesn't like you switching models, no joke: it easily takes 15-20 min, and it doesn't matter whether the model is 2GB or 6GB, the time is pretty much the same. So pick a model and stick with it.
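The remote-access setup above boils down to launch flags like these (the host IP is an assumption; 7860 and 8188 are the usual default ports):

```shell
# on the 2GB-VRAM machine, with no local browser running:
python launch.py --listen    # AUTOMATIC1111 webui
python main.py --listen      # ComfyUI

# then, from a second device's browser on the same network:
#   http://<host-ip>:7860    (A1111 default port)
#   http://<host-ip>:8188    (ComfyUI default port)
```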
 

Falmino

New Member
Dec 27, 2022
8
1
It's entirely possible to generate images locally in 10-30 seconds (or a batch of 50 in seconds with a $10k processor).
Intel guide.
Thanks, but I don't understand the guide and I'm not that knowledgeable about PC specs and hardware.
Also, thanks Sepheyer and hkennereth.

Also, I'm just experimenting with AI-made art, specifically something I can use as a desktop app rather than in the browser, because of the limitations and paywalls. I also value the time it takes the AI to generate an image, because AI image generation is more or less trial and error on the prompt. I don't really have a good PC, so I guess I'm just stuck using the browser, unless something can be done. Thanks though. :)
 

felldude

Active Member
Aug 26, 2017
572
1,702
Thanks, but I don't understand the guide and I'm not that knowledgeable about PC specs and hardware.
Also, thanks Sepheyer and hkennereth.

Also, I'm just experimenting with AI-made art, specifically something I can use as a desktop app rather than in the browser, because of the limitations and paywalls. I also value the time it takes the AI to generate an image, because AI image generation is more or less trial and error on the prompt. I don't really have a good PC, so I guess I'm just stuck using the browser, unless something can be done. Thanks though. :)
The short version is you could have a setup using any of those that could generate images in seconds locally. (Some of those processors are 150-300 US dollars.)

You could try this; it might speed up generation even with a 3rd-generation processor, as they are made to handle FP16 and FP32 math faster than the BF16 used by graphics cards.

Unfortunately, the IPEX builds are all for Linux.
 
Last edited:

Synalon

Member
Jan 31, 2022
225
665
Random Halloween picture idea I had; it was supposed to be on a gothic castle balcony at night. It's not bad, but it's also not what I wanted.

If anybody else wants to edit it, feel free. I was thinking of turning it into a landscape style, but I don't have the patience.

I've added a larger version I upscaled, in zip format, since it was too large for F95.

Halloween Small.jpg
 

Falmino

New Member
Dec 27, 2022
8
1
The short version is you could have a setup using any of those that could generate images in seconds locally. (Some of those processors are 150-300 US dollars.)

You could try this; it might speed up generation even with a 3rd-generation processor, as they are made to handle FP16 and FP32 math faster than the BF16 used by graphics cards.

Unfortunately, the IPEX builds are all for Linux.
Ummm, I dunno about that, but my CPU is an Intel i5-8400 @ 2.80GHz.