[Stable Diffusion] Prompt Sharing and Learning Thread

Kaseijin

Active Member
Jul 9, 2022
590
1,000
It's impossible without the metadata. We can only guess and speculate, or you can message the poster and ask pretty please with sugar on top. The Insta account is dead, FYI.




Nice work!

Check this user out too


 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
I may have mentioned this before, but there is an upscaler called SkinDiffDetails which is excellent at enhancing and maintaining skin details and imperfections. You can find it on this wiki (under the Skin category):
NMKD also has a face-focused upscaler, among others.
 

Nitwitty

Member
Nov 23, 2020
354
196
01552-_hyperV1_v10_DPM++ 2M Karras_S-30_C-7_3872753096.png

Has anybody checked out the Hyper V1 checkpoint yet? It's excellent for photorealistic images. You can find it on Civitai. Anyway, I'd like to know where we are on creating consistent models, if this technology is going to replace DAZ for making visual novels. Also, how do we address the issue of hands and feet? For me, I'm more concerned about feet and soles (heh heh).
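The attachment filename above embeds the generation settings (sampler, steps, CFG scale, seed), which is handy when an image's metadata has been stripped. A minimal sketch of pulling them back out — note the field layout (counter, model + sampler, `S-steps`, `C-cfg`, seed) is inferred from this single example, not from any documented spec:

```python
def parse_a1111_filename(name: str) -> dict:
    """Pull settings out of a filename like
    '01552-_hyperV1_v10_DPM++ 2M Karras_S-30_C-7_3872753096.png'.
    The layout is an assumption based on this one example."""
    stem = name.rsplit(".", 1)[0]          # drop the extension
    head, seed = stem.rsplit("_", 1)       # trailing seed
    head, cfg = head.rsplit("_C-", 1)      # CFG scale
    head, steps = head.rsplit("_S-", 1)    # sampling steps
    counter, model_sampler = head.split("-_", 1)
    # The boundary between model name and sampler name is ambiguous
    # in the filename, so they are kept joined here.
    return {
        "counter": int(counter),
        "model_and_sampler": model_sampler,
        "steps": int(steps),
        "cfg_scale": float(cfg),
        "seed": int(seed),
    }

settings = parse_a1111_filename(
    "01552-_hyperV1_v10_DPM++ 2M Karras_S-30_C-7_3872753096.png"
)
```

With a dict like this you can re-run the same seed and settings in the webui to reproduce (or riff on) a posted image.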
 

Nitwitty

Member
Nov 23, 2020
354
196
Here's the same prompt with HyperV1, which also shows the difference in upscaling. Both have the same seed, but the first was upscaled three times, while the second started from a slightly larger original and was upscaled twice, giving the same output size.

View attachment 2594159

View attachment 2594160
I think I like HyperV1 a little better, even though it needed the extra upscaling. Try EdgeofRealism and see what you get with the same prompt.
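For anyone checking the arithmetic on chained upscales: each pass multiplies both dimensions by its factor, so different routes can land on the same final resolution. A small sketch — the base resolutions and 2x factors here are made-up illustrations, not values read from the attachments:

```python
def final_size(base: tuple, factors: list) -> tuple:
    """Apply a chain of upscale passes; each pass multiplies
    both dimensions by its factor."""
    w, h = base
    for f in factors:
        w, h = round(w * f), round(h * f)
    return (w, h)

# Hypothetical: a 512x768 render upscaled 2x three times, versus a
# 1024x1536 render upscaled 2x twice, both end at 4096x6144.
route_a = final_size((512, 768), [2, 2, 2])
route_b = final_size((1024, 1536), [2, 2])
```

The fewer-passes route is usually faster and accumulates fewer upscaler artifacts, at the cost of a slower initial generation.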
 

Nitwitty

Member
Nov 23, 2020
354
196
I'm keeping an eye on the Unstable Diffusion project to see if they can come up with more advanced diffusion models that are not filtered for NSFW and can possibly solve the problem with hands and feet. It's rare to find AI art of women with a good representation of feet, and especially soles; you always have to jump through hoops to get decent results. I'd also like to know more about creating consistent characters and how that is accomplished. I know it's done with ControlNet and other extensions, but has anybody had experience yet creating consistent output so as to tell a story with a series of pictures?
 

Nitwitty

Member
Nov 23, 2020
354
196
After some searching I found (checkpoint). It looks promising for this purpose. In the sample images you can see skin defects, wrinkles and peach fuzz (vellus hair); it even looks like it has skin pores.
View attachment 2592664

I'm downloading the model, but what is this Facedetail LoRA? Should I use both at the same time, then?
 

Nitwitty

Member
Nov 23, 2020
354
196
It most certainly does - here's my attempt at generating it in one go, with a few tweaks to the settings* and CyberRealistic2.0 as the checkpoint.

*Lower res for a start! I've got 12GB of vRAM and that was still falling over.
View attachment 2593888
Does the amount of VRAM affect the quality, or just the speed of iteration?
 

Nitwitty

Member
Nov 23, 2020
354
196
Neither, although I suppose sort of both. :WaitWhat:
It affects how high a resolution you can go without having to use tiling, and how many separate images you can create at once.
Well, until I can upgrade I'm stuck with my 4GB 1050 Ti. I'm considering upgrading to a new Intel Arc A770 16GB for around $340, or a used 3090 for $700, but that's a lot of money for a used card. The A770 looks more attractive price-wise; with the April 2023 drivers and the OpenVINO software it's supposed to perform very well for the money.
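A rough yardstick for why resolution is what VRAM limits: SD 1.x works in a latent space downsampled 8x with 4 channels, so memory grows with latent area (and batch size). This sketch only sizes the latent tensor itself at fp16 — real VRAM use is dominated by UNet activations, which also scale with latent area, so treat the number as a relative yardstick, not a budget:

```python
def latent_shape(width: int, height: int, batch: int = 1) -> tuple:
    """SD 1.x latents are downsampled 8x with 4 channels, so a
    512x512 image becomes a (batch, 4, 64, 64) tensor."""
    return (batch, 4, height // 8, width // 8)

def latent_megabytes(width: int, height: int,
                     batch: int = 1, bytes_per_el: int = 2) -> float:
    """Size of one latent tensor in MB (fp16 by default). Doubling
    both image dimensions quadruples it, as it does the activation
    memory that actually fills your card."""
    b, c, h, w = latent_shape(width, height, batch)
    return b * c * h * w * bytes_per_el / 2**20
```

This is why a 4GB card copes at 512x512 but needs tiling for large images, while batch size multiplies everything linearly.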
 

Nitwitty

Member
Nov 23, 2020
354
196
This is going to seem like a stupid question, but is it okay to post nude images?
Can you teach me that, please! This time don't overdo it, although this example isn't too bad at all, even if it's overdone. Can you do more examples, please, with full nudes in, say, a beach setting? And what settings are needed to get good results?
Also, can you walk me through the installation process? There are just so many details to keep track of.
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
Well, until I can upgrade I'm stuck with my 4GB 1050 Ti. I'm considering upgrading to a new Intel Arc A770 16GB for around $340, or a used 3090 for $700, but that's a lot of money for a used card. The A770 looks more attractive price-wise; with the April 2023 drivers and the OpenVINO software it's supposed to perform very well for the money.
If you're intending to use it for SD, I would recommend going Team Green, since most of SD is written for the CUDA ecosystem. Some work has been done on getting AMD GPUs to play too, but I've not heard much about anyone making it work with Intel Arc.
Unless your Python's really good and you can apply (or even develop) some manual workarounds, in which case that high VRAM will be great.
I've got an RTX 3060 12GB which takes less than a minute per high-res image; decent value at ~$330.
 

Nitwitty

Member
Nov 23, 2020
354
196
If you're intending to use it for SD, I would recommend going Team Green, since most of SD is written for the CUDA ecosystem. Some work has been done on getting AMD GPUs to play too, but I've not heard much about anyone making it work with Intel Arc.
Unless your Python's really good and you can apply (or even develop) some manual workarounds, in which case that high VRAM will be great.
I've got an RTX 3060 12GB which takes less than a minute per high-res image; decent value at ~$330.
That is what OpenVINO is for. Intel created it specifically so you can use the Arc GPUs with Stable Diffusion and other AI software. That's why the Arc A770 16GB with the April 2023 drivers is suddenly very attractive at $340. I'm referring to the Acer BiFrost Arc A770 GPU, by the way. Recent benchmarks with the new drivers put it well ahead of the 3060 at the moment. Yes, I was considering the 3060 12GB, but it's just not there performance-wise, even at the cheap price. I want the most bang for my buck.
 

Nitwitty

Member
Nov 23, 2020
354
196
Okay, I'm officially putting this out there...
I want to know the best way of converting DAZ scene renders to something either semi-realistic or realistic in Stable Diffusion. I know that ControlNet is the way to go, and someone has suggested specifically using the HED and Depth models. I'm going to experiment to try to get the best results, and I might post some examples here. But since most of you are already way ahead of me, I was wondering if you could post some examples. Before-and-after shots would be useful.

Thanks.
 