[Stable Diffusion] Prompt Sharing and Learning Thread

Nitwitty

Member
Nov 23, 2020
360
202
Here's the same prompt with HyperV1. It also shows the difference in upscaling: both have the same seed, but the first was upscaled three times, while the second started from a slightly larger original and was upscaled twice to reach the same output size.

View attachment 2594159

View attachment 2594160
I think I like the HyperV1 a little better even though you had to upscale. Try EdgeofRealism and see what you get with the same prompt.
 

Nitwitty

Member
Nov 23, 2020
360
202
I'm keeping an eye on the Unstable Diffusion project to see if they are able to come up with more advanced diffusion models that are not filtered for NSFW and can possibly solve the problem with hands and feet. It's rare to find AI art of women with a good representation of feet and especially soles; you always have to jump through hoops to get decent results. I would also like to know more about creating consistent characters and how that is accomplished. I know it's done with ControlNet and other extensions, but has anybody had experience yet creating consistent output so as to tell a story with a series of pictures?
 

Nitwitty

Member
Nov 23, 2020
360
202
After some searching I found (checkpoint). It looks promising for this purpose. In the sample images you can see skin defects, wrinkles and peach fuzz (vellus hair); it even looks like it has skin pores.
View attachment 2592664

I'm downloading the model but what is this Facedetail Lora? Should I use both at the same time then?
 

Nitwitty

Member
Nov 23, 2020
360
202
It most certainly does - here's my attempt at generating it in one go, with a few tweaks to the settings* and CyberRealistic2.0 as the checkpoint.

*Lower res for a start! I've got 12GB of vRAM and that was still falling over.
View attachment 2593888
Does the amount of VRAM affect the quality or just the speed of iteration?
 

Nitwitty

Member
Nov 23, 2020
360
202
Neither, although I suppose sort of both. :WaitWhat:
It affects how high a resolution you can go without having to use tiling, and how many separate images you can create at once.
Well, until I can upgrade I'm stuck with my 4GB VRAM 1050 Ti. I'm considering upgrading to a new Intel Arc A770 16GB for around $340, or a used 3090 for $700. But that's a lot of money for a used card. The A770 looks more attractive price-wise; with the April 2023 drivers and the OpenVINO software it's supposed to perform very well for the money.
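As a back-of-envelope illustration of why VRAM limits resolution and batch size rather than quality (my own sketch, not something from this thread): SD denoises a 4-channel latent at 1/8 of the image resolution, so the latent tensor, and roughly the activations built on top of it, scale with width x height x batch size.

```python
def latent_elements(width: int, height: int, batch: int = 1) -> int:
    """Element count of SD's latent tensor: 4 channels at 1/8 resolution."""
    return batch * 4 * (width // 8) * (height // 8)

# Doubling both dimensions quadruples the latent (and roughly the VRAM pressure):
scale = latent_elements(1024, 1024) / latent_elements(512, 512)
print(scale)  # → 4.0
```

This only shows how memory pressure scales; the model weights themselves take a fixed chunk of VRAM on top, which is why 4GB cards struggle regardless of resolution.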
 

Nitwitty

Member
Nov 23, 2020
360
202
This is going to seem like a stupid question, but is it okay to post nude images?
Can you teach me that please? lol. This time don't overdo it, although this example isn't too bad at all even if it's overdone. Can you do more examples please, with full nudes in, say, a beach setting or something? And what settings are needed to create good results?
And can you walk me through the installation process? It's just so many details to keep track of.
 

Jimwalrus

Well-Known Member
Sep 15, 2021
1,045
3,993
Well until I can upgrade I'm stuck with my 4GB VRAM 1050 Ti. I'm considering upgrading to a new intel arc A770 16GB for around $340 or a used 3090 for $700. But that's a lot of money for a used card. The A770 looks more attractive price-wise. With the April 2023 drivers and OpenVino software it's supposed to perform very well for the money.
If you're intending to use it for SD I would recommend going Team Green, due to most of SD being written for the CUDA ecosystem. There's been some work done on getting AMD GPUs to play too, but I've not heard much about anyone making it work with Intel Arc.
Unless your Python's really good and you can apply (or even develop) some manual workarounds, in which case that high vRAM will be great.
I've got an RTX 3060 12GB which takes less than a minute per high-res image; decent value at ~$330.
 

Nitwitty

Member
Nov 23, 2020
360
202
If you're intending to use it for SD I would recommend going Team Green, due to most of SD being written for the CUDA ecosystem. There's been some work done on getting AMD GPUs to play too, but I've not heard much about anyone making it work with Intel Arc.
Unless your Python's really good and you can apply (or even develop) some manual workarounds, in which case that high vRAM will be great.
I've got an RTX 3060 12GB which takes less than a minute per high-res image; decent value at ~$330.
That is what OpenVINO is for. Intel created it specifically so you can use the Arc GPUs with Stable Diffusion and other AI software. That's why the Arc A770 16GB with the April 2023 drivers is suddenly very attractive at $340. I'm referring to the Acer BiFrost Arc A770 GPU, by the way. Recent benchmarks with the new drivers have it well ahead of the 3060 at the moment. Yes, I was considering the 3060 12GB, but it's just not there performance-wise, even at the cheap price. I want the most bang for my buck.
 

Nitwitty

Member
Nov 23, 2020
360
202
Okay, I'm officially putting this out there...
I want to know the best way to convert DAZ model scene renders to something either semi-realistic or realistic in Stable Diffusion. I know that ControlNet is the way to go, and someone has suggested specifically using the HED and Depth models. I'm going to be experimenting to try to get the best results, and I might post some examples here. But since most of you are already way ahead of me, I was wondering if you could post some examples here. Before and after shots would be useful.

Thanks.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I'm downloading the model but what is this Facedetail Lora? Should I use both at the same time then?
You can use it as you want. It's a Lora focused on face details. I have not had time to do much with SD for a while, so I can't make any recommendations; I simply found it by searching. You will need to run some tests yourself. I would suggest using the same prompt and seed and trying this Lora with different checkpoints and different weights. It's always best to exclude as many variables as possible when testing, so that you know for sure which thing is doing what.
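That kind of controlled test can be sketched as a simple job matrix: fix the prompt and seed, and vary only checkpoint and Lora weight. The checkpoint names, prompt, and weights below are placeholders for illustration, not recommendations:

```python
from itertools import product

# Fixed variables: one prompt, one seed, so only checkpoint and Lora weight vary.
PROMPT = "portrait photo of a woman, detailed skin"  # placeholder prompt
SEED = 12345

checkpoints = ["checkpoint_a", "checkpoint_b"]  # hypothetical checkpoint names
lora_weights = [0.4, 0.7, 1.0]

# One generation job per checkpoint/weight combination.
jobs = [
    {"checkpoint": ckpt, "lora_weight": w, "prompt": PROMPT, "seed": SEED}
    for ckpt, w in product(checkpoints, lora_weights)
]
print(len(jobs))  # → 6
```

In AUTOMATIC1111 the built-in X/Y/Z plot script does essentially this grid for you, which makes it easy to compare the results side by side.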
 

Nitwitty

Member
Nov 23, 2020
360
202
For the last several hours I've been learning about Stable Diffusion and how to convert a DAZ 3D image into something close to photo-realistic using AUTOMATIC1111 and various SD checkpoints. As you can see in the before-and-after example, the photo-realistic version is very dramatic in comparison, although there are still flaws, so it's not quite a success yet.
I just want to show you all what is possible. Credit to the developer Elsa94 and her work from the VN "Nudist School".

before
7uREj8.png

after:
00024-2489771122.png

Obviously it doesn't follow the original image exactly (especially the blue breasts). I'm still working on the technique. But imagine VNs getting upgraded in a similar manner.
 

KingBel

Member
Nov 12, 2017
426
3,380
For the last several hours I've been learning about Stable Diffusion and how to convert a DAZ 3D image into something close to photo-realistic using AUTOMATIC1111 and various SD checkpoints. As you can see in the before-and-after example, the photo-realistic version is very dramatic in comparison, although there are still flaws, so it's not quite a success yet.
I just want to show you all what is possible. Credit to the developer Elsa94 and her work from the VN "Nudist School".

before
View attachment 2595376

after:
View attachment 2595377

Obviously it doesn't follow the original image exactly (especially the blue breasts). I'm still working on the technique. But imagine VNs getting upgraded in a similar manner.
Saw your posts on Discord as well.. :) I suppose that if you were thinking of using this for a VN, you would need to be able to ensure consistency for your characters, and then be able to place said characters within the scene that you have pre-laid out in DAZ. Haven't tried this out myself yet, but you could possibly train a TI/Lora on the character from your VN and then place them in your scene using regional prompting.. not sure how you would incorporate that as well as a DAZ scene as a guide.. but it's a project worth exploring :)
 

devilkkw

Member
Mar 17, 2021
323
1,093
I took this prompt (thank you) and fixed a typo (I think row = raw) to test a new model I'm working on. I'm focused on not getting doubled figures at resolutions over 768; I've also made a modified DPM++ 2M Karras version for working at high CFG without too much aberration.

Full parameters are in the images (all made at 896x1152 and CFG 19.528).
[Three spoiler attachments with the full generation parameters]
I still have to fix some errors in the model, especially the anime one; it doesn't look very anime with certain prompts (I think the ones containing "raw photo").
 

devilkkw

Member
Mar 17, 2021
323
1,093
For the last several hours I've been learning about Stable Diffusion and how to convert a DAZ 3D image into something close to photo-realistic using AUTOMATIC1111 and various SD checkpoints. As you can see in the before-and-after example, the photo-realistic version is very dramatic in comparison, although there are still flaws, so it's not quite a success yet.
I just want to show you all what is possible. Credit to the developer Elsa94 and her work from the VN "Nudist School".

before
View attachment 2595376

after:
View attachment 2595377

Obviously it doesn't follow the original image exactly (especially the blue breasts). I'm still working on the technique. But imagine VNs getting upgraded in a similar manner.
I worked on the same idea a while ago. I see you are using high denoise, so the image changes a lot.
This was made using only a TI, without any prompt, and a negative: (poor quality:1.2), featureless, lowres, monochrome, grayscale, bad proportions, ugly, doll, plastic_doll, silicone, anime, cartoon, fake, filter, airbrush, (kkw-ph-neg3:1.002), Asian, (fake), (3D), (render), (kkw-ultra-neg:1.005)

CFG: 30, Denoise: 0.28.
d028.jpg

The sampler you use for this type of work is important. DPM++ 2M Karras is good for realism; sometimes Heun works better.
Another trick is to make the source image slightly lower resolution (around 8-10% smaller) than the resolution you want to generate, keeping the same aspect ratio, so the image gets slightly upscaled in img2img and you gain more detail. Remember to select an upscaler for img2img in the config tab.
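That sizing trick can be sketched numerically. This is my own illustration under a couple of assumptions: SD dimensions are usually kept at multiples of 8, and the ~9% shrink factor is just the middle of the 8-10% range mentioned above:

```python
def img2img_source_size(target_w: int, target_h: int, shrink: float = 0.09):
    """Source render size ~9% smaller than the target, same ratio, snapped to /8."""
    scale = 1.0 - shrink
    w = int(round(target_w * scale / 8)) * 8
    h = int(round(target_h * scale / 8)) * 8
    return w, h

# For an 896x1152 target, render the source slightly smaller so img2img upscales it:
print(img2img_source_size(896, 1152))  # → (816, 1048)
```

Feeding the 816x1048 source into img2img at the 896x1152 target then forces a small upscale, which is where the extra detail comes from.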

But what is your goal: keeping the image consistent, or something else?
 

Nitwitty

Member
Nov 23, 2020
360
202
Saw your posts on Discord as well.. :) I suppose that if you were thinking of using this for a VN, you would need to be able to ensure consistency for your characters, and then be able to place said characters within the scene that you have pre-laid out in DAZ. Haven't tried this out myself yet, but you could possibly train a TI/Lora on the character from your VN and then place them in your scene using regional prompting.. not sure how you would incorporate that as well as a DAZ scene as a guide.. but it's a project worth exploring :)
I don't think I would do it myself; I just wanted to see if it was possible. The artists here have better hardware and knowledge than me. I'm wondering if it's more efficient to use DAZ to lay out a scene and then use SD to finish it as a realistic render, or if working to make DAZ models more realistic is better. I mean as it pertains to making VNs from scratch.

For DAZ VN conversions though it will take everything you suggested. Regional prompting with Loras and everything else.
 