[Stable Diffusion] Prompt Sharing and Learning Thread

Nitwitty

Member
Nov 23, 2020
360
202
I worked on the same idea a while ago. I see you are using a high denoise, so the image changes a lot.
This was made using only a TI (textual inversion) embedding without any prompt, and a negative: (poor quality:1.2), featureless, lowres, monochrome, grayscale, bad proportions, ugly, doll, plastic_doll, silicone, anime, cartoon, fake, filter, airbrush, (kkw-ph-neg3:1.002), Asian, (fake), (3D), (render), (kkw-ultra-neg:1.005)

CFG: 30, Denoise: 0.28.
View attachment 2595813

The sampler you use for this type of work is important; DPM++ 2M Karras is good for realism, and sometimes Heun works better.
Another trick is to make the source image slightly lower resolution (around 8-10% smaller) than the resolution you want to generate, keeping the aspect ratio, so the image gets slightly upscaled in img2img and you pick up more detail. Remember to select an upscaler for img2img in the settings tab.
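For anyone scripting this outside the webui, a rough diffusers sketch of those settings might look like the following (the poster used A1111, so this is only an approximation; the checkpoint, embedding file and image paths are placeholders, and A1111-style (token:1.2) weights would need an add-on such as compel, so plain tags are used):

```python
import torch
from PIL import Image
from diffusers import DPMSolverMultistepScheduler, StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# DPM++ 2M Karras, the sampler recommended above for realism
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
# Load a textual-inversion embedding and reference it from the prompt (placeholder file)
pipe.load_textual_inversion("my_character_ti.pt", token="my-character-ti")

# The slight-upscale trick: the source render is ~8-10% smaller than the target,
# same aspect ratio, and gets resized up before the pass. A1111 would use the
# upscaler chosen in the img2img settings; plain Lanczos stands in for it here.
init = Image.open("daz_render.png").convert("RGB")        # e.g. 928x1392
init = init.resize((1024, 1536), Image.LANCZOS)           # target resolution

out = pipe(
    prompt="my-character-ti",                     # TI token only, no other prompt
    negative_prompt="poor quality, featureless, lowres, monochrome, grayscale, "
                    "bad proportions, ugly, doll, anime, cartoon, fake, 3D, render",
    image=init,
    strength=0.28,                                # low denoise keeps the composition
    guidance_scale=30,                            # CFG 30, as in the post
    num_inference_steps=30,
).images[0]
out.save("img2img_pass.png")
```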

But what is your goal: keeping the image consistent, or something else?
I tried every which way to get a good result. I had a lot of trouble with img2img at first, so I went with txt2img... then back to img2img. I had better results with lower denoise settings. Still learning what the best method is. Can't imagine the work involved in converting a regular DAZ visual novel to photorealistic. It would take weeks even with the right technique, or certainly several days at least with batch processing on the right hardware.
 
  • Like
Reactions: Mr-Fox

Nitwitty

Member
Nov 23, 2020
360
202
View attachment 2595979

The trouble is when you need to redo the background and things like the fence: you need more denoise, and here it took two passes and merging them afterwards.
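A minimal sketch of the two-pass-and-merge idea, assuming you have already saved one low-denoise pass (characters intact) and one high-denoise pass (background rebuilt), plus a hand-drawn character mask (all file names are placeholders):

```python
from PIL import Image

low = Image.open("pass_low_denoise.png")    # e.g. denoise ~0.25-0.3, characters kept
high = Image.open("pass_high_denoise.png")  # e.g. denoise ~0.6+, background/fence redone
mask = Image.open("character_mask.png").convert("L")  # white = keep the low-denoise pass

# Where the mask is white the characters from the low-denoise pass are kept;
# everywhere else the rebuilt background from the high-denoise pass shows through.
Image.composite(low, high, mask).save("merged.png")
```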

Fast demo:

View attachment 2596005
Nice job on the background here. Another pass would have to be done for the characters, as you say, even though I can see the characters were updated slightly. This is all a lot of work, I'm learning lol.
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
Nice job on the background here. Another pass would have to be done for the characters, as you say, even though I can see the characters were updated slightly. This is all a lot of work, I'm learning lol.
Don't forget to read the guides, tutorials and tips in this thread as you learn. You can find links on the first page, and you can also search for my own and Sepheyer's posts, as well as Jimwalrus's and Schlongborn's.
Devilkkw also gives a lot of good tips in various posts. Apologies if I forgot anyone.
 

ZephyrionPW3D

New Member
May 4, 2023
5
3
View attachment 2595979

The trouble is when you need to redo the background and things like the fence: you need more denoise, and here it took two passes and merging them afterwards.

Fast demo:

View attachment 2596005
Great results! Actually, inpainting the bodies/faces separately might give better results (with "Only masked" selected as the inpaint area). Also, adding ControlNet (like canny) might help retain proper details.
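As a hedged sketch of that combination in diffusers (checkpoint, file names and prompt are placeholders; note that A1111's "Only masked" mode crops to the masked region and inpaints it at full resolution, which this simple version does not replicate):

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

image = Image.open("scene.png").convert("RGB")
mask = Image.open("face_mask.png").convert("L")        # white = area to repaint

# Canny edges of the original image guide the inpaint so structure is retained
edges = cv2.Canny(np.array(image), 100, 200)
canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="photorealistic face, detailed skin",
    image=image,
    mask_image=mask,
    control_image=canny,
    strength=0.75,
    num_inference_steps=30,
).images[0]
out.save("inpainted.png")
```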
 
  • Like
Reactions: Mr-Fox and Nitwitty

Nitwitty

Member
Nov 23, 2020
360
202
Great results! Actually, inpainting the bodies/faces separately might give better results (with "Only masked" selected as the inpaint area). Also, adding ControlNet (like canny) might help retain proper details.
The results I got were with txt2img using the controlnet softedge HED model. Then I experimented with various checkpoints. I got good results with HyperV1 and EdgeofRealism.
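For reference, a sketch of that txt2img + softedge workflow in diffusers, assuming the controlnet_aux package for the HED preprocessor (the base SD 1.5 model and the prompt are placeholders standing in for checkpoints like HyperV1 or EdgeofRealism):

```python
import torch
from PIL import Image
from controlnet_aux import HEDdetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# HED soft edges extracted from the DAZ render keep its pose and composition
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
control = hed(Image.open("daz_render.png").convert("RGB"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_softedge", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="photorealistic portrait, natural skin texture",
    image=control,                  # conditioning image for the ControlNet
    num_inference_steps=30,
).images[0]
out.save("photoreal.png")
```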
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
The results I got were with txt2img using the controlnet softedge HED model. Then I experimented with various checkpoints. I got good results with HyperV1 and EdgeofRealism.
Something I don't think has been mentioned here is that some checkpoints have inpaint versions, meaning the checkpoint also has an inpaint merge. On civitai, each checkpoint page lists its different versions; look there for the inpaint versions. Sometimes there are instruct versions as well; these are for pix2pix.
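In diffusers terms, an inpaint version of a checkpoint is loaded with the inpainting pipeline rather than the regular one. A minimal sketch, assuming a single-file inpaint checkpoint downloaded from civitai (the file name is a placeholder, and from_single_file requires a reasonably recent diffusers version):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# An "inpaint" checkpoint version from civitai, not the regular checkpoint
pipe = StableDiffusionInpaintPipeline.from_single_file(
    "someCheckpoint_v10-inpainting.safetensors", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="photorealistic hand, five fingers",
    image=Image.open("scene.png").convert("RGB"),
    mask_image=Image.open("hand_mask.png").convert("L"),   # white = repaint
).images[0]
out.save("fixed.png")
```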
 
  • Like
Reactions: Sepheyer

Nitwitty

Member
Nov 23, 2020
360
202
Something I don't think has been mentioned here is that some checkpoints have inpaint versions, meaning the checkpoint also has an inpaint merge. On civitai, each checkpoint page lists its different versions; look there for the inpaint versions. Sometimes there are instruct versions as well; these are for pix2pix.
For now, I'm just experimenting to see what the potential of SD and AI art in general is. I'll get more serious once I can afford to get a new card. With the dismal condition of the GPU market right now it's hard to figure out which card I should get. It's a pathetic mess because of Nvidia's greed. But eventually I will get a new card, and I will learn about inpainting then. It takes a while to render just one picture on the 4GB 1050 Ti that I currently have.
 
  • Like
Reactions: Sepheyer and Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
For now, I'm just experimenting to see what the potential of SD and AI art in general is. I'll get more serious once I can afford to get a new card. With the dismal condition of the GPU market right now it's hard to figure out which card I should get. It's a pathetic mess because of Nvidia's greed. But eventually I will get a new card, and I will learn about inpainting then. It takes a while to render just one picture on the 4GB 1050 Ti that I currently have.
Until then, look into hires fix with a tiled upscale and other upscaling with tiling. There are extensions that include a "tiled VAE"; it can help with low VRAM when upscaling.

Also use the arguments "--lowvram" and "--xformers" in webui-user.bat.
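For reference, a typical webui-user.bat for a low-VRAM card might look like this (exactly which flags help will depend on the card and the A1111 version):

```
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--lowvram --xformers

call webui.bat
```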
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,575
3,776
I'll ask here as I don't recall if there is a novel-AI thread. I am probably repeating this question, but the first time around it didn't really go anywhere.

As of today, would anyone know of a competent desktop-based AI capable of doing adult novels or at least two-three page long scenes?

I used one (Character Assistant) but the devs changed how it self-censors, and the dialog just gets cleaned the moment things get spicy. I either have to turn on video capture or keep pressing print-screen to see what the AI wrote before it self-deletes.

There are these various chat AIs, but they don't self-direct to produce an actual novel. Here is the kind of novel that Character Assistant slaps together in a jiffy (example posted in a spoiler).
I know there are "open source" desktop-based models hitting the market, but I just haven't gotten to them yet. So, if anyone has actual experience getting a bot to write an adult novel, please stand up.
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
Just art for the money! I need it, and it's an upgrade from my old Ryzen + 3070.
What test?
See how high a resolution you can generate with SD and also how fast the generation is, etc. What is the limit of this card with SD?
 
  • Like
Reactions: Sepheyer

Elefy

Member
Jan 4, 2022
239
923
00018-1771487417.png

20 minutes from the first txt2img try to the final tile upscale. Multiple trials and errors :D

for testing:
Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 4248416645, Size: 1366x2048, Model hash: 4199bcdd14, Model: revAnimated_v122
Time taken: 1m 5s
Torch active/reserved: 13227/16938 MiB, Sys VRAM: 20536/24564 MiB (83.6%)

No module 'xformers' just stock A1111

00083-367040277.png

Steps: 30, Sampler: Euler a, 1024x1024 txt2img - Time taken: 13.35s

Never used hires fix, and I don't know how to set it right :rolleyes:
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,804
View attachment 2600558

20 minutes from the first txt2img try to the final tile upscale. Multiple trials and errors :D

for testing:
Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 4248416645, Size: 1366x2048, Model hash: 4199bcdd14, Model: revAnimated_v122
Time taken: 1m 5s
Torch active/reserved: 13227/16938 MiB, Sys VRAM: 20536/24564 MiB (83.6%)

No module 'xformers' just stock A1111

View attachment 2600605

Steps: 30, Sampler: Euler a, 1024x1024 txt2img - Time taken: 13.35s

Never used hires fix, and I don't know how to set it right :rolleyes:
I have made many posts about how to use hires fix: https://f95zone.to/search/269515972/?q=hires+fix&t=post&c[thread]=146036&c[users]=Mr-Fox&o=relevance
Select "hires fix" of course, use 20-30 sample steps, use 2x the hires steps relative to the sample steps (40-60) to keep the composition, and use approximately 0.3 denoising strength. The sampler and upscaler make a huge difference; I use either DPM++ SDE Karras or DPM++ 2M Karras, and cfg 8-10, but you need to figure this one out for yourself depending on the prompt and the concept. Most of the time I use GFPGAN in postprocessing, often in combination with GFPGAN face restoration, but not always.
Then the resolution. This is where you will find the limit of the card. First, use a sample resolution of less than 1024; in most cases this reduces multiple characters, conjoined twins and other monstrosities. Then use as high an "upscale by" (hires upscaling) factor as your card will allow. When you get a CUDA error message you have found the limit; reduce it slightly to consistently avoid the error. These settings give me consistently very high image quality results. Of course the prompt is what determines the quality of the composition and the concept, in combination with the cfg scale and the denoising strength.
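Outside A1111, hires fix is roughly a two-stage process: generate at a base resolution under ~1024, upscale, then run a low-denoise second pass. Below is a hedged diffusers approximation of the settings above (the model, prompt and 2x factor are placeholders, and A1111 would use its selected upscaler instead of a plain resize):

```python
import torch
from PIL import Image
from diffusers import (
    DPMSolverMultistepScheduler,
    StableDiffusionImg2ImgPipeline,
    StableDiffusionPipeline,
)

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# DPM++ 2M Karras
txt2img.scheduler = DPMSolverMultistepScheduler.from_config(
    txt2img.scheduler.config, use_karras_sigmas=True
)

prompt = "photo of a woman in a garden, natural light"

# First pass: keep the sample resolution under 1024 to avoid duplicated characters
base = txt2img(
    prompt=prompt,
    negative_prompt="lowres, bad anatomy",
    width=512, height=768,
    num_inference_steps=25,        # 20-30 sample steps
    guidance_scale=8,              # cfg 8-10
).images[0]

# Second pass: upscale 2x (push the factor as high as VRAM allows) and refine at
# ~0.3 denoising strength so the composition is preserved.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components).to("cuda")
hires = img2img(
    prompt=prompt,
    image=base.resize((base.width * 2, base.height * 2), Image.LANCZOS),
    strength=0.3,
    num_inference_steps=50,        # ~2x the sample steps, as suggested above
    guidance_scale=8,
).images[0]
hires.save("hires.png")
```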
 

KingBel

Member
Nov 12, 2017
429
3,493
Try adding "featureless, colourless" to your negative. It's not specific to the above render, but it makes a difference to all renders, especially if you want a bit more detail.
 
  • Like
Reactions: Jimwalrus