[Stable Diffusion] Prompt Sharing and Learning Thread

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
And again suuuuper weird stuff. Got this one:
View attachment 3001083

Now I wanted to upscale it, denoiser is set to 0,2 and same seed of course, this is what's currently in the making:

View attachment 3001084


Why do I get so wildly different stuff out?
Very odd.
Your first one should have only given you the same image, simply upscaled (denoising was set to 0)

The second should only have made very small changes as your denoising was set to 0.02, not the 0.2 you stated.

I'll take a look at the gen data in PNGInfo later this afternoon when I get a chance to fire up my PC, maybe run some tests and see what's going on.
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
[Quoting Fuchsschweif's post above.]
Why do I get so wildly different stuff out?
OK, for the second one first:

The original image was already upscaled to 1024x1024, but with None as the Upscaler and a Denoising strength of 0.7. That produced the first image, but when you reran it you used completely different settings: for the re-run you were generating at 1024x1024 to begin with, then upscaling to 2048x2048 with 4xUltrasharp.
SD effectively treats each block of 512x512 (or part thereof) as one image and stitches them together, so your second image is really four generations at once, stuck together as best it can. No wonder it's an eldritch horror!
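A rough way to picture this (illustrative only; `native_blocks` is a made-up helper, not part of any SD codebase):

```python
import math

def native_blocks(width, height, native=512):
    """Rough count of native-resolution regions the model has to fill.

    SD 1.x checkpoints were trained on ~512x512 crops, so composition tends
    to break down once the canvas spans several native blocks (duplicated
    heads, stitched-together scenes). This is an illustrative heuristic,
    not an exact description of the sampler's internals.
    """
    return math.ceil(width / native) * math.ceil(height / native)

print(native_blocks(512, 512))    # 1 block: safe
print(native_blocks(1024, 1024))  # 4 blocks: composition starts to wander
print(native_blocks(2048, 2048))  # 16 blocks: eldritch-horror territory
```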

What you want to do is generate at 512x512 (or 512xwhatever to give a more appropriate aspect ratio for the subject) without ANY HiResFix at all.

Once again: Run a load of initial gens without HiResFix, select the seeds you like the best (and prompts if you're tweaking those too), then rerun them from scratch with the settings as previously recommended: i.e. Original resolution (512xWhatever), ESRGAN_4x as your Upscaler, Denoising strength at 0.2 to 0.35, HiRes steps at least 1.5x the number of generation steps.

N.B. You do NOT need to get the image to the final desired resolution at this stage, you can make it bigger and sharper much more quickly using the Upscalers in Extras.

Once you've got the basics, then it's time to experiment to find the settings you like the best.

Also, Denoising strength greatly affects how much vRAM you use - with a 1070 I'll pretty much guarantee you won't be able to HiResFix images to 2048x2048 in a one-er with a worthwhile denoising strength (and you shouldn't even need to try). Make them at 512x512 then HiResFix upscale to 832x832 or similar - keep an eye on Task Manager to see your vRAM usage.
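A back-of-the-envelope sketch of why resolution hits vRAM so hard (the quadratic-attention model here is a simplification, and `attention_cost` is a hypothetical helper):

```python
def attention_cost(width, height, base=512):
    """Relative self-attention cost vs. a 512x512 generation.

    SD 1.x works in a latent space downscaled 8x, and the most expensive
    self-attention layers scale roughly with the square of the number of
    latent tokens. Illustrative only: real vRAM use also depends on the UI,
    attention optimizations (xformers etc.) and batch size.
    """
    tokens = (width // 8) * (height // 8)
    base_tokens = (base // 8) ** 2
    return (tokens / base_tokens) ** 2

print(attention_cost(832, 832))    # ~7x a 512x512 gen
print(attention_cost(2048, 2048))  # 256x: why a 1070 gives up
```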

Take great care with the parameters you choose - you seem to be putting some weird settings in some places.
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
For the first one (sorry to do these in reverse order):

Your denoising strength was 0 - basically a waste of time, but it should have just given you the original image without changes other than it being four times the number of pixels.
Looks to me like it's one of those odd things that SD sometimes does.

Also, for me, 4xUltrasharp is not the best for HiResFix. It's good for upscaling in Extras, but I've always been disappointed with it in HiResFix.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
You need to do things in a methodical and step by step way.

I have been skim reading through all the posts and you are shotgunning things too much, jumping around from topic to topic. Settle down and take a deep breath. Now start from the beginning. If the image you get in txt2img is distorted or deformed, there's no point in moving forward. It would require way too much fixing with inpaint and is simply not worth the hassle or the time. Instead you need to figure out why you get a bad image in the first place, and adjust the prompt and settings.

Yes, do normal generations first, meaning no hiresfix. Then when you get an image that is not distorted, re-use the same seed and prompt but activate hiresfix; this will do the upscaling and add more detail. This is enough most of the time. If you want or need to fix a minor detail after this, you go to inpaint and fix it. If after it's fixed it doesn't have a nice transition, meaning it looks copy/pasted, you do an img2img generation with a very low denoising strength. This will make the hand (for instance) that you fixed look natural and not copy/pasted.

When you ask for help, it's no use posting the distorted image that you got after a botched img2img or inpaint generation. Give us the image from txt2img instead. It's impossible to fix an image with a floating head etc.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
For your first project with SD, start way simpler. Don't do a complicated pose with fingering and/or other interactions; these are very complicated and difficult even for an experienced creator.
Start with the very basics: "A beautiful woman standing". That's it. If you can't get the simplest thing right, how are you going to do anything more complicated?
Generate the first image. What's wrong with it? Write those things, the things you don't want, in the negative prompt.
Typically: distorted hand, fused fingers, extra fingers/hands/legs etc.
Then generate again. Adjust the settings, the number of steps, cfg etc. Generate again. Be methodical, step by step.

When you have an image that looks good, you activate hiresfix. Press the green recycle button to re-use the seed. Set the upscaler to 4x UltraSharp and the denoising to 0.2-0.4; I would recommend 0.3. Then generate. Now hopefully you have something decent. Post it and be happy. Next time try something slightly more challenging, maybe specific clothing or a slightly more complicated pose, still standing. Lying down or sitting is more complicated and difficult to get right, and not something you should attempt until you have at least a few more projects completed.
 

Fuchsschweif

Active Member
Sep 24, 2019
986
1,563
For your first project with SD, start way simpler. Don't do a complicated pose with fingering and/or other interactions; these are very complicated and difficult even for an experienced creator.
Start with the very basics: "A beautiful woman standing". That's it. If you can't get the simplest thing right, how are you going to do anything more complicated?
Who said I can't get the most simple? I can easily generate "a beautiful woman standing". I am posting the things here I actually have problems with, because that's the next step I want to reach. I already told you that I'm experienced when it comes to prompt engineering, since I've been using MJ for a long time. My current struggle is understanding how to get the best out of the upscalers, so that I have good-looking, sharp and detailed results!

If the image you get in txt2img is distorted or deformed there's no point in moving forward. It would require way too much fixing with inpaint and is simply not worth the hazzle or time.
What if I like everything on the picture except a single thing? Then that's where inpaint comes into play.. this is the whole appeal of it, isn't it? I got a nice pose, nice face, angle, everything, but let's say a hand is off, or a boob or a foot. Then I can fix that little detail with inpaint instead of rolling the dice 10 times again trying to be lucky so that SD doesn't mess anything up.

I think that's a way more precise way of working rather than just re-generating until something good comes out by luck.


Generate the first image. What's wrong with it? write these things in the negative prompt, the things you don't want.
Typically distorted hand, fused fingers, extra fingers/hands/leg etc.
Then generate again. Adjust the settings, the amount of steps cfg etc. Generate again... Be methodical.. Step by step.
SD sometimes simply ignores these negative prompts. My negative prompts contain things like:

"crooked fingers, weird hands, ugly hands, unproportional hands, more than 5 fingers per hand" and so on. I gave them weight, braces, but SD sometimes still messes these things up. I know how to use negative prompts; that was something I used a lot on MJ too.

But when SD still messes up the hands and I've got a picture that's close to perfect, then inpaint should provide the little fix I'm looking for. That's why I've been asking about the inpaint settings. I can already easily generate normal stuff, so my next steps are to

a) learn proper use of inpaint

b) learn how SD processes upscaling


According to a post above, SD allegedly upscaled my picture during generation even though I had the upscaler set to "none". So I have to check whether I need to turn other sliders down to 0, despite having specified that I don't want to upscale.
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
[Quoting Fuchsschweif's post above.]
According to a post above, SD allegedly upscaled my picture already in the making while I had the upscaler set to "none". So I have to look if I have to turn other faders down to 0 despite having specified that I don't want to upscale..
Nothing "alleged" about it, it did! This showed in the PNGInfo. Also, the image was 1024x1024. Had it not been upscaled it would have been 512x512.
The Upscaler used was None, but it still upscaled it. To switch the HiResFix off, you need to set the ratio to 1, Upscaler to None and the Denoising strength to 0. It used to be a tick box, but it seemed to untick/retick itself randomly, so I guess it was junked around 1.5.0.

We don't mean to be condescending; none of us here are experts, and the mistakes you're making are very basic ones that we all made during the early stages. Experience with MJ doesn't help much there, even sometimes with prompting, as SD works differently under the hood.

Hands & feet are SD's real weakness; negative prompts are not really able to fix much with them. There are some negative embeddings available on Civitai that can help, as can ADetailer. Unfortunately though, some inpainting may be required for an image that is otherwise perfect but has an extra finger etc.
 

devilkkw

Member
Mar 17, 2021
308
1,053
Your captioning is wrong for the images that's wearing the tshirt.
You need to tell the AI that the person is wearing the clothing, not a tshirt. So you need to say that the person is wearing your trigger word.
It's probably much better to just use images of ppl wearing the clothing as well, since it's very unlikely that you'll use it in any other situation.
Thanks for your answer. So you think, for example, a good caption would be:
"man wearing blue cloth1"?

Either your lora or the model used for the images has a horrible issue with oversaturating. It's very obvious in the image of the girl, but it's pretty clear in the other ones as well. Given the images below as well it seems to be the model that's performing very badly.
The oversaturation is because I used CFG 22. I know it's a high CFG and gives wrong results, but with some LoRAs I've downloaded, going over 15 produces no real image, only crap colors, as if generation stopped at step 1. So I tested at high CFG to see whether the LoRA I made gives a result or has the same problem I described.

And no, steps (as in image count x number of repeats) and epoch values aren't interchangeable in that way.
50 steps x 10 epochs is in no way the same training as 100 steps x 5 epochs



There's a lot that happens at the end of each epoch: things get "reset/restarted", things get "written", etc.
Not sure how to best explain this....
The more you cycle through your images per epoch, the more detail gets picked up; that "learned detail" gets used for further learning, more "finetuning" in the next epoch, to either correct wrongs or improve.
While it might not be 100% accurate, "loss" can be considered a representation of the difference between what the AI expected and the actual value.
So a lower loss suggests the AI predicted things closer to what actually happened.

So for most things you want the majority of your total step count to come from images x repeats, with a relatively low epoch count. Mostly you shouldn't need more than 5-10 epochs, the exception generally being style.
I just tested it, and with 5 epochs I reach a loss of 0.05. Seems good, but the LoRA is really overtrained. Maybe finding a good step value per epoch is the way; I just need to try.
Btw, with 1 epoch and 100 steps it seems to get a closer result, and the loss stays around 0.12.
Every epoch made the loss go down (from what I saw during training) but gave a really oversaturated result, even at low CFG.
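The step/epoch arithmetic from the advice above can be sketched like this (illustrative only; the field names follow no particular trainer):

```python
def training_schedule(num_images, repeats, epochs, batch_size=1):
    """Total optimizer steps and per-epoch steps for a LoRA run.

    The same total step count can behave very differently: end-of-epoch
    events (checkpoint saves, dataset reshuffles, some schedulers) fire
    once per epoch, so 50 steps x 10 epochs is not the same run as
    100 steps x 5 epochs even though both total 500 steps.
    """
    steps_per_epoch = num_images * repeats // batch_size
    return {"steps_per_epoch": steps_per_epoch,
            "total_steps": steps_per_epoch * epochs,
            "epoch_boundaries": epochs}

print(training_schedule(10, 5, 10))  # 50/epoch, 500 total, 10 boundaries
print(training_schedule(10, 10, 5))  # 100/epoch, 500 total, 5 boundaries
```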

About style training, what settings do you suggest to start experimenting with?
 

Fuchsschweif

Active Member
Sep 24, 2019
986
1,563
For your first project with SD, start way simpler. Don't do a complicated pose with fingering and/or other interactions; these are very complicated and difficult even for an experienced creator.
Start with the very basics: "A beautiful woman standing". That's it. If you can't get the simplest thing right, how are you going to do anything more complicated?
Jimwalrus

As proof of what I wrote above, here's a generation. These are the prompts:


Positive: 1 girl in a black matrix coat, standing on a rooftop, upper body shot closeup:1.5, cinematic shot, golden sidelight on her face:1.4, foggy atmosphere, neon glowing in the back, rainy day, cloudy, cyberpunk cityscape in the back, blade runner style, cyberpunk style, serious look, rough and moody atmosphere, gritty style, photoshooting

Negative: digital painting, crooked hands, off proportions, multiple persons


This is the result: 00056-2549670335.png


So that's a pretty neat outcome! She's wearing the black Matrix-like coat as intended; there's the golden side lighting on her face (because I gave it weight), the cyberpunk cityscape in the back, rain, a foggy and rough atmosphere, and also the close-up upper body shot instead of a panorama upper body shot. I had to refine the braces and weights to get this.
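For anyone curious how the "(text:1.4)" weight syntax breaks down, here's a minimal sketch of a parser (simplified and hypothetical; the real webui parser also handles nesting and the bare-parentheses shorthand, where each "( )" multiplies attention by 1.1):

```python
import re

def parse_weights(prompt):
    """Extract (text:weight) spans from an A1111-style prompt.

    Minimal sketch: only handles the explicit "(text:1.4)" form, not
    nested parentheses, "[text]" de-emphasis, or "(text)" shorthand.
    """
    return {m.group(1).strip(): float(m.group(2))
            for m in re.finditer(r"\(([^():]+):([\d.]+)\)", prompt)}

p = "(upper body shot closeup:1.5), cinematic shot, (golden sidelight on her face:1.4)"
print(parse_weights(p))
# {'upper body shot closeup': 1.5, 'golden sidelight on her face': 1.4}
```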

As you see, prompt engineering isn't my issue.

But what I can't wrap my head around is how to upscale this without getting some wildly different results. Because if I do this:


When you have an image that looks good you activate hiresfix. Press the green recycle button to re-use the seed. Set the upscaler to 4x ultrasharp and the denoising to 0.2-0.4 . I would recomend 0.3 . Then generate. Now hopefully you have something decent.
I get this result:

1697205546122.png


That was with the denoising strength set to 0.02 and the exact seed and prompts from the picture above. But SD composes something completely different. What did I do wrong?
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
[Quoting Fuchsschweif's post above.]
That was with denoising strength set to 0,02 and the exact seed and prompts from the picture above. But SD composes something completely different. What did I do wrong?
I'm away from my PC atm, so can't test anything, but please note "0.02" is NOT the same as 0.2!
It seems to be an error you make a lot. I don't think it's causing this, but it does affect things.
 

Fuchsschweif

Active Member
Sep 24, 2019
986
1,563
I'm away from my PC atm, so can't test anything, but please note "0.02" is NOT the same as 0.2!
It seems to be an error you make a lot. I don't think it's causing this, but it does affect things.
I know; I set it to such a low number so that I could expect 99% the same result, while giving SD a tiny bit of room to change minor details if necessary. I haven't played around with the values enough yet to see how much "room" it needs, but as you said, with 0.02 we could've expected to get almost exactly the same picture.
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
Without putting it forward as a fix for this issue, I'd recommend turning CLIP Skip off. It's really best to only use it with LoRAs where its use is suggested, or as a last resort if you just can't get the damn thing to create what you want.
 

Fuchsschweif

Active Member
Sep 24, 2019
986
1,563
Without putting it forward as a fix for this issue, I'd recommend turning CLIP Skip off. It's really best to only use it with LoRAs where its use is suggested, or as a last resort if you just can't get the damn thing to create what you want.
Thanks, yeah I changed that in the beginning when I installed the first Lora and it was recommended. Set it back to 1 now but yeah, unfortunately it's not the fix.
 

Jimwalrus

Active Member
Sep 15, 2021
930
3,423
Thanks, yeah I changed that in the beginning when I installed the first Lora and it was recommended. Set it back to 1 now but yeah, unfortunately it's not the fix.
AFAIK 'Off' is Zero not 1 for Clip Skip.
I don't use it much, so can't say for sure.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,531
3,618
I know, I set it to such a low number so that I could expect 99% of the same result with giving SD tiny room to change minor details if necessary. I haven't played around enough with the values yet to see how much "room" it needs, but as you said, with 0.02 we could've expected to get almost the exact same picture.
I think for upscaling the denoise should be closer to 50% rather than 2%. Try it in your setup, you might be surprised that the "change minor details" in practice starts being meaningfully affected after 25%-30% rather than 2%. I know it was the case in my setups and I was surprised realizing that the thing is rather progressive / exponential in nature.
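One concrete reason very low denoising behaves oddly: as far as I know, A1111's img2img scales the actual step count by the denoising strength by default, so a sketch of that mapping looks like:

```python
def effective_img2img_steps(sampling_steps, denoising_strength):
    """How many sampling steps img2img actually runs.

    Sketch of A1111's default behaviour as I understand it (there is a
    setting to run the full step count instead): the slider value is
    scaled by denoising strength, which is one reason very low strengths
    change almost nothing.
    """
    return int(sampling_steps * denoising_strength)

print(effective_img2img_steps(20, 0.02))  # 0: effectively a no-op
print(effective_img2img_steps(20, 0.2))   # 4
print(effective_img2img_steps(20, 0.5))   # 10
```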
 

Fuchsschweif

Active Member
Sep 24, 2019
986
1,563
AFAIK 'Off' is Zero not 1 for Clip Skip.
I don't use it much, so can't say for sure.
I think 1 is off; you can't set it to off completely. At 1 the tooltip says "ignores no layers".
I can check that back later when I use SD again.
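For reference, "Clip skip = N" just means taking the text encoder's Nth-from-last layer output; a minimal sketch with a hypothetical helper (the real webui also re-applies the final layer norm after skipping):

```python
def select_clip_layer(hidden_states, clip_skip=1):
    """Pick the CLIP text-encoder layer used for conditioning.

    clip_skip counts layers from the end: 1 means use the final layer
    (the default, skipping nothing), 2 means use the second-to-last,
    which some anime-derived models were trained against.
    """
    return hidden_states[-clip_skip]

layers = ["layer1", "layer2", "layer11", "layer12"]  # hypothetical outputs
print(select_clip_layer(layers, clip_skip=1))  # layer12 (default)
print(select_clip_layer(layers, clip_skip=2))  # layer11
```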

I think for upscaling the denoise should be closer to 50% rather than 2%. Try it in your setup, you might be surprised that the "change minor details" in practice starts being meaningfully affected after 25%-30% rather than 2%.
I'll play around with that once I can get SD to re-create my seeds, right now I get completely different results..
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,794
Workflow Example

Prompt:

a beautiful woman standing, beautiful eyes, detailed

neg: fused hands, blurry

checkpoint: revAnimated
-----------------------------------------------------------
(no loras or embedings etc, nada.)

After a few generated images:

00047-3287071526.png

Now press the green recycle button. Activate hiresfix and select UltraSharp, set 0.3 denoising.
The first image is blurry but looks the same otherwise, so I add "blurry" to the negative and generate again with hiresfix.
The next image changes completely to a different one. This is because I didn't set the hiresfix steps. In order to keep the composition, use double the number of sample steps. Meaning 20 sample steps times 2 = 40.
Now I'm generating again with 40 hires steps.

00048-3287071526.png
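The settings from this first step, collected into one hypothetical helper (the values are this thread's rules of thumb, not defaults):

```python
def hires_settings(sampling_steps, multiplier=2.0):
    """Hires-fix settings that tend to preserve composition.

    Rule of thumb from this thread: give hires fix its own step count of
    about double the sampling steps (leaving it at 0 reuses the sampling
    step count, which can shift the image), keep denoising around 0.2-0.4,
    and reuse the seed. Illustrative sketch, not hard rules.
    """
    return {"hires_steps": sampling_steps * 2,
            "denoising_strength": 0.3,
            "upscale_by": multiplier,
            "upscaler": "4x-UltraSharp"}

print(hires_settings(20))
```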

Next Project.

a beautiful Asian woman standing, beautiful eyes, detailed.
wearing a cropped top and short shorts, hand on hip.

neg:fused hands, blurry

First image: the eyes are a bit wonky and the fingers are bad.
I re-use the same seed now by pressing the green recycle button; before, I used random (-1).
I add "symmetry" to pos and "asymmetrical" to neg.
It can be good to use a static seed when adjusting and adding things to the prompt to see the effect, though sometimes no changes happen until you use a new seed.
OK, now I have a decent image.
00050-1149865719.png

I activate hiresfix again and use the same settings, meaning 40 hires steps and 0.3 denoising with the UltraSharp upscaler.
00051-1149865719.png
It's not 100 percent perfect but it's decent enough for this "little" tutorial...

Next Project

we add a background and get more specific with the prompt; we also start to organize it better.

a beautiful Asian woman standing, beautiful eyes, smile.
wearing a lace thong, topless, hand on hip.
busty, big nipple, curvy,
bedroom with curtains.
detailed, symmetry.

neg:asymmetrical, fused hands, blurry

The first image is pretty good but has things that need to be improved.
I add more tags to the negative and generate new images after every adjustment. You can add several tags in one go if it's for the same problem, such as hands or fingers. Sometimes I use a static seed when I adjust the prompt to see the effects, but if nothing changes I try a new seed.
I got a pretty nice image, but the hands are not good even after the added tags and several images, so I try the extension named After Detailer (ADetailer) with the hand model and a negative prompt for the hands. Now the image is better. Time to activate hiresfix.
It's best to be methodical: add or adjust one thing at a time, and introduce extensions or additions such as LoRAs only for a specific intended result and as needed. Starting out by adding a lot of things without knowing what will happen is the same as throwing everything at the wall and hoping it will stick; it's guaranteed to fail.

After I generated the image with hiresfix the hands were not good, so I go back to normal generation, in other words hunting for a better seed. I get a better image with better success potential because the fingers are more hidden.
Again I use hiresfix.
00068-1982251251.png
Pretty nice image but not perfect. The hands can be fixed with inpaint, but I want to finish this tutorial today...
So I only give it a quick go. Press send to img2img inpaint tab (the color palette button).
Remove everything in the pos prompt and add only "detailed hand, detailed fingers". Mask one hand. Set the denoise to 0.2 and "masked only".
Very important: don't use a static seed, set it to -1. Press generate. It wasn't enough, so increase the denoising to 0.3.
Still wasn't enough. Increase the denoising to 0.4 and generate again. Slightly better, but it can be improved more, so I increase the steps as a test. There we go. Decent enough.

00012-2796811266.png

Conclusion

prompt:

a beautiful Asian woman standing, beautiful eyes, smile.
wearing a lace thong, topless, hand on hip.
busty, big nipple, curvy.
bedroom with curtains.
detailed, symmetry.

neg:
ugly, asymmetrical, bad anatomy, poorly drawn.
deformed iris, deformed pupil, cross eyed, lazy eyed.
fused hands, deformed hand, fused fingers, deformed fingers, (extra fingers).
blurry image.
cape.

Used ADetailer with hand model with inpainting denoise 0.3 and neg prompt:
bad hands, poorly drawn hands, deformed hand, deformed fingers, extra fingers, fused fingers.

hiresfix, upscaler 4x_Ultrasharp, 40 hires steps, denoising strength 0.3, multiplier: 2
---------------------------------------------------------------------------------------------------
Fixed hands with inpainting

inpaint prompt "detailed hand, detailed fingers"

only masked padding, pixels 48
inpaint area, masked only
sampling steps 26
denoising strength 0.4
seed: -1

You can see everything else in PNG Info tab with each image.
I hope this was helpful and I didn't miss anything.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,531
3,618
[Quoting Mr-Fox's Workflow Example above.]
You are in that sushi mood, lol!