Honestly, looking through your post history, I'm impressed you've stuck with ComfyUI as long as you have. Most people try it out and bail because, honestly, it's fucking hard. I'll give you some pointers, since it's a LOT to take in, but once you understand the basics you'll be able to generate good images.
Looking at your workflow, you've got the basics down. I notice you're using a Lightning model. In short, a Lightning model generates images faster with fewer steps, but there's a cost for the faster generation (I don't remember exactly what the troll takes from you). Which is fine, but you'll want to lower your steps to around 8-14 (look up what the model recommends in its description), since doing more only adds time to your generations, and time is gold in the world of AI.
Also, you'll want to change your sampler_name to dpmpp_2m, since it handles realistic images better than euler, and keep the scheduler on karras, since the dpmpp samplers work well with that scheduler. There are guides that explain which samplers to use with which scheduler, but since you're learning, just keep it simple for now.
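If it helps to see it all in one place, here's roughly what those settings look like in ComfyUI's API-format JSON export (the workflow export option for scripting). The node IDs and the upstream connections ("4", "6", etc.) below are made up for illustration; yours will differ depending on your graph.

```python
# Sketch of a KSampler node in ComfyUI API-format JSON, with the settings
# discussed above. Node IDs and link targets are placeholders.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 123456789,
            "steps": 10,                # 8-14 range for a Lightning model
            "cfg": 4.5,                 # 4-5 helps avoid the burned look
            "sampler_name": "dpmpp_2m",
            "scheduler": "karras",
            "denoise": 1.0,
            "model": ["4", 0],          # link to your checkpoint loader node
            "positive": ["6", 0],       # link to your positive prompt node
            "negative": ["7", 0],       # link to your negative prompt node
            "latent_image": ["5", 0],   # link to your empty latent node
        },
    }
}
```

You don't have to touch JSON to use any of this; it's just the same knobs you see on the KSampler node in the UI.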
Another thing with your image: you see how it's kinda burned/overexposed? That usually means the model had a hard time creating the image, which could be a million things, but the most common are: 1) CFG is too high. Bring it down to 4-5 instead of 8; this gives the model more freedom to interpret your prompt, though it might not follow what you write as closely. 2) The checkpoint/model isn't really trained on what you're prompting. The keyword I noticed was "beautiful older woman"; I use "mature woman" and it tends to work better, but a lot of the models on Civitai are trained on younger women instead, so maybe try a different model to see if you get better results? Prompting in itself is a whole other talent and will drastically change your images.
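For the curious: the CFG number is literally a multiplier inside the sampler. Classifier-free guidance takes the model's prediction without your prompt and its prediction with your prompt, and pushes the result away from the first and toward the second. A minimal single-value sketch (real samplers do this on whole latent tensors, not single floats):

```python
def cfg_mix(uncond_pred, cond_pred, cfg_scale):
    """Classifier-free guidance: push the denoiser's output away from the
    unconditional prediction and toward the prompt-conditioned one.
    cfg_scale = 1.0 means 'just use the conditional prediction'; higher
    means 'follow the prompt harder', and too high overshoots, which is
    part of why images come out burned/over-saturated."""
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

# Toy example: the two predictions disagree by 0.2 at one 'pixel'
print(cfg_mix(0.5, 0.7, 1.0))  # -> 0.7, pure conditional prediction
print(cfg_mix(0.5, 0.7, 8.0))  # high CFG overshoots well past both values
```

That overshoot is why dropping CFG from 8 to 4-5 calms the image down.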
Besides those things, you should be able to generate decent images with the workflow you have. Your only limitations will be your GPU, how much VRAM you have, and of course your time.
To answer your question about fingers: it depends... sometimes you get lucky and the model produces a good image with good hands/fingers that don't look like some creature from the deep. Here are two images I created a while back with only a text prompt.
View attachment 4821028
View attachment 4821030
I want to say I generated at least 10 images, and out of those only 3-4 had usable hands. So yes, you can avoid the finger curse, but what most people do is fix the hands later down the line with inpainting or detailer nodes. That's a more advanced concept though, and personally, if I were you, I'd spend more time learning how prompts work and how to get images you're happy with before getting into fixing small parts of an image.
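For when you do get to inpainting: the core idea is just "regenerate only the masked region, then blend it back over the original". A minimal sketch of that masked blend, using flat lists of pixel values purely to show the idea (real inpainting/detailer nodes do this per-channel across full images, plus the actual regeneration step):

```python
def composite(original, regenerated, mask):
    """Keep the original pixels where mask == 0, take the newly generated
    pixels where mask == 1; soft mask values in between blend the two.
    This is the compositing half of inpainting -- the untouched parts of
    your image stay pixel-identical."""
    return [o * (1 - m) + r * m
            for o, r, m in zip(original, regenerated, mask)]

# Fix only the 'hand' region (last two pixels) and keep the rest
original    = [0.2, 0.4, 0.9, 0.9]
regenerated = [0.5, 0.5, 0.3, 0.2]
mask        = [0.0, 0.0, 1.0, 1.0]
print(composite(original, regenerated, mask))  # -> [0.2, 0.4, 0.3, 0.2]
```

That's why inpainted fixes don't disturb the parts of the image you already like.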
Edit: I was curious whether I could recreate what you're trying to generate, and the results weren't too bad. I couldn't figure out how to correctly fix the hands, since the workflow I use didn't account for both hands; I usually only have one hand to fix lol. But as you can see, I started with a pretty crap image and slowly iterated to what I wanted it to be. Left → right: 1) initial text-prompt image (really fugly), 2) img2img workflow to add some details before fixing hands and face, 3) face fixer and hands fixer... kinda. I still haven't learned how to fix hands, especially realistic hands.
View attachment 4821124