Quick question (another one, I know): I'm struggling to get decent photo-realistic images out of ComfyUI, and I think I'm still missing something in my workflow. When I look at models and LoRAs posted on [link], I can see all the details they (apparently) used to create a given picture: positive prompt, negative prompt, CFG and seed values, as well as the sampler/scheduler used.
But when I try to reproduce some of them in ComfyUI, my results are way off from what is posted.
Example: I've been trying to reproduce something like [link], but no matter what workflow I build (using the same checkpoints and LoRAs as listed there), the generated images come out blurry (especially after upscaling) and don't look realistic at all.
Are the images posted on [link] post-processed through something like Photoshop?
What additional (essential) nodes do I need to add to my workflow to make the results crisper and more realistic?
This is what my current (basic) workflow looks like:
There are a couple of bypassed nodes in there that I was experimenting with, but without success.
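For reference, the kind of second-pass setup I've been experimenting with is the usual "hires fix" idea: upscale the first pass's latent, then run a second KSampler at low denoise so the upscaled image gets re-detailed instead of just interpolated. A minimal sketch in ComfyUI's API prompt format, built as a Python dict (the node ids, the connections to nodes "4"/"6"/"7", and the parameter values are illustrative assumptions, not my exact graph):

```python
def hires_fix_nodes(base_sampler_id="3", scale=1.5, denoise=0.45):
    """Return illustrative ComfyUI API-format nodes for a latent-upscale
    + low-denoise refine pass appended after a first KSampler."""
    return {
        "10": {  # upscale the first pass's latent output
            "class_type": "LatentUpscaleBy",
            "inputs": {
                "samples": [base_sampler_id, 0],
                "upscale_method": "nearest-exact",
                "scale_by": scale,
            },
        },
        "11": {  # second sampler pass; low denoise keeps the composition
            "class_type": "KSampler",
            "inputs": {
                "model": ["4", 0],        # assumed checkpoint loader id
                "positive": ["6", 0],     # assumed positive prompt id
                "negative": ["7", 0],     # assumed negative prompt id
                "latent_image": ["10", 0],
                "seed": 42,
                "steps": 20,
                "cfg": 7.0,
                "sampler_name": "dpmpp_2m",
                "scheduler": "karras",
                "denoise": denoise,
            },
        },
    }
```

The denoise value is the knob that matters most here: too high and the second pass changes the image, too low and the blur from the upscale survives.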
This is what I want to achieve (all credit for this image goes to [link] on [link]!):
but this is what I get (using the exact same checkpoint, LoRA, prompts, CFG and seed values, the same sampler/scheduler, and a square image):
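One way I've been double-checking the posted settings is to read the metadata embedded in the reference PNG itself: ComfyUI stores its full graph in the "workflow"/"prompt" text chunks, and A1111-style images use a "parameters" chunk, so if those chunks are missing the posted image may well have been re-saved or post-processed. A minimal sketch with Pillow (the file name is just a placeholder):

```python
from PIL import Image

def read_generation_metadata(path):
    """Return the text metadata stored in an image file (for PNGs this
    includes tEXt chunks such as 'parameters' or ComfyUI's 'workflow')."""
    with Image.open(path) as img:
        return dict(img.info)

# Example usage (hypothetical file):
# meta = read_generation_metadata("reference.png")
# print(meta.get("parameters") or meta.get("workflow") or "no metadata found")
```

If the dict comes back without any of those keys, comparing settings against the posted image is already a lost cause, because the file no longer carries them.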
Any help would be much appreciated!