By default, Stable Diffusion (assuming it's Stable Diffusion you're using, not Midjourney or DALL-E) generates a different character for every pose or seed, so in either case you'll need a strategy to keep the character consistent. If you're generating your images on a website, you'll run into more limitations, since I don't imagine they'll let you use ControlNet.
If you're using a website, one tip I can give you is to write down your hyperparameters: positive prompt, negative prompt, the model used to generate the images, sampler type, sampling steps, CFG (guidance) value, width, height and seed.
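To see why recording those values matters, here's a minimal sketch using the diffusers library (the model id, prompt and seed are just placeholder assumptions, not anything specific to your setup): with everything pinned, only the pose words in the prompt change between runs.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the same checkpoint your reference image was generated with (placeholder id here).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Re-seed the generator with the exact seed you wrote down.
generator = torch.Generator("cuda").manual_seed(1234)

image = pipe(
    prompt="portrait of a red-haired knight, standing pose",  # only the pose words change between runs
    negative_prompt="blurry, deformed, extra limbs",
    num_inference_steps=30,   # sampling steps
    guidance_scale=7.0,       # CFG value
    width=512,
    height=512,
    generator=generator,
).images[0]
image.save("knight_standing.png")
```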
So, the next time you want a new pose, change only the pose description in the prompt. Then, if you're not satisfied with the similarity, use face in-painting with your original image as input.
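Here's a rough sketch of that face in-painting step with diffusers' inpainting pipeline; the file names, mask and model id are assumptions (a white region in the mask marks the face area to regenerate):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The render with the new pose, plus a mask that is white only over the face.
init_image = Image.open("knight_new_pose.png").convert("RGB").resize((512, 512))
face_mask = Image.open("face_mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="portrait of a red-haired knight, detailed face",
    image=init_image,
    mask_image=face_mask,
    num_inference_steps=30,
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(1234),  # same seed you recorded
).images[0]
result.save("knight_new_pose_fixed_face.png")
```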
Now, if you're running Stable Diffusion yourself, you can reload your configs easily: open the PNG Info tab, load the generated image you're using as your reference (image zero), and click "Send to txt2img". Then use an OpenPose ControlNet (search Civitai for the model) for the new pose, and use ReActor to do the face in-paint automatically. Just be aware it will never be perfectly identical.
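Outside the web UI, the same OpenPose + ControlNet idea looks roughly like this in diffusers; the checkpoint ids here are the commonly used public ones, so swap in whatever you downloaded from Civitai (this sketch doesn't cover ReActor's face swap, which is a separate extension):

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from controlnet_aux import OpenposeDetector
from PIL import Image

# Extract a pose skeleton from your reference image.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_image = openpose(Image.open("reference_character.png"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # same base model as before
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="portrait of a red-haired knight, sitting pose",
    negative_prompt="blurry, deformed",
    image=pose_image,                 # the extracted skeleton drives the new pose
    num_inference_steps=30,
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(1234),
).images[0]
image.save("knight_sitting_openpose.png")
```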
Watch this video:
Notice how the Wolverine face changes: that's not a feature, it's just how Stable Diffusion works. You can search for more videos yourself; they use the AnimateDiff extension, and you need camera-angle tricks to keep the face visibly the same.
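For reference, the diffusers library also ships an AnimateDiff pipeline; a hedged sketch of generating a short clip looks like the following (the motion adapter, base model and prompt are all assumptions), and the same face-drift problem you see in those videos applies here too:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif

# Motion adapter plus an SD 1.5-style base checkpoint (both ids are assumptions).
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1,
)
pipe.to("cuda")

output = pipe(
    prompt="a red-haired knight turning his head, cinematic lighting",
    negative_prompt="blurry, deformed",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(1234),
)
export_to_gif(output.frames[0], "knight_turn.gif")
```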