I tried those, and eventually switched to i2i, which I found far superior.
The issue with latent compose is that, say, a coconut in the background image gets developed into a face, in addition to the face developing from the foreground image. Then you have all these extra faces and people polluting your image, so the reject rate is quite high.
I attached a sample for latent compose. These were made by compositing the foreground latent of a woman onto the latent of a beach, then developing them together. The problem is that you only get lucky very infrequently (compared to i2i), and with some backgrounds the success rate drops to something like 0.0001%.
So I ran with that workflow for a while, long enough to embrace i2i as the superior workflow.
PS. I came to the conclusion that I am better off generating the foreground pixels and the background separately, compositing them together as pixels, and then running i2i on the result to bake in the look. That's how much better i2i is than latent compose. Also, in this case your source pixels can be waaay under-developed and stay at ~5-8 steps rather than 20.
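For anyone who wants to script this instead of wiring up nodes, here's a rough sketch of the idea. It assumes the diffusers library and an SD 1.5 checkpoint; the model id, prompts, paste coordinates, and the 0.5 strength are just placeholders, not the exact settings I used:

```python
# Sketch: generate under-developed foreground + background, compose as pixels,
# then run i2i over the composite to harmonize ("bake the look").
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

device = "cuda"
model_id = "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint

txt2img = StableDiffusionPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to(device)

# Under-developed sources: a handful of steps is enough, i2i finishes them later.
background = txt2img("a sunny beach, waves, sand", num_inference_steps=8).images[0]
foreground = txt2img("a woman standing, full body", num_inference_steps=8).images[0]

# Compose in pixel space. A real workflow would cut the subject out with a mask;
# a plain paste at arbitrary coordinates is shown here just for brevity.
composite = background.copy()
composite.paste(foreground.resize((256, 384)), (128, 96))

# Bake the look: i2i over the composite blends lighting and style.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to(device)
result = img2img(
    prompt="a woman standing on a sunny beach",
    image=composite,
    strength=0.5,            # enough denoise to blend, not enough to lose the layout
    num_inference_steps=20,
).images[0]
result.save("composed_i2i.png")
```

The strength value is the knob to play with: too low and the seams stay visible, too high and the composition you pasted together gets re-imagined.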
[Attachment 3262341: sample result from latent compose]