[Stable Diffusion] Prompt Sharing and Learning Thread

modine2021

Member
May 20, 2021
417
1,389
The issue also happens to me with some extensions. Move them to a backup folder and run without any extensions, just to see if an extension is the cause. If the icons come back, one of the extensions has outdated CSS.
Using the browser inspector is also a good way to see errors on the page and understand what is breaking the CSS.
Yes, extensions. I moved some out and it worked. Then it happened again. Moved more out and it worked, then it happened again on restart. Moved them all out and it worked, then moved them all back and it worked for a while. Then it happened again. Damn annoying. Is there a limit to how many extensions you can have??
 

me3

Member
Dec 31, 2016
316
708
I've been trying to get some kind of "cabaret stage" type background on this for a day or two now; anybody else want to try?

View attachment 3256695
I haven't looked at the prompt, so sorry if you've already done this, but referencing a real-life cabaret seems to work. Assuming you're looking for some kind of "old French cabaret", you can try something like "at Moulin Rouge" or some other reference to it. It seemed to work in the few tests I did; it might not be fully "on the stage", but there were several elements you'd expect in the background.
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I found an excellent video tutorial about changing the background of an existing image.
He goes over two methods.
I highly recommend getting the two extensions in method no. 2. They create a mask for you in a very fast and convenient way.
You then send it to inpaint and "outpaint" the background.
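
For anyone who prefers scripting it, here is a minimal sketch of that same "mask the subject, repaint the background" step using the diffusers library instead of the webui. The model id and file names (portrait.png, subject_mask.png) are placeholders, and the subject mask is assumed to come from something like Inpaint Anything:

```python
import torch
from PIL import ImageOps
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

# Placeholder model id; any SD inpainting checkpoint should work.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("portrait.png").resize((512, 512))
subject_mask = load_image("subject_mask.png").convert("L").resize((512, 512))
# Invert so the subject is protected and only the background is repainted.
background_mask = ImageOps.invert(subject_mask)

out = pipe(
    prompt="cabaret stage background, red curtains, spotlights",
    image=image,
    mask_image=background_mask,
).images[0]
out.save("new_background.png")
```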

Inpaint anything 1.png Inpaint anything 2.png
 

devilkkw

Member
Mar 17, 2021
324
1,097
Yes, extensions. I moved some out and it worked. Then it happened again. Moved more out and it worked, then it happened again on restart. Moved them all out and it worked, then moved them all back and it worked for a while. Then it happened again. Damn annoying. Is there a limit to how many extensions you can have??
As far as I know, it's not about the number of extensions but the order they're loaded in. Some extensions have annoyingly outdated CSS that you need to fix manually; using the browser inspector and looking at the errors is helpful for finding what's causing the issue.
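
If you want to bisect the extensions instead of shuffling folders by hand, a hypothetical helper like this can disable half of them at a time; the install paths are assumptions for a typical AUTOMATIC1111 setup:

```python
import shutil
from pathlib import Path

# Assumed paths; adjust to your own install location.
EXT = Path.home() / "stable-diffusion-webui" / "extensions"
BACKUP = Path.home() / "stable-diffusion-webui" / "extensions_disabled"
BACKUP.mkdir(exist_ok=True)

exts = sorted(p for p in EXT.iterdir() if p.is_dir())
half = exts[: len(exts) // 2]

# Disable half of the extensions by moving them out of the extensions dir.
for p in half:
    shutil.move(str(p), str(BACKUP / p.name))

print("Disabled:", ", ".join(p.name for p in half))
print("Restart the webui; if the UI is fixed, the culprit is in the disabled half.")
```

Repeat on whichever half still shows the broken CSS until you're down to one extension.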
 

Synalon

Member
Jan 31, 2022
225
663
I haven't looked at the prompt, so sorry if you've already done this, but referencing a real-life cabaret seems to work. Assuming you're looking for some kind of "old French cabaret", you can try something like "at Moulin Rouge" or some other reference to it. It seemed to work in the few tests I did; it might not be fully "on the stage", but there were several elements you'd expect in the background.
I've tried referencing the Crazy Horse cabaret, also "cabaret-inspired background" and "on stage at a cabaret".

Plus a few others.
 

me3

Member
Dec 31, 2016
316
708
I've tried referencing the Crazy Horse cabaret, also "cabaret-inspired background" and "on stage at a cabaret".

Plus a few others.
Looking at the filename, is the "midas" a reference to using a MiDaS depth map?
I've had some bad experiences with ControlNet really messing with background elements; in many cases it completely prevented background changes. So it could be worth running a quick check with it disabled, if you haven't already done so.
The other option is that the model is just unfamiliar with the concept. I'll have a go at it and see if I can get some "on stage performances" :)
 
  • Like
Reactions: Mr-Fox

Synalon

Member
Jan 31, 2022
225
663
Looking at the filename, is the "midas" a reference to using a MiDaS depth map?
I've had some bad experiences with ControlNet really messing with background elements; in many cases it completely prevented background changes. So it could be worth running a quick check with it disabled, if you haven't already done so.
The other option is that the model is just unfamiliar with the concept. I'll have a go at it and see if I can get some "on stage performances" :)
It is the depth map, but I always do that as a last step after generating an image, before calling it done.
I was having trouble getting a cabaret-type background even before that.
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
It is the depth map, but I always do that as a last step after generating an image, before calling it done.
I was having trouble getting a cabaret-type background even before that.
I've not spent any time trying to fix the various faults in these; all were generated with the same simple prompt but different models.
No idea if this is the sort of "scene" you're after; maybe I've been shooting at the wrong goal.

_Cab (1).jpg
_Cab (2).jpg _Cab (3).jpg _Cab (4).jpg

 

Synalon

Member
Jan 31, 2022
225
663
I've not spent any time trying to fix the various faults in these; all were generated with the same simple prompt but different models.
No idea if this is the sort of "scene" you're after; maybe I've been shooting at the wrong goal.

View attachment 3259916
View attachment 3259915 View attachment 3259914 View attachment 3259913

Something like this, yeah, without the crowd in sight and with minimal people to fix.
Usually a cabaret is on a stage though, not in a ballroom, and with some kind of stage lights, spotlights.

I had something like this video in mind.
 
  • Like
Reactions: Mr-Fox

me3

Member
Dec 31, 2016
316
708
Something like this, yeah, without the crowd in sight and with minimal people to fix.
Usually a cabaret is on a stage though, not in a ballroom, and with some kind of stage lights, spotlights.

I had something like this video in mind.
I suspect the AI is a bit unfamiliar with what would be considered "on stage"; it's probably too specific/technical a term.
The best option might be to find relatively clean images of what you want, cutting things out if need be; even pasting things together can work.
Then use those images to create a background.
I doubt depth mapping will work (at least in terms of flexibility), so maybe try something like IP-Adapter masking/layering for the background, or you could try latent composition: either using the images as "seeds" for a generation, or converting the image directly if it's good/clean enough. Running the composition through sampling should let you control some of the style and let things "melt" together.
At least with those methods you can force the background and direct it more like you want.
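
For the latent composition idea, here is a rough, unverified sketch with diffusers: encode both images to latents, paste a region of the foreground latent into the background latent, decode, then run the composite through img2img so things melt together. The model id, file names, paste region and strength are all assumptions:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Placeholder model id; any SD 1.5 checkpoint should behave similarly.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

fg = load_image("foreground.png").resize((512, 512))
bg = load_image("background.png").resize((512, 512))

def encode(img):
    # VAE-encode a PIL image to a scaled latent (64x64 for a 512x512 input).
    t = pipe.image_processor.preprocess(img).to("cuda", torch.float16)
    with torch.no_grad():
        return pipe.vae.encode(t).latent_dist.sample() * pipe.vae.config.scaling_factor

fg_lat, bg_lat = encode(fg), encode(bg)

# Paste the centre of the foreground latent into the background latent.
bg_lat[:, :, 16:48, 16:48] = fg_lat[:, :, 16:48, 16:48]

# Decode the raw composite back to pixels...
with torch.no_grad():
    decoded = pipe.vae.decode(bg_lat / pipe.vae.config.scaling_factor).sample
composite = pipe.image_processor.postprocess(decoded)[0]

# ...then let img2img re-sample it so the seams and styles melt together.
result = pipe(
    prompt="woman on a cabaret stage, spotlights",
    image=composite,
    strength=0.55,
).images[0]
result.save("latent_composite.png")
```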
 
  • Like
Reactions: Mr-Fox

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I suspect the AI is a bit unfamiliar with what would be considered "on stage"; it's probably too specific/technical a term.
The best option might be to find relatively clean images of what you want, cutting things out if need be; even pasting things together can work.
Then use those images to create a background.
I doubt depth mapping will work (at least in terms of flexibility), so maybe try something like IP-Adapter masking/layering for the background, or you could try latent composition: either using the images as "seeds" for a generation, or converting the image directly if it's good/clean enough. Running the composition through sampling should let you control some of the style and let things "melt" together.
At least with those methods you can force the background and direct it more like you want.
Have you tried latent composition? It sounds interesting. :) I have seen it here and there at a glance but haven't tried it.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,571
3,768
Have you tried latent composition? It sounds interesting. :) I have seen it here and there at a glance but haven't tried it.
I tried those, and eventually switched to i2i as being far superior.

The issue with latent compose is that, say, a coconut in the background image gets developed into a random face, in addition to the foreground image being developed into the face you actually want. Then you have all these extra faces and people polluting your image. So there is quite a high percentage of rejects.

I attached a sample of latent compose. These were made with the foreground latent of a woman placed onto the latent of a beach, then developed together. The problem is you get lucky very, very infrequently (comparatively), and furthermore, with some backgrounds the success ratio ends up at something close to 0.0001%.

So, I ran with that workflow for a while, long enough to embrace i2i as the superior workflow.

PS. I came to the conclusion that I am better off separately generating the foreground and background pixels, composing them together as pixels, then running i2i on the result to bake the look. That's how much better i2i is than latent compose. Also, in this case your pixels can be way underdeveloped and stay at ~5-8 steps rather than 20.
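
For reference, that "compose as pixels, then i2i" workflow can be sketched roughly like this in diffusers. The file names and model id are placeholders, and the foreground is assumed to be a cutout PNG with transparency:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Placeholder inputs: the cutout's alpha channel is used as the paste mask.
bg = Image.open("beach.png").convert("RGB").resize((512, 512))
fg = Image.open("woman_cutout.png").convert("RGBA").resize((512, 512))
bg.paste(fg, (0, 0), mask=fg)  # alpha-composite the cutout onto the background

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Moderate strength keeps the composition while re-rendering lighting and
# style. The source pixels can be underdeveloped (few sampling steps),
# since i2i repaints them anyway.
out = pipe(
    prompt="photo of a woman on a beach, golden hour",
    image=bg,
    strength=0.45,
).images[0]
out.save("baked.png")
```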

a_03359_.png
 
Last edited:

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I tried those, and eventually switched to i2i as being far superior.

The issue with latent compose is that, say, a coconut in the background image gets developed into a random face, in addition to the foreground image being developed into the face you actually want. Then you have all these extra faces and people polluting your image. So there is quite a high percentage of rejects.

I attached a sample of latent compose. These were made with the foreground latent of a woman placed onto the latent of a beach, then developed together. The problem is you get lucky very, very infrequently (comparatively), and furthermore, with some backgrounds the success ratio ends up at something close to 0.0001%.

So, I ran with that workflow for a while, long enough to embrace i2i as the superior workflow.

PS. I came to the conclusion that I am better off separately generating the foreground and background pixels, composing them together as pixels, then running i2i on the result to bake the look. That's how much better i2i is than latent compose. Also, in this case your pixels can be way underdeveloped and stay at ~5-8 steps rather than 20.

View attachment 3262341
So... the truth finally comes out. You are cheating. :p :LOL:
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
I was just teasing, in case someone is wondering. :giggle:

Seph has his own distinct, original style and always produces jaw-dropping stuff. :)
 
Last edited:
Jan 30, 2023
17
7
Looking for help merging two faces to create one reusable face.
I tried to train DreamBooth with about 30 images, 15 of each face, but I don't have enough VRAM and it crashes. I have a 1660 Ti in my laptop.

I want to use two faces... of women I know, and merge them together to create a totally new face. I don't want to use either of their actual faces for obvious reasons...

I could maybe try to create a LoRA, but I'm having difficulty.

Any tips on merging two faces, or training on two faces?

I currently use ReActor with great results, but I want a unique female face.
 
Jan 30, 2023
17
7
Have you tried generating images that include both in the prompt (assuming they are somewhat famous and recognized by the AI): (Margot Robbie:0.6, Scarlett Johansson:0.7)
Prompts included in this quickly generated image:
View attachment 3265668
Ahh, thanks for the tip, but they're not famous. I'm familiar with the method you mentioned. I guess I need a better video card so I can run DreamBooth...
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
Ahh thanks for the tip, but their not famous.. i'm familiar with the method you mentioned. I guess i need a better video card so I can run dreambooth...
An alternative is to use IP-Adapter in ControlNet. You would need to use one "unit" for each face and then set the control weight, starting control step, etc., to mix them. I have no idea if it works well, but it's worth a try.
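
If you end up in diffusers instead of the webui, a rough equivalent of the one-unit-per-face idea is its multi IP-Adapter support. This is only a sketch: the model id, face image files and scales are assumptions, and I haven't verified that loading the face adapter twice blends identities well:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

# Placeholder base model; any SD 1.5 checkpoint should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load two IP-Adapter "units", one per face, like two ControlNet units.
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder=["models", "models"],
    weight_name=[
        "ip-adapter-plus-face_sd15.safetensors",
        "ip-adapter-plus-face_sd15.safetensors",
    ],
)
pipe.set_ip_adapter_scale([0.6, 0.7])  # rough analogue of control weights

face_a = load_image("face_a.png")  # placeholder reference photos
face_b = load_image("face_b.png")

img = pipe(
    prompt="portrait photo of a woman, studio lighting",
    ip_adapter_image=[face_a, face_b],
).images[0]
img.save("merged_face.png")
```

Playing with the two scales should shift the merged face toward one reference or the other, much like adjusting control weights in the webui.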