[Stable Diffusion] Prompt Sharing and Learning Thread

felldude

Active Member
Aug 26, 2017
549
1,571
Using the merge I made and messing about with the sampler and scheduler and some prompts I managed to get slightly better images.
I don't know if the attention implementation would affect image quality, but did you compare your iterations per second vs. xFormers?
 

Synalon

Member
Jan 31, 2022
224
661
I don't know if the attention implementation would affect image quality, but did you compare your iterations per second vs. xFormers?
It seems slower overall now; it was taking roughly 320 seconds for 4 images before, and now it's taking 1200 seconds.
 

felldude

Active Member
Aug 26, 2017
549
1,571
It seems slower overall now; it was taking roughly 320 seconds for 4 images before, and now it's taking 1200 seconds.
Interesting, thanks for testing. In my tests I saw no speed difference on any model, but I do have the cuDNN files in the Comfy Torch build.

Good to know xFormers works for an FP8 model, at least when it's upcast with BF16.
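For anyone who wants to run the same comparison, iterations per second is easy to measure with a small timing helper. This is just a sketch, not tied to any particular sampler; `step_fn` stands in for whatever one sampler step is in your setup:

```python
import time

def iterations_per_second(step_fn, n_steps: int = 20) -> float:
    """Time n_steps calls to a sampler step and return iterations/sec."""
    start = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    elapsed = time.perf_counter() - start
    return n_steps / elapsed

# Run once per attention backend (xFormers, PyTorch SDPA, ...) and compare
# the numbers; higher it/s means the backend is faster for your model.
```
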
 
  • Like
Reactions: Synalon

Markbestmark

Member
Oct 14, 2018
290
313
I can't remember tbh, it was a long time since I experimented with it.

This was only a test I did to see how it would work with faceswap:

Source (right click and set to loop) | The Result (right click and set to loop)
View attachment 3589592 View attachment 3589593

I made this by using the batch function in img2img. First open the source in Photoshop, crop it to the right resolution, and trim the length of the video. Then export it as video frames (images) and put these in an input folder. Now you can use these video frames as input/source images in img2img and make whatever changes you wish, including using ControlNet. Save the output images in an output folder. Then you need to put it back together into a video again; I used Flowframes, which can also double the fps if you wish.
Tutorial:

Source (right click and set to loop) | The Result (right click and set to loop)
View attachment 3589551 View attachment 3589555

Keep in mind that converting the files to webm decreases the quality.
I think for the faceswaps it's easier to just use FaceFusion, and the result should be good. I do love how you made video2video by actually building a frame animation, though. I was trying to do something similar, but only with cartoons, and my results are not stable at all. Could you give me some tips? test_animation.gif
View attachment 1st_PASS_00041.mp4
View attachment 1st_PASS_00039.mp4
tom_holland_legolas_youtube_00115_.png tom_holland_legolas_youtube_00114_.png
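One gotcha worth flagging in the frames workflow described above: the img2img batch processes frames in filename order, so the exported frames need zero-padded names or the reassembled video comes out scrambled. A minimal sketch (the naming scheme and folder argument are just examples, not from any particular tool):

```python
from pathlib import Path

def frame_name(index: int, width: int = 5, ext: str = "png") -> str:
    # Zero-pad the index so lexicographic order matches frame order
    # (frame_00002 sorts before frame_00010, unlike frame_2 vs frame_10).
    return f"frame_{index:0{width}d}.{ext}"

def ordered_frames(folder: str, ext: str = "png") -> list:
    # Collect exported frames in the order img2img should process them.
    return sorted(Path(folder).glob(f"*.{ext}"))
```

With names like these, a plain alphabetical sort is enough for both the img2img batch and the reassembly step in Flowframes.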
 

Markbestmark

Member
Oct 14, 2018
290
313
I merged the Schnell and Dev FP8 models and made these renders as an experiment.


Edit: Randomly added new images.
Do you have any examples of animations for those pictures? Have you tried to make some? I'm searching as much as I can for a vid2vid or img2vid workflow that would be suitable for making games =D So it would be great if you could share some tips =P
 

Synalon

Member
Jan 31, 2022
224
661
Do you have any examples of animations for those pictures? Have you tried to make some? I'm searching as much as I can for a vid2vid or img2vid workflow that would be suitable for making games =D So it would be great if you could share some tips =P
I haven't tried to animate yet; I'm not good at creating workflows, and Flux uses some nodes I haven't seen before, so I don't know how to integrate them into an animation workflow yet.

If somebody is willing to make a workflow with Flux that can animate, and share it, I'll give it a try.

The alternative I have is to just load one of those images into AnimateDiff and use another checkpoint to animate, but that will probably lose the clarity.
 

Markbestmark

Member
Oct 14, 2018
290
313
1723327880494.png
I haven't tried to animate yet; I'm not good at creating workflows, and Flux uses some nodes I haven't seen before, so I don't know how to integrate them into an animation workflow yet.

If somebody is willing to make a workflow with Flux that can animate, and share it, I'll give it a try.

The alternative I have is to just load one of those images into AnimateDiff and use another checkpoint to animate, but that will probably lose the clarity.
That's basically what I was doing with my animation =D which is why my results aren't as clear as your pictures. I was hoping someone here could actually help me achieve better quality =)
 

Onetrueking

Member
Jun 1, 2020
145
517
Is it possible to train a style LoRA on these images if I have a lot of similar ones? I don't know much about training LoRAs, but I'm out of ideas about which LoRA I need to mix with the twilight concept art to get something like that.
0F19A9B7-923E-4985-BBB8-5AF0FBAED976.jpeg 64D19546-F60B-45DA-A727-3A32D29B5FBF.jpeg 9EC6B1A1-0D00-4106-A55B-63D8E7CA84C9.jpeg BCCFEC14-02F4-4C7A-A8B3-AA8E55796199.jpeg