[Stable Diffusion] Prompt Sharing and Learning Thread

Sepheyer

Well-Known Member
Dec 21, 2020
Oh, so it turns out this is garbage. Apparently there's a model called RMBG-1.4 out there that does a much better job. Imma upload that workflow in a moment.

Ignore everything below this line.
---------
[spoiler: old workflow]
 

Sepheyer

Well-Known Member
Dec 21, 2020
A bro just updated his anime-2-realism lora for Qwen 2509:

It is kinda getting there.

Definitely entertaining to redo some of your old HS2/Daz smut into the "real-life" versions.

Some images lend themselves better to this process than others, YMMV.

Out of the box it feels like the lora applies ray-tracing, grain, and occlusion effects, whatever exactly I mean by that.

Yet, check out the cabin lights and the cabin's top: these are not in the source image, but were nicely inserted there by the lora. Nice!

[Attachments: conversion qe_202510101536_00001_.png; source HS2_2023-08-30-06-21-25-913.png]
 
Dec 31, 2017
Where can I find loras that are banned from Civitai? I need a bestiality lora for RPG games (mainly wild animals like wolves, boars, and bears... like Skyrim).
 

JhonLui

Well-Known Member
Jan 13, 2020
Where can I find loras that are banned from Civitai? I need a bestiality lora for RPG games (mainly wild animals like wolves, boars, and bears... like Skyrim).
try these:





but keep searching and if you come across a repository, post it
 

JhonLui

Well-Known Member
Jan 13, 2020
It kinda hit me that we are crossing the point of no return.
Yes, stop there..

Also, I tried SD.Next and have to say it's impressive; it can handle pretty much everything... yet it's still a bit buggy.
For example, I couldn't try Wan video, since it identifies the CPU as a CUDA device too and the two conflict... I guess I'll have to read the instructions somewhere...
 
Dec 31, 2017
try these:





but keep searching and if you come across a repository, post it
No luck, brother. All of the ones I found on aibooru don't have a model file, and the Civitai ones lost their mirrors for some reason, so they're not downloadable.
 

JhonLui

Well-Known Member
Jan 13, 2020
No luck, brother. All of the ones I found on aibooru don't have a model file, and the Civitai ones lost their mirrors for some reason, so they're not downloadable.


This is the only one I have of the kind... I didn't foresee the (one-sided) purge coming at the time.
It's for Illustrious.

Another workaround could be to use an old version of the checkpoint you're using; it might still have the concepts baked in.
(I found this out by chance with an old SDXL model that still has celebrities in it, so maybe the same goes for what you're looking for.)

Hope it helps
 

Sepheyer

Well-Known Member
Dec 21, 2020
Would anyone have any idea how this guy made this?

The details are next level: botox lips and forehead, an uneven nipple, hair on her right (image-wise) triceps.

Is this actually a real girl? The author says these are "ai tools".

Thoughts?



[Attachment: lara_croft__the_edge_of_discovery_by_vilkin_dkppdnr-fullview.jpg]
 

osanaiko

Engaged Member
Modder
Jul 4, 2017
I think I found the original:

Nah, unless there are more in the set of that particular slag, there is nothing with a similar pose.

However, the pic posted by Sepheyer could definitely be a SFW cosplayer pic that has been "ai enhanced" to get the titties out.
 

Midzay

Member
Game Developer
Oct 20, 2021
Hey guys, can you tell me what workflow/node I should be looking for? I was making 6000x6000 pixel renders. I wanted to use the img2img workflow for these renders to add texture, realism, and detail. How can I split this image so Comfyui can process it? The image shows the overall view and a detail.
[Attachments: Bath_elixir-1.jpg, Bath_elixir-2.jpg]
 

Sepheyer

Well-Known Member
Dec 21, 2020
Hey guys, can you tell me what workflow/node I should be looking for? I was making 6000x6000 pixel renders. I wanted to use the img2img workflow for these renders to add texture, realism, and detail. How can I split this image so Comfyui can process it? The image shows the overall view and a detail.
Unless someone here has the exact answer to this relatively rare ask, your best bet is to go to Civitai and search for the upscaler workflow that best fits, e.g.:

 

Sharinel

Active Member
Dec 23, 2018
Hey guys, can you tell me what workflow/node I should be looking for? I was making 6000x6000 pixel renders. I wanted to use the img2img workflow for these renders to add texture, realism, and detail. How can I split this image so Comfyui can process it? The image shows the overall view and a detail.
Ultimate SD Upscale would probably be helpful; it splits the overall image into tiles and then upscales each one. I use it to double the size of my images and add detail to them via the denoise value, but you could probably set the upscale to 1x and have it just fill in detail instead.
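The tiling idea can be sketched in plain Python. This is a hypothetical illustration of the geometry only (the 1024px tile size and 128px overlap are assumed values, not the node's actual defaults, and this is not Ultimate SD Upscale's real code): each box would be run through img2img on its own, and the overlapping borders blended back together to hide seams.

```python
def tile_boxes(width, height, tile=1024, overlap=128):
    """Compute (left, top, right, bottom) boxes covering an image
    with overlapping tiles, the way tiled upscalers slice a canvas."""
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            right = min(left + tile, width)
            bottom = min(top + tile, height)
            boxes.append((left, top, right, bottom))
    return boxes

# A 6000x6000 render becomes a grid of overlapping 1024px tiles:
boxes = tile_boxes(6000, 6000)
print(len(boxes))           # number of tiles to diffuse
print(boxes[0], boxes[-1])  # first and last tile
```

With these assumed settings, a 6000x6000 render becomes a 7x7 grid of 49 tiles, each small enough for SDXL-scale img2img.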
 

art4vandelay

Member
Feb 2, 2025
Would anyone have any idea how this guy made this?

The details are next level: botox lips and forehead, an uneven nipple, hair on her right (image-wise) triceps.

Is this actually a real girl? The author says these are "ai tools".

Thoughts?

He probably created it with Krita (Krita AI Diffusion). It's like Photoshop and you can do a lot of things with it.
 

JhonLui

Well-Known Member
Jan 13, 2020
Heh! I'd heard of it, and now I've gone to check it out.

Looks like a fukton of fun:

and this video in particular:
Wow... a Photoshop-style interface for inpainting is just what I need to finally trash GIMP!
 

theMickey_

Engaged Member
Mar 19, 2020
I've been using ComfyUI for quite a while. I've taught myself how to create (simple) workflows to generate images, upscale them, use ControlNet to reproduce existing poses or even animations with pre-defined images, do face replacements, and all that stuff, all while using SD(XL) models. It's pretty amazing what you can do, I love it! I even replaced my NVIDIA RTX 2080 Ti with an RTX 4090 to get around the 2080 Ti's limited VRAM. But that's when I stopped teaching myself new things. I don't know anything about Pony, Illustrious, Qwen or anything like that; I'm just seeing your posts and I'm... wow!

And now there's Image-2-Video and Text-2-Video using Wan 2.1/2.2, which I'm very interested in, but I'm totally lost! So here I am asking you guys if you can help me out.

First, I started with the official Wan templates like "Wan 2.2 14B Text to Video" or "Wan 2.2 14B Image to Video", which can be found in the "Templates" section of the ComfyUI menu. I've downloaded the models for them, and while those workflows do work, they seem slow and "limited" when it comes to the length and resolution of the video. And because you'll usually want to create a couple of videos with the same prompt to pick the best result, it might take many hours, maybe even a couple of days, to get the video you're looking for. And it's only like 5 seconds long...

Next I was looking for some "optimized" workflows people share online, and first I found a set of workflows on Civitai, which I've been trying out. I do like the included "WAN 2.2 I2V" workflow, because it seems faster and has more options, but I still feel limited when it comes to the resolution and length of the video, because it uses ".safetensors" models, which use a lot of VRAM. I can get a 5-second video with a decent resolution, or a longer video with a poor resolution.
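For a rough sense of why full-precision .safetensors weights hurt, here is a back-of-the-envelope estimate. It is illustrative only: it counts weights alone (activations, the text encoder, and the VAE all add more on top), and the 4.5 bits/param figure for a Q4-style GGUF is an approximation, not an exact spec:

```python
def weight_gib(params_billion, bits_per_param):
    """Approximate in-VRAM size of model weights in GiB."""
    return params_billion * 1e9 * bits_per_param / 8 / 2**30

print(round(weight_gib(14, 16), 1))   # 14B params in fp16: ~26 GiB
print(round(weight_gib(14, 4.5), 1))  # same model, Q4-ish GGUF: ~7 GiB
```

So on a 24 GB 4090, fp16 weights for a 14B model don't even fit by themselves, which is exactly why the GGUF route frees room for more frames or higher resolution.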

Then I thought I might go for GGUF models instead, because from what I understand they use less VRAM, but they are "compressed" and therefore might take longer. I don't mind waiting a couple of minutes for results if I can use more frames (= longer videos) or a higher resolution than with the "default" workflow. So I found , which is very impressive, uses GGUF, has a bunch of options, and after downloading all the missing nodes and models (as well as fixing a "bug" in the workflow itself) it's producing decent results within a couple of minutes. I've been able to create a few videos of 20+ seconds (at 24 FPS) with a resolution of 480x800, but as soon as I add action prompts for the camera or the subject in the picture (btw: no additional LoRAs are involved), the video gets blurry (it looks like a double or even multiple exposure, in photography terms) or it just doesn't follow the prompt (i.e. if the prompt says "the camera slowly zooms in toward the woman's face", it zooms in for about 3 seconds, then zooms back out and repeats those steps until the end of the clip, even if I add something like "at second 5, the camera stops completely and remains entirely static for the rest of the video. there is no zooming, panning, or movement after this point — the frame stays locked on her face.")
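Part of the problem is plain frame arithmetic: generation time and memory scale with frame count, and Wan models are generally tuned for short clips (roughly 5 seconds at 16 fps; that training-length figure is an approximation, not an official number). The duration and fps values below come from the post above:

```python
def frame_count(seconds, fps):
    """Frames a video model must generate for a clip."""
    return seconds * fps

typical = frame_count(5, 16)   # roughly a Wan-length short clip
target = frame_count(20, 24)   # the 20 s / 24 fps goal from the post
print(typical, target, target / typical)  # the target needs 6x the frames
```

Asking for several times the frames the model is comfortable with is also a plausible reason the motion starts looping or ghosting partway through the clip.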

So here are my questions:
  • What's your overall workflow to create a 10-20+ second high-resolution video based on your imagination/prompt?
    • The resulting video should be produced in a couple of minutes (5-15 minutes at most, not hours).
      • What's your Text-2-Image workflow you use to create your starting image?
      • What's your Image-2-Video workflow to produce a 10-20+ second video with a decent (720p) resolution?
      • What's your workflow to upscale the video to a HD resolution (1280p or even 1440p)?
  • What prompt (or LoRA) do you use to consistently "control" the camera movements (zoom in, zoom out, keep being static at a close-up etc.)
Any help is highly appreciated. I would love to end up with like 3-4 workflows in total (1: create a starting/ending image for the video / 2: create an at least 10-20+ second video with "precise" camera movement / 3: upscale the video to at least 1280p).

TL;DR: if you share your workflows to create a 20+ second video with precise camera (and subject) actions, or can point me in the right direction for further research, I will be in your debt forever :)
 

JhonLui

Well-Known Member
Jan 13, 2020
Interesting..
Unfortunately your specs are out of my league (provided you also upgraded to an M.2 drive, RAM, and an adequate CPU), yet I've seen very interesting workflows for Wan on Kemono; you may want to look for them.
There are also "accelerators" on Civitai labeled as loras or checkpoints that include (a link to) the workflow, but you should widen your search, as Wan is new and therefore in constant development (get used to surfing GitHub).

Also I'm sure the other guys will be able to help you.
Cheers