Freedom9555

Member
Nov 7, 2022
387
295
178
Depends on what you’re trying to do. For example, Sora lets you “bring images to life.” I think they even offer a couple of free generations. Recently, WAN 2.5 came out — it can even handle some NSFW stuff if that’s what you’re into :)
If you mean image generation, there are tons of sites that give you a few free runs. I’d recommend Civitai — it’s a model hub where you can also generate stuff directly.
If you mean image editing, that’s called inpainting, and almost every model that can generate images can also edit them. One exception is the Flux family — as far as I know, it doesn’t have built-in editing because of how its generation pipeline works, though there are some workarounds. I haven’t looked into it too much — I don’t like LLM-based models; they’re too unpredictable.

If you want maximum freedom, install ComfyUI or another local interface - it's completely free. You’ll need at least a 6GB VRAM GPU for images and around 12GB for videos.
Though honestly, I’d say don’t even bother with video generation unless you’ve got at least a 4090 — anything less is just frustration. But that’s just my opinion :)

So you use Civitai to generate something and then use ComfyUI to make mature content out of it?
 

Agha

Newbie
May 25, 2018
47
22
18
There is only one thing the dev has to change, please: the text size. Sometimes I can't make out what dialogue is written. Please add a text size option so players can adjust it.
 

aDDont

Developer of Mila AI
Game Developer
Apr 20, 2020
652
4,418
376
I want to ask the game developer one question: when are we expecting the next update?
Probably not very soon - many people asked for more content stacked up between releases, so I'll probably stack up a couple more scenes first (I'd guess at least 5). For now I've finished 0.5/5 :D I wanted to train a LoRA for WAN but failed miserably - it turned out to be a lot harder than I expected. Then I tried VACE rendering in the hope of overcoming the missing-LoRA limitations, and failed again, broke my workflow because of some nodes or dependencies I installed, then spent lots of time repairing my workflow, managed to make it even better, realized I was mistaken, and finally came back to work in more or less the same state as before the whole LoRA training experiment. Lots of time and nerves wasted :(
 

Agha

Newbie
May 25, 2018
47
22
18
Probably not very soon - many people asked for more content stacked up between releases, so I'll probably stack up a couple more scenes first (I'd guess at least 5). For now I've finished 0.5/5 :D I wanted to train a LoRA for WAN but failed miserably - it turned out to be a lot harder than I expected. Then I tried VACE rendering in the hope of overcoming the missing-LoRA limitations, and failed again, broke my workflow because of some nodes or dependencies I installed, then spent lots of time repairing my workflow, managed to make it even better, realized I was mistaken, and finally came back to work in more or less the same state as before the whole LoRA training experiment. Lots of time and nerves wasted :(
Thank you for replying so fast, you are a really humble person.
 
  • Heart
Reactions: aDDont

TheresiaW

Member
Jan 1, 2020
219
449
134
I use Civitai to download LoRAs and checkpoints if I need them. I use ComfyUI and A1111 locally to generate pictures and videos)
Do you have a good step-by-step guide for beginners with no experience?

I downloaded ComfyUI, installed ComfyUI Manager, and downloaded a few models from Civitai that I think look good.
Nevertheless, when I try to create something, I either get error messages or rubbish.
 

aDDont

Developer of Mila AI
Game Developer
Apr 20, 2020
652
4,418
376
TheresiaW
Well, I handle the errors by asking ChatGPT what tf that means and what I can do to fix it :D It helps... Sometimes) It's hard to say what is wrong without context, and the fastest and easiest way to fix technical problems is to send the logs to ChatGPT. Most of the time it's either dependencies or version conflicts/missing nodes.

With Comfy you can drag and drop generated images into your ComfyUI window and it converts them into a workflow graph, which should reproduce the exact same image if you hit the button, but there are some weak points. For example, my images won't work that way, because I don't use Comfy for pictures (I use A1111, the oldest UI), and also because I generate them in iterations, fixing stuff I don't like in Photoshop. But you can find lots of pictures on Civitai that do have the metadata. Also, most model previews and model info pages describe the prompt and params.
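The drag-and-drop trick works because ComfyUI embeds the workflow graph as JSON text chunks inside the PNG it saves. A minimal sketch of reading that yourself with Pillow (assuming an unmodified ComfyUI PNG; images re-saved by Photoshop or stripped by image hosts usually lose these chunks, which is why some pictures won't load as workflows):

```python
import json
from PIL import Image

def read_comfy_workflow(path: str):
    """Return the workflow graph embedded in a ComfyUI-generated PNG,
    or None if the image carries no workflow metadata."""
    with Image.open(path) as img:
        raw = img.info.get("workflow")  # ComfyUI stores it as a JSON text chunk
    return json.loads(raw) if raw else None
```

If this returns None for an image, dragging it into ComfyUI won't reconstruct the graph either, and you'll have to fall back on the prompt/params listed on the model page.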

I would say just copy someone else's prompt and then fix the stuff you don't like. If it's not working, start from the easiest part - there are templates in ComfyUI (on the left). Find the txt2img template for SDXL there and try running it as is. If that gives you rubbish too, look at the log and send it to ChatGPT with a description of what you've done and what the problem is)
 

Symbal

New Member
May 14, 2025
2
3
3
Do you have a good step-by-step guide for beginners with no experience?

I downloaded ComfyUI, installed ComfyUI Manager, and downloaded a few models from Civitai that I think look good.
Nevertheless, when I try to create something, I either get error messages or rubbish.
Check the checkpoints you downloaded; 9 times out of 10, if you're new, you're missing either a CLIP last-layer restriction (clip skip) or an activation prompt (if it's a LoRA).
I've noticed that most models on Civitai use -1 or -2; if you miss this (or another small parameter tweak), you'll end up with just latent noise thrown back at you.
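For reference, the -1/-2 values above are ComfyUI's CLIP Set Last Layer convention (-1 = use the last layer, -2 = the penultimate one), which maps onto A1111's 1-based "Clip skip" setting; and a LoRA's trigger words (the "trained words" listed on its Civitai page) need to appear somewhere in your prompt. A small sketch of both sanity checks - the function names are mine, not part of any tool:

```python
def comfy_to_a1111_clip_skip(stop_at_layer: int) -> int:
    """Map ComfyUI's CLIP Set Last Layer value (-1, -2, ...) to the
    equivalent A1111 'Clip skip' setting (1, 2, ...):
    -1 -> 1 (last layer), -2 -> 2 (penultimate layer)."""
    if stop_at_layer >= 0:
        raise ValueError("ComfyUI expects a negative layer index")
    return -stop_at_layer

def missing_triggers(prompt: str, trained_words: list[str]) -> list[str]:
    """Return a LoRA's trigger words that are absent from the prompt.
    Omitting them often leaves the LoRA inactive or produces rubbish."""
    low = prompt.lower()
    return [w for w in trained_words if w.lower() not in low]
```

So if a Civitai page says "Clip skip 2", the matching ComfyUI value is -2, and any words `missing_triggers` reports should be pasted into the positive prompt.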

Best suggestion is to view someone else's metadata on Civitai, copy the prompts and values over, and use the same linked checkpoints and LoRAs.

If all else fails, use the pre-baked templates, make sure those run as-is, and then start tweaking one value or parameter at a time to see what breaks.
E.g. swap only the model (checkpoint) to get a new art style, or change the values in the KSampler (cfg, steps, sampler, scheduler, etc.) and see how that impacts the output, and go from there!
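That "one knob at a time" approach can be sketched as a simple sweep generator; the baseline and candidate values below are illustrative placeholders, not recommended settings:

```python
def one_at_a_time(baseline: dict, variants: dict) -> list[dict]:
    """Build run configs that each change exactly one parameter from a
    known-good baseline, so any difference in output can be attributed
    to that single knob."""
    runs = []
    for key, values in variants.items():
        for value in values:
            if value != baseline[key]:
                runs.append({**baseline, key: value})
    return runs

# Illustrative KSampler-style settings.
baseline = {"cfg": 7.0, "steps": 25, "sampler": "euler", "scheduler": "normal"}
variants = {"cfg": [4.0, 10.0], "steps": [15, 40], "sampler": ["dpmpp_2m"]}
for run in one_at_a_time(baseline, variants):
    print(run)  # run each config through your workflow and compare outputs
```

Changing several values at once makes it impossible to tell which one broke (or improved) the image, which is why the single-change discipline pays off.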

Actually learning what all the pieces do helps when you want to fine-tune and tweak things, or deal with errors, so it pays to poke around.
 
  • Like
Reactions: TheresiaW