Ren'Py Sucker Punch [v0.1.1] [Insomnia Syndicate]


absent0
1nsomniac22 said:
Fair critique, thank you. I'll take a look at fixing those (and other) images before the next release. I'm discovering that InvokeAI is a much more powerful tool than the Automatic1111 UI I started rendering with, but I'm still going through a learning process... and I'm afraid some of the base models I selected are kind of limited.

And, per deneegnetoo, yes, AI-generated images can suck. They can be just as bad as... well, pretty much every other computer-aided imaging technique, just in different ways. If I had the talent and the time, I'd hand-draw everything... but I most certainly don't have either.

I will keep working to clean up and improve the art. Hopefully it's not too much of a barrier to enjoying the story.

I've been using InvokeAI for a long time, almost since the beginning, and I recommend two things:

1- Forget about SD1.5 (from the images, I would say you are using an SD1.5 model; correct me if I'm wrong) and use a better SDXL model (or, if your PC allows it, a reduced Flux variant or something similar); there's a quick sketch of the switch below these tips. I recommend models such as ponydiffusion or ilusiondiffusion, and there are also some good ones for realism. But seriously, leave SD1.5 aside. It may be faster, but the results leave a lot to be desired (depending on the model, of course), and fixing the details is a real headache.

2- Learn how to use the InvokeAI canvas. It will make your life much easier and has many interesting features (such as making a simple drawing and then rendering it, layer selection, etc.). Seriously, it's not easy to master, but once you do, it's the best there is.

As an optional tip, you also have tools like Krita (open-source “Photoshop”) where you can add a ComfyUI plugin to generate images directly in the program (although it doesn't handle layers very well... it's more for generating and making changes all in the same layer).
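To make the first tip concrete, here's a minimal sketch of what generating with an SDXL checkpoint looks like in plain Python with the diffusers library (the model name, prompt, sizes, and output path are placeholders, not anyone's actual setup):

```python
# Minimal sketch, assuming the diffusers library and a generic SDXL checkpoint.
# Nothing here is the actual workflow from this thread; everything is a placeholder.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # swap in your preferred SDXL checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="quiet city street at dusk, detailed background art",
    negative_prompt="blurry, low quality",
    width=1024,   # SDXL is trained around 1024x1024; SD1.5 was trained around 512x512
    height=1024,
    num_inference_steps=30,
).images[0]
image.save("background.png")
```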

Anyway, good luck with the tools and the game.
 

1nsomniac22 (Game Developer)
Thanks for the suggestions! I'm definitely learning as I go, and recently I've started leveraging some of the ControlNet tools in the canvas; total game changer. I'm hoping it shows in the next release with better artwork. Also, I've just recently upgraded my hardware... I still have some technical problems to resolve, but I've now got 16 GB of VRAM instead of the 8 GB I had when rendering out ch1. I've been thinking of trying out Flux... but I need to see how that affects the LoRAs I'm committed to using.

As for the base model, I've been using SymPonyWorld for the characters (it's what the LoRAs were trained against), which, being a Pony derivative, understands most of the sex tokens... and is shit for anything non-sex.
For the backgrounds, I was using base SD 1.5, which, as you correctly identify, is not amazing. Recently I've been using an architecturally trained model (probably also SD 1.5-derived) for the BGs that seems much better. I'm working on replacing many (most) of the BGs that shipped in ch1 with updated images.
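For anyone curious, the character setup boils down to "Pony-derivative checkpoint plus a LoRA on top", which in diffusers terms looks roughly like this hypothetical sketch (the file names are placeholders, not the actual checkpoint or LoRAs):

```python
# Hypothetical sketch with diffusers: a Pony-derivative SDXL checkpoint plus a character LoRA.
# File names below are placeholders, not the real SymPonyWorld checkpoint or these LoRAs.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "symponyworld_placeholder.safetensors",   # local SDXL/Pony-derivative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("character_lora_placeholder.safetensors")  # LoRA trained against the same base
pipe.fuse_lora(lora_scale=0.8)   # bake the LoRA in at ~0.8 strength

image = pipe(
    prompt="score_9, score_8_up, 1girl, casual outfit, cafe interior",  # Pony-style tag prompt
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
).images[0]
image.save("character_test.png")
```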

Again, thanks for the suggestions and comments.
 

absent0
I'm glad my advice is useful to you. I enjoy helping people who are just starting out with generative AI imaging and are excited about it. If you ever need any advice or have questions about InvokeAI, let me know via DM.

In any case, I highly recommend checking out the InvokeAI YouTube channel, where they often show canvas techniques. You'll learn a lot there. It's all in English, but if you're not a native speaker the subtitles help a bit (they certainly helped me, haha).

Regarding what you mentioned about ControlNet: I'm not saying it isn't a good tool, it is, but it's very good for helping you do things in SD1.5 and much less so with SDXL, mainly because the generation times increase dramatically. Let me give you a personal example.

My PC isn't anything special, a GTX 1080, and I can still manage quite well with it. However, an SDXL image at 1024 (I don't normally generate at that resolution; I usually do around 800) takes me about 2 minutes, but if I apply a ControlNet it can take 17 or 20 minutes. That happens with SDXL; it didn't happen to me with SD1.5.
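Roughly speaking, ControlNet means evaluating a second network alongside the SDXL UNet on every denoising step (and on an 8 GB card that can also push things out of VRAM), so the extra time isn't surprising. For reference, the setup in diffusers looks something like this (stock public model names; the edge-map path is invented):

```python
# Hedged sketch of ControlNet on top of SDXL with diffusers: the ControlNet runs
# alongside the SDXL UNet each step, which is where the extra cost comes from.
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny_map = load_image("pose_canny_placeholder.png")   # pre-computed Canny edge map (placeholder path)

image = pipe(
    prompt="same scene, clean render",
    image=canny_map,                        # the control image
    controlnet_conditioning_scale=0.7,      # how strongly the edges constrain the result
    num_inference_steps=30,
).images[0]
image.save("controlled.png")
```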

For this reason, I personally (due to the technical limitations of my hardware more than anything else) prefer to use other canvas techniques in InvokeAI, such as painting directly and then generating (there are two brush types: a paint brush and a selection brush for masks). I also use layers a lot, since with certain layers you can simply select something and then apply a prompt (positive and negative) to just that selection.
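Outside the canvas, that paint-a-mask-then-prompt idea is basically inpainting. A minimal diffusers sketch of the same concept (paths, prompts, and the strength value are invented for illustration):

```python
# Minimal inpainting sketch with diffusers (not the Invoke canvas itself).
# Paths, prompts, and strength are made up for illustration.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

scene = load_image("scene_placeholder.png")   # the render you want to touch up
mask = load_image("mask_placeholder.png")     # white where you painted the selection

fixed = pipe(
    prompt="detailed hand, five fingers",      # prompt applied only to the masked region
    negative_prompt="extra fingers, deformed",
    image=scene,
    mask_image=mask,
    strength=0.85,                             # how much the masked area is allowed to change
).images[0]
fixed.save("scene_fixed.png")
```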

Anyway, I'm rambling on too much. This tool goes well beyond the simple inpainting that A1111 has. It's very powerful, so I recommend watching videos to learn and experimenting a lot; it's mostly trial and error.

By the way, you mentioned in a post that you stopped using A1111. I recommend that you don't abandon it completely, as it has useful tools that InvokeAI doesn't have yet. For example, although you can use a smart selector in Invoke, it takes me a long time, so to remove backgrounds I use a tool that's in A1111 (well, I use ForgeUI).
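If you only need the background gone and don't mind a script, there's also the standalone rembg Python package (this isn't the A1111/Forge extension I mean above, just a quick alternative; the paths are placeholders):

```python
# Quick scripted background removal with the standalone rembg package (pip install rembg),
# as an alternative to the A1111/Forge extension mentioned above. Paths are placeholders.
from rembg import remove
from PIL import Image

character = Image.open("character_placeholder.png")
cutout = remove(character)                 # returns an RGBA image with the background made transparent
cutout.save("character_cutout.png")
```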

Since I'm talking about ForgeUI: it's a clone of A1111, but lighter, and it generates images faster than A1111, although the original hasn't been updated since 2024 (someone has taken over maintenance; search GitHub for sd-webui-forge-classic by Haoming02).

And why am I telling you not to delete A1111 (or Forge)? Because they complement each other, and sometimes a LoRA that doesn't work for you in Invoke (it's happened to me) will work in A1111.

I won't ramble on any longer. Enjoy experimenting, because these tools are exciting if you know how to use them (even though there are Luddites who hate them for no reason), and good luck with your project.

PS: English is not my native language; I translate everything with DeepL. I hope everything is understandable and there aren't too many errors in the translation from Spanish to English.

PS2: I almost forgot: when you use a LoRA or checkpoint, check the licenses carefully. Some checkpoints, such as Flux, do not allow commercial use. Depending on whether the game will be free in its final version or whether you want to take it to a platform such as Steam, this may or may not be something you need to worry about. I mention it so you don't have any surprises later on.
 