Your noob to expert guide in making great AI art.

5.00 star(s) 5 Votes

1nsomniac22

Newbie
Game Developer
Jul 16, 2025
37
49
27
Thank you for the guide, very comprehensive.
I will be trying out Invoke soon - the feature set seems quite complete (been dealing exclusively with Automatic1111, which is good 'nuff to get started).
I'll add one comment about Seed, Scheduler and Reproducibility:
Seed: this is the starting point. Think of it as a randomized collection of differently colored pixels; each seed is different, but if you use the same seed you start from the same (latent) image each time.
Scheduler: the component that resolves your prompt and the seed into a recognizable image. Some schedulers have an "-A" in the name; these are ancestral schedulers that add noise to the (latent) image at each sampling step, so there will be variation even given the same prompt and seed (assuming enough noise is added during sampling).
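To make the seed/scheduler distinction concrete, here's a toy sketch in plain Python (not real diffusion code; the function names are purely illustrative): the seed fully determines the starting latent noise, while an ancestral-style step injects fresh noise on every iteration, so only the starting point is guaranteed reproducible.

```python
import random

def starting_latents(seed, size=8):
    # The seed fully determines the "latent image" you start from:
    # same seed -> identical starting noise every run.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

def ancestral_step(latents, step_seed):
    # Ancestral (-A) samplers inject *fresh* noise at each step,
    # so outputs can vary even with the same prompt and seed.
    rng = random.Random(step_seed)
    return [0.9 * x + rng.gauss(0.0, 0.1) for x in latents]

assert starting_latents(42) == starting_latents(42)  # same seed, same start
assert starting_latents(42) != starting_latents(43)  # new seed, new start
```

Real samplers work on tensors and noise schedules, of course, but the reproducibility behavior is the same shape.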
References available at
 

KordNTR

Member
Aug 7, 2017
156
447
217
Thanks for the guide; it finally made me take the plunge and use Invoke instead of ForgeUI. I'm finding it much more flexible, especially with a graphic design background. Now they just need to make it a full suite like Adobe's.
 

KordNTR

Member
Aug 7, 2017
156
447
217
Solid Part 7; being able to composite easily is extremely powerful. It feels a lot less like pulling a slot machine and more like being an art director. Using some sort of ControlNet for backgrounds is sadly necessary with a lot of the lewder models, as they don't focus on backgrounds in their datasets. Though I just end up using a different model for the character, like you did.

Would love to see a positional guidance/pose ControlNet tutorial. Workflow-wise, I think there are a lot of possibilities in combining AI with something like Koikatsu to get the poses and layouts you want.
 
  • Like
Reactions: NoTraceOfLuck

1nsomniac22

Newbie
Game Developer
Jul 16, 2025
37
49
27
Congrats! Part 7 was excellent!
I have long since given up trying to get detailed and interesting backgrounds using NSFW models and have simply resorted to compositing the images post-rendering, which has its own issues.
 

NoTraceOfLuck

Member
Game Developer
Apr 20, 2018
479
794
163
Congrats! Part 7 was excellent!
I have long since given up trying to get detailed and interesting backgrounds using NSFW models and have simply resorted to compositing the images post-rendering, which has its own issues.
Agreed, I hope some day we get a model that can do both. Unfortunately I think the current limitations are due to the base SDXL model. I think we'll need something new and fresh to get past this.
 

not_a_user_69

Newbie
Aug 7, 2021
90
74
114
OK so I've been toying around a bit.

The BBox tool is indeed very helpful for details, hands, eyes, etc.

Another amazing tool I discovered is the noise amount, here:

View attachment 4978899

Then in your inpaint mask:

View attachment 4978901

This setting lets you say "don't completely reset what's inside the mask by replacing it with 100% noise; instead, add only 37% noise".

So you can adjust an image 5% of noise at a time, or 50%, etc., depending on how much you expect the image to change. This is excellent for clothes, hair, iterating on positions, etc.

Notably, it helps a lot with the monochrome patches of color from hand-painted raster layers. Just iterate with 50% noise at a time; that's how I did the legs' positions below.
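A minimal sketch of what that slider does conceptually (toy Python, not Invoke's actual implementation): at 0% the masked pixels are untouched, at 100% the original is fully discarded, and anything in between blends in that fraction of fresh noise before the model re-denoises the region.

```python
import random

def add_partial_noise(pixels, amount, seed=0):
    # amount=1.0: fully re-noise the masked region (classic inpaint reset).
    # amount=0.37: keep 63% of the original and blend in 37% fresh noise,
    # so the model only nudges the region instead of reinventing it.
    rng = random.Random(seed)
    return [(1.0 - amount) * p + amount * rng.gauss(0.5, 0.25) for p in pixels]

original = [0.2, 0.5, 0.8]
assert add_partial_noise(original, 0.0) == original  # 0% noise: unchanged
```

At amount=1.0 the output depends only on the noise seed, which is why a full-strength inpaint can contradict what was under the mask.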

Here are some results:

View attachment 4979191

There are problems, things I could clean. I don't really care at the moment about going too far, it's just to learn the main concepts.
Can I please know how you achieved this?

Did you use any existing LoRA? If so, can you please share it?
Or did you create your own LoRA?

What model did you use?
How many images and how much time did it take to achieve this?
Any small references for the last images, like how you were able to achieve them?

I know these are dumb questions, but I want to know which method you used so I can follow it to start creating AI images. Once I have, I will try to share the steps I found convenient, so please help me by answering my few questions.
 

KordNTR

Member
Aug 7, 2017
156
447
217
A proof of concept using these Invoke techniques + Photoshop touch-ups + Illustrator layout.

C001_000.png
 

KordNTR

Member
Aug 7, 2017
156
447
217
Can I please know how you achieved this?
Basically I used Illustrious XL with the Nova Anime LoRA in Invoke; it was a lot more time-consuming than expected. For the first image I had to generate the background separately, then each character separately, and used Invoke with low noise to unify the image. The second main image was a one-off, easy to do. The third image was layering various images generated by Invoke. Note: I have a background in graphic design, so a lot of this I was only able to do because I have the training for layout and compositing. The logo was an AI one that I manually vectorized.

The final step was moving it all to Illustrator and making the speech boxes and SFX text.
 
  • Like
Reactions: not_a_user_69

not_a_user_69

Newbie
Aug 7, 2021
90
74
114
Basically I used Illustrious XL with the Nova Anime LoRA in Invoke; it was a lot more time-consuming than expected. For the first image I had to generate the background separately, then each character separately, and used Invoke with low noise to unify the image. The second main image was a one-off, easy to do. The third image was layering various images generated by Invoke. Note: I have a background in graphic design, so a lot of this I was only able to do because I have the training for layout and compositing. The logo was an AI one that I manually vectorized.

The final step was moving it all to Illustrator and making the speech boxes and SFX text.
Sorry for the delay; I kind of missed it.
Thanks for your reply.
Did you train a LoRA for this, or just use existing ones?
Will you train a LoRA to keep character designs consistent for future image generation?
 
  • Like
Reactions: KordNTR

KordNTR

Member
Aug 7, 2017
156
447
217
Sorry for the delay; I kind of missed it.
Thanks for your reply.
Did you train a LoRA for this, or just use existing ones?
Will you train a LoRA to keep character designs consistent for future image generation?
Probably a good idea, but I didn't; I just kept my tags consistent for characters, and that almost always keeps things looking the same. The real issue is clothing/armour consistency, which is very hard to achieve with anything complex and would need a LoRA.
 
  • Like
Reactions: not_a_user_69

HoudiniProd1

Newbie
Jun 6, 2024
34
29
95
When I try to download the most recent Wan model ((wan 2.2 experimental) WAN General NSFW model) on CivitAI, I receive an error: "Model Install error Unknown Lora type". Does anyone know how I can fix that?

I downloaded another, but when I tried to generate an image I received this error: "RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile wi..."
 
Last edited:

osanaiko

Engaged Member
Modder
Jul 4, 2017
3,354
6,441
707
When I try to download the most recent Wan model ((wan 2.2 experimental) WAN General NSFW model) on CivitAI, I receive an error: "Model Install error Unknown Lora type". Does anyone know how I can fix that?

I downloaded another, but when I tried to generate an image I received this error: "RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile wi..."
Debugging anything with these tools is a complex problem. I've got many, many years of technical experience, yet the complexity of the software stack and tools seems almost insurmountable. The rapid evolution of the tools, and the fact that it's almost all amateurs doing it on the side, means there's fuck all chance of finding answers based on error messages alone.

Therefore my personal choice (and recommendation to you) is to outsource: grab a pre-built package such as Stability Matrix, the ComfyUI installer, or similar. Let someone else work out the demonic summoning rituals needed to get a working Python virtualenv + CUDA libraries + PyTorch and its massive dependency tree... After that, find a recent YouTube tutorial and follow it exactly to get your bleeding-edge video models + workflow installed.
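For what it's worth, the main thing those pre-built packages automate is matching the PyTorch wheel to your CUDA version; a mismatch there is a common cause of the "no kernel image is available" error above. A rough sketch of the manual route (the cu121 index URL is just an example, pick the one matching your driver):

```shell
# Create an isolated environment so the dependency tree can't clash
python -m venv venv
source venv/bin/activate

# Install a PyTorch build compiled for your CUDA version.
# "no kernel image is available" usually means the installed wheel
# has no kernels for your GPU's compute capability.
pip install torch --index-url https://download.pytorch.org/whl/cu121

# Sanity check: which GPU architectures this build supports
python -c "import torch; print(torch.cuda.get_arch_list())"
```

If your GPU's architecture isn't in that list, reinstall with a different index URL rather than fighting the runtime error.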
 
Last edited:
  • Like
Reactions: KordNTR

John Doe Jr.

Well-Known Member
Jun 11, 2017
1,465
2,866
479
/me errors out when trying to understand WTF you are talking about, and why you'd bother posting with no details which ensures no-one could help you.
Because it didn't give any indication. It would download and look like it was fine, then just wouldn't show up. But I've got it figured out now.
 

BlueLover69-38

New Member
Dec 29, 2022
2
0
11
Hi,
Great tutorial, exactly what I was looking for. Thank you very much!
I do have a problem, if you can help me:
I can't use inpainting.
Screenshot 2025-09-01 150618.png
I don't know if I missed a step or did something wrong, but I can't figure it out.
Thanks in advance.
 

NoTraceOfLuck

Member
Game Developer
Apr 20, 2018
479
794
163
Hi,
Great tutorial, exactly what I was looking for. Thank you very much!
I do have a problem, if you can help me:
I can't use inpainting.
View attachment 5208052
I don't know if I missed a step or did something wrong, but I can't figure it out.
Thanks in advance.
Do you have in progress generations at the bottom of your screen? That's the only case I know of that causes this to be disabled: 1756738033157.png
 