When you do figure some of that out... I wouldn't mind having links to what tools you use and where you get things... but keep in mind I "only" use offline generation for anything... substantial. So yeah, I would need the models and LoRAs and keywords and so on and so forth. If a group of you can get a consistent set of prompts that anyone can use, then it could open the doors for more images sooner with multiple creators.
Right, well I can't speak for everyone who's been working on the graphics (newbie myself), but I scanned through the posts & came up with the following info:
SWP: has stated he uses Stable Diffusion (& yes, this IS a standalone) with 'several models', but mainly Pony Diffusion and KIMXL. Using SD's 'PNG Info' tab I dragged in a SWP 'Cream clothed' image & could see he had used 'Lora: Pony Diffusion V6 - Smooth Anime' and a 'Cream_the_rabbit' LoRA.
Novaca: Stable Diffusion with nothing except NovaFurry checkpoint
MrFluffum: ForgeUI (basically Stable Diffusion with a different interface, AFAIK) with NovaFurryXL4 and an 'Age Slider' LoRA to create Cub pictures. He did post a lengthy post as a 'simple' tutorial for prompts etc.
Foxy: Stable Diffusion (the A1111 version, yet another UI) - currently experimenting with the NovaFurryXL and ponyDiffusionforAnime 'checkpoints', occasionally trying different LoRAs but keeping things 'simple' & not depending on them.
As mentioned above, apparently you can drop some graphics into SD to 'get the prompts', copy them into 'Txt2Img' & click Generate; however, unless you have exactly the same checkpoint/LoRA, you won't get the same picture. (Don't ask me for the exact difference between a checkpoint & a LoRA; roughly, the checkpoint is the full base model & a LoRA is a small add-on that nudges it towards a particular style or character.)
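For anyone who'd rather not drag images into the 'PNG Info' tab one at a time: the webui saves the generation settings inside the PNG itself, so you can read them out with a few lines of Python. This is just a sketch; the filename is made up, and it assumes the image was saved by A1111/Forge with the default option to embed parameters as PNG text chunks.

```python
# Minimal sketch: read the generation data A1111/Forge embed in their PNGs.
# Assumes Pillow is installed (pip install pillow); the filename is a placeholder.
from PIL import Image

def read_sd_parameters(path):
    """Return the prompt/seed/model text stored in the PNG, or None if absent."""
    img = Image.open(path)
    # The webui stores prompt, negative prompt, seed, model hash and LoRAs
    # in a single PNG text chunk named "parameters".
    return img.info.get("parameters")

if __name__ == "__main__":
    params = read_sd_parameters("cream_clothed.png")  # placeholder filename
    print(params or "No embedded generation data found.")
```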
Using the prompt from SWP's 'cream dress OK eyes up':
When using novafurryXL_IllustriousV9b:
And with ponyDiffusionForAnime:
Don't ask why there are extra characters in the pics... I'm not 100% sure, but I think it's to do with the resolution: generate much larger than the resolution the model was trained at & SD tends to duplicate characters.
As you can see, ponyDiffusion seems the closest to SWP's style.
As for me - as I'm still learning, I 'cheat':
I go to Civitai (BTW I'm currently using the VPN in Opera to do so atm, thanks for the advice),
look at the pictures to try & find one that has the 'pose' I want,
click the paintbrush icon (& hope it has 'Remix')
this then opens up the side panel & hopefully there's a 'prompt' there
then look through the prompt & try to figure out which part causes the 'pose' I want,
copy that into SD with a brief description of the characters I want, then click 'Generate' with a random seed until I see one I like (there's a rough sketch below of how that loop could even be scripted).
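On that note, if anyone wants to automate the 'keep hitting Generate until one looks right' part, A1111 exposes a local HTTP API when the webui is launched with the --api flag. The sketch below is only illustrative: the prompt text, sizes and output filenames are placeholders, and you'd swap in whatever checkpoint/prompt you're actually testing.

```python
# Rough sketch: request a batch of random-seed candidates from A1111's local API.
# Assumes the webui was started with --api and is listening on 127.0.0.1:7860.
# Prompt text and output filenames are placeholders.
import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "cream the rabbit, orange dress, full body, looking up",  # placeholder
    "negative_prompt": "blurry, extra limbs",
    "seed": -1,          # -1 = new random seed each image
    "steps": 25,
    "cfg_scale": 7,
    "width": 1024,
    "height": 1024,
    "batch_size": 4,     # four candidates per call to pick from
}

response = requests.post(URL, json=payload, timeout=300)
response.raise_for_status()
data = response.json()

# The seeds actually used are reported back in the "info" field, so you can
# note down the seed of any candidate you like and reuse it later.
for i, img_b64 in enumerate(data["images"]):
    with open(f"candidate_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
print(data["info"])
```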
I looked up how to make your own poses, so as not to rely on other people's prompts & hope for the best, & many tutorials talk about 'ControlNet' & 'OpenPose'. But for some reason, while I can get them installed & the 'wireframe' skeletons showing, it doesn't work: the pose isn't applied when generating.
I'm also currently looking into something called 'Regional Prompter', which lets you define regions & describe each region with its own prompt (by default SD applies the whole prompt to the whole image), & I may look at MrFluffum's 'guide' about inpainting to see how to combine multiple pictures into one.
So......
As a suggestion, to make things easier (simpler?!): we decide on a single checkpoint for the style,
Try NOT to use specific character LoRAs,
Create a set of full portrait images for each character (Naked/Clothed ?) using a simple prompt,
Generate several random ones until you get a picture that seems OK, then make a note of the 'Seed'
(we'll probably get better images of certain characters with different seeds, e.g. Cream could be better with 12345 while Blaze could be better with 76543)
THEN, using that seed & the same checkpoint, we 'should' get repeatability for each character (there's a rough sketch of that after this list)
Keeping it 'simple' could make it easier for multiple fans to generate consistent graphics, even if they're newbies to AI graphics.
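To make that 'note the seed, reuse the seed' idea concrete, here's a rough sketch of how a shared character sheet could be regenerated against the same A1111 API as above. The checkpoint name, prompts and seed values are all made-up placeholders, not an agreed list; the point is only that fixing checkpoint + prompt + seed + size (and steps/sampler) should give everyone the same picture.

```python
# Rough sketch: regenerate a shared set of character portraits from fixed seeds.
# Checkpoint name, prompts and seeds below are placeholders, not real choices.
import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

# Everyone uses the same checkpoint; A1111's API can switch to it per request
# via override_settings instead of changing it in the UI.
CHECKPOINT = "ponyDiffusionV6XL.safetensors"   # placeholder checkpoint name

# Hypothetical character -> (prompt, seed) table the group agrees on.
CHARACTERS = {
    "cream": ("cream the rabbit, orange dress, full body portrait", 12345),
    "blaze": ("blaze the cat, purple coat, full body portrait", 76543),
}

for name, (prompt, seed) in CHARACTERS.items():
    payload = {
        "prompt": prompt,
        "seed": seed,              # fixed seed -> same image every run
        "steps": 25,
        "cfg_scale": 7,
        "width": 1024,
        "height": 1024,
        "override_settings": {"sd_model_checkpoint": CHECKPOINT},
    }
    r = requests.post(URL, json=payload, timeout=300)
    r.raise_for_status()
    with open(f"{name}_portrait.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
```

Even without scripting anything, the same principle applies in the UI: same checkpoint, same prompt, same seed, same resolution & sampler 'should' reproduce the portrait on anyone's machine.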