- Apr 21, 2018
I actually tried to generate different faces using Stable Diffusion; at this stage it's just a proof of concept, if anything, but my rough workflow produces somewhat consistent results. It's not great, though, and I believe some more adjustment to the prompts will be needed to improve it (not to mention make the faces look better).

LR2 wasn't designed using the typical practices that most other VNs follow. I stated a few pages back that it was designed as a game engine capable of adding/expanding on individual stories in a populated world fairly easily. Since it's time-consuming to hand-craft 90-160 characters to make the world feel more "alive", Vren built several functions to randomly generate characters in a way that looks realistic enough, but they definitely have flaws.
At this stage, Tristim and Starbuck are so focused on bug-fixing and converting unique character stories over to a template (love, lust, and obedience storylines) that updating the character generation is low on the priority list. If you join their Discord server, you'll see Starbuck has started posting roadmaps of what she plans to accomplish each month.
The idea so far is to use unren to extract image.rpa, then unzip all the character image zip files. I'm starting with characters' faces: go into each pose and crop the face from each face pose image (since the body is separate, the face image contains nothing but blank space around the face itself).
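The crop step above boils down to finding the bounding box of the non-transparent pixels in the face layer. A minimal sketch of that logic in plain Python, assuming the face images are RGBA with fully transparent padding (with Pillow you'd just call `Image.getbbox()` instead):

```python
def alpha_bbox(pixels):
    """Find the (left, top, right, bottom) bounding box of all pixels
    with a non-zero alpha channel. `pixels` is a list of rows, each
    row a list of (r, g, b, a) tuples."""
    left = top = None
    right = bottom = -1
    for y, row in enumerate(pixels):
        for x, (_, _, _, a) in enumerate(row):
            if a:  # any visible pixel extends the box
                if left is None or x < left:
                    left = x
                if top is None:
                    top = y
                if x >= right:
                    right = x + 1
                bottom = y + 1
    if left is None:
        return None  # fully transparent image, nothing to crop
    return (left, top, right, bottom)
```

Keeping the box coordinates around is also useful later, when pasting the generated face back at the same position.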
Next, run the cropped face through OpenPose to extract a face pose map, and through a depth preprocessor to generate a depth map (for control of the face shape).
Then use these two maps in Automatic1111 with ControlNet: openpose in the first slot and depth in the second (the logic being that the facial expression is more important).
Then feed a description into the prompt; for example, for the angry face:

Daz3D rendering of woman, (bald hair,no hair:1.3) mouth open, frowning, angry, blue screen, detailed face, frontal lighting <lora:caroline:0.65>

Clara MA is a LoRA trained on images of Caroline from the game Milfy City; I'll try mixing it with other LoRAs later.
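The Automatic1111 setup above (two ControlNet units plus the prompt and batch settings) can also be driven through the web UI's API instead of clicking through it each time. A sketch of the request body, assuming the sd-webui-controlnet extension's `alwayson_scripts` payload format; the module and model names here are placeholders for whatever is installed locally:

```python
import base64


def build_txt2img_payload(prompt, openpose_png, depth_png,
                          openpose_model="control_v11p_sd15_openpose",
                          depth_model="control_v11f1p_sd15_depth"):
    """Assemble a request body for /sdapi/v1/txt2img with two
    ControlNet units: the OpenPose face map in slot 0 and the depth
    map in slot 1. Image arguments are raw PNG bytes."""
    def unit(image_bytes, module, model):
        return {
            "input_image": base64.b64encode(image_bytes).decode("ascii"),
            "module": module,
            "model": model,
            "weight": 1.0,
        }
    return {
        "prompt": prompt,
        "batch_size": 2,
        "n_iter": 2,  # 2 batches of size 2 -> 4 candidate faces
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    unit(openpose_png, "openpose_faceonly", openpose_model),
                    unit(depth_png, "depth_midas", depth_model),
                ],
            },
        },
    }
```

POSTing this dict as JSON to a running A1111 instance with `--api` enabled would return the generated images base64-encoded; scripting it would make regenerating all the emotion variants for a pose much less tedious.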
I run 2 batches of size 2 to generate 4 images, then use GIMP to extract the face from the results, matching the position of the original, and save it as a new face. I will probably play around with color mapping at some point to see if that can fix the color inconsistencies.
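One crude first pass at the color inconsistencies is to shift each RGB channel of the generated face so its mean matches the original crop. A minimal pure-Python sketch (a proper color transfer would also match variance, or work in a perceptual color space):

```python
def match_channel_means(face_pixels, ref_pixels):
    """Shift each RGB channel of the generated face so its mean equals
    the reference crop's mean. Both arguments are flat lists of
    (r, g, b) tuples; returns a new corrected pixel list."""
    shifts = []
    for c in range(3):
        face_mean = sum(p[c] for p in face_pixels) / len(face_pixels)
        ref_mean = sum(p[c] for p in ref_pixels) / len(ref_pixels)
        shifts.append(ref_mean - face_mean)

    def clamp(v):
        return max(0, min(255, int(round(v))))

    return [tuple(clamp(p[c] + shifts[c]) for c in range(3))
            for p in face_pixels]
```

Running this on the SD output before pasting it back in GIMP should at least pull the overall skin tone toward the original render.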
Original
Test result (I will see if some image-to-image inpainting can fix the odd patches of hair later, and I'll have to do something about the depth map, which isn't optimal -- maybe using the depth map plugin instead of the OpenPose depth map, or trying a different preprocessor). The SD model is also too biased towards realistic images, so I'll have to find another one.
If this is successful, I'll see if I can write a custom class that loads extra faces and body types; I know there's a script for loading extra images outside the RPA file, so I'll look into that if the SD method turns out to be promising.