How are you able to achieve this? Do you feed the picture into a program and it duplicates it with a prompt? I would like to try my hand at this, just out of curiosity.
No, I didn't use any images from KoD. I am using that Sabia LoRA, but it didn't fully capture the washed-out, airbrushy style Nomo does, so I had to start merging in other LoRAs and used a different set of checkpoints that were less anime and had softer, blurrier shading. (Although maybe people would say I didn't actually get the style right either.)
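For anyone curious what "merging LoRAs" actually does mechanically: merge tools essentially take a weighted sum of the corresponding weight tensors from each LoRA. A minimal sketch of that core idea (the `merge_loras` helper and the weight values here are mine for illustration, not from any specific tool):

```python
import numpy as np

def merge_loras(state_dicts, weights):
    """Combine several LoRA state dicts into one by taking a per-tensor
    weighted sum. Assumes all dicts share the same keys and shapes."""
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged
```

Real merge scripts also deal with alpha scaling, mismatched keys, and different network ranks; this only shows the linear combination at the heart of it.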
For the pose I downloaded a random image off Google of a girl doing a peace sign, then fed that into two ControlNets: OpenPose 2 (with finger bone rigs and facial expressions) and segmentation (which I didn't really need; I was doing that to add more characters, then... didn't).
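One practical detail when feeding a random downloaded image into ControlNets like this: Stable Diffusion's VAE downsamples by a factor of 8, so the conditioning image is normally resized so both sides are multiples of 8 before preprocessing. A small sketch of that sizing step (the helper name is hypothetical):

```python
def control_image_size(width: int, height: int, target: int = 512) -> tuple[int, int]:
    """Scale (width, height) so the short side lands near `target`,
    with both dimensions snapped to multiples of 8 for SD's VAE."""
    scale = target / min(width, height)

    def snap(v: int) -> int:
        return max(8, int(round(v * scale / 8)) * 8)

    return snap(width), snap(height)
```

Most SD front ends do this resize for you behind the scenes, which is part of why the workflow feels easier than it sounds.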
It's way easier than it sounds if you're new to SD, and even on my junky laptop it only took 15-30 seconds to generate each image.
--
That Sabia LoRA was actually tagged correctly to be a "style", although its dataset is 100% Sabia images:
"ss_tag_frequency": {
  "2_Sabia": {
    "sabia": 199,
---
"2_Sabia": {
  "n_repeats": 2,
  "img_count": 199
}
It does define clothing and other concepts for the CLIP text encoder, so in theory it could be used to draw any character, not just Sabia.
I might try that next for fun.
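If anyone wants to inspect a LoRA's tags themselves: metadata blocks like the `ss_tag_frequency` one above live in the `.safetensors` header, which starts with an 8-byte little-endian length followed by a JSON header whose `__metadata__` dict holds the training metadata as strings (the nested tag-frequency dict is itself JSON-encoded). A minimal reader using only the standard library:

```python
import json
import struct

def read_safetensors_metadata(path):
    """Return the __metadata__ dict from a .safetensors file's JSON header."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian uint64 giving the JSON header's length.
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```

Since values in `__metadata__` are strings, you need a second `json.loads` on fields like `ss_tag_frequency` to get the actual tag counts out.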