- Mar 28, 2020
- 651
- 1,576
So to clarify, I used the Stable Diffusion WebUI with NovelAI's model. It should be in the replies to the recent post. (Also, I tried the portrait pack and think the girls are really cute!)
I'll also post the link here:
Mega
Lord Lewdenhall, let me know if you'd rather post the link yourself, and I'll just delete this one.
Also, if you don't mind me asking, how did you generate the bonus scenes for your characters? After seeing Hongfire Survivor's portraits made with Stable Diffusion, I thought it would be cool to make some scenes for my characters and use them as concepts and examples for my own drawings, but I haven't had any luck figuring out how to word the prompts and tags, or even which images to use as examples. (Really new to this kind of stuff, haha)
There are also some characters in your pack I would use as inspiration for some more chosen designs I need to make for my pack; would you be fine with that? I'll properly credit you in my pack as well.
You won't get results as easily as I did without using NovelAI. They most likely trained it on hundreds of professional anime images from well-maintained websites.
I didn't scrub any metadata from the pictures, so if you're also using a WebUI, you can download the pics, drop them into img2img, and grab the prompts I used.
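If you're curious what's actually happening there: A1111-style WebUIs embed the generation settings in a PNG text chunk, conventionally under the key "parameters", and that survives as long as nobody strips the metadata. A minimal, stdlib-only sketch of the round-trip (the key name is the common WebUI convention, and `demo.png` plus the function names are just mine for illustration):

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def write_png_with_text(path, key, value):
    # Minimal PNG writer for demo purposes: a 1x1 grayscale image plus a
    # tEXt chunk carrying the prompt, standing in for a downloaded
    # AI-generated image.
    def chunk(ctype, data):
        return (struct.pack(">I", len(data)) + ctype + data +
                struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    idat = zlib.compress(b"\x00\x00")  # one filter byte + one pixel
    text = key.encode("latin-1") + b"\x00" + value.encode("latin-1")
    with open(path, "wb") as f:
        f.write(PNG_SIG + chunk(b"IHDR", ihdr) + chunk(b"tEXt", text) +
                chunk(b"IDAT", idat) + chunk(b"IEND", b""))

def read_png_text(path):
    # Walk the PNG chunk list and collect every tEXt entry; the WebUI's
    # settings show up under "parameters" if the metadata wasn't scrubbed.
    out = {}
    with open(path, "rb") as f:
        assert f.read(8) == PNG_SIG, "not a PNG file"
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I", head[:4])[0], head[4:]
            data = f.read(length)
            f.read(4)  # skip CRC
            if ctype == b"tEXt":
                key, _, val = data.partition(b"\x00")
                out[key.decode("latin-1")] = val.decode("latin-1")
            if ctype == b"IEND":
                break
    return out

write_png_with_text("demo.png", "parameters",
                    "masterpiece, best quality, portrait of ...")
print(read_png_text("demo.png")["parameters"])
```

This is also why re-uploads through sites that recompress images lose the prompt: the tEXt chunk gets dropped along the way.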
For example:
This was generated using Novel AI's model:
Prompts:
masterpiece, best quality, portrait of a young dark skinned woman in a stylish archer outfit with medium hair, yellow eyes, honey hair, face, medium breasts, braids, ((torso focus)), ((((drawn by Tetsuya Nomura))))
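For anyone wondering about the doubled and quadrupled parentheses in that prompt: in A1111-style WebUIs each nesting level multiplies the attention weight on those tokens, by about 1.1 per level by default. A tiny sketch of the arithmetic (the 1.1 base is the commonly documented WebUI default, so treat it as an assumption):

```python
def emphasis_weight(depth, base=1.1):
    # A1111-style emphasis: each '(' level multiplies attention by ~1.1,
    # so ((torso focus)) is ~1.21x and ((((...)))) is ~1.46x.
    return base ** depth

print(emphasis_weight(2))  # ((torso focus)), ~1.21
print(emphasis_weight(4))  # ((((drawn by ...)))), ~1.46
```

Past three or four levels the effect tends to distort the image more than it steers it, which is why you rarely see deeper nesting.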
In regular SD, with the same prompts plus "anime" added, I get this:
Depending on the model (checkpoint) used, each is biased towards certain content. SD is much better for realistic and more Western-style illustrations; NovelAI leans heavily towards anime.
EDIT: Here is a great guide on prompts from OpenAI (DALL-E 2's creators):
You must be registered to see the links