In Dio we trust! Let us pray..
"Did I ever tell you about the time I made a $mill selling expired cheap Chinese eucalyptus to the Koala Bros?"
Very nice! Loving the ReV Animated checkpoint.
So, serious question: why is there no dev using AI to make a game yet?
Let's say I have an idea for a game and I need someone with Ren'Py dev experience - who should I talk to?
Right now the combination of Daz + Photoshop + SD is the best, in my opinion. ControlNet is interesting, but I find it isn't particularly useful in my workflow; there are better ways of doing things. A Daz render plus SD with a low denoising strength, inside Photoshop, provides far superior results in terms of consistency with the pose, facial features and, of course, hands. However, right now editing and compositing are still required to produce a really good-looking result.
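If anyone wants to try that idea outside Photoshop, here's a rough sketch of the same low-strength img2img pass using the diffusers library - the model ID, file names and strength value are just placeholders, not my exact settings:

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Low-strength img2img over a Daz render: SD repaints surface detail
# while the pose, face and hands from the render stay put.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

daz_render = Image.open("daz_render.png").convert("RGB").resize((512, 768))
result = pipe(
    prompt="photo of a young woman in a living room, soft lighting, detailed skin",
    image=daz_render,
    strength=0.3,          # low denoising strength = stay close to the Daz render
    guidance_scale=7.0,
    num_inference_steps=30,
).images[0]
result.save("sd_pass.png")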
I'm afraid I can't help with Ren'Py coding - maybe take a look in the Recruitment forum.
The main issue I see with using AI for a game is characters. To create a character you need consistency - you'd have to generate hundreds of images of a proto-character, train a LoRA or TI on the best 20-30 of those, add one costume at a time, perhaps training again at each stage.
Then you'd need to get the poses right (Controlnet would help here but isn't a panacea) and generate dozens of images for each possible pose. Or use DAZ to generate a base image and img2img it using the trained character model.
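For the pose side of that, something along these lines works with diffusers plus the openpose ControlNet - the LoRA file, trigger word and reference image below are made up for the example:

import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from controlnet_aux import OpenposeDetector

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
# Hypothetical character LoRA trained on the curated proto-character images
pipe.load_lora_weights("character_lora.safetensors")

# Pull an openpose skeleton from a Daz render (or any reference image)
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = openpose(Image.open("daz_pose_reference.png"))

image = pipe(
    "protochar1 woman, standing in a kitchen, casual clothes",  # 'protochar1' = LoRA trigger word
    image=pose,
    num_inference_steps=30,
).images[0]
image.save("posed_character.png")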
For a porn game you'd also need interaction between characters, which is where it all gets distinctly 'eldritch horror' as they meld together.
You'd probably need to generate them separately, then either composite them in-game as sprites, or composite them in the AI tool and run img2img over the result.
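Roughly what I mean by composite-then-img2img, as a sketch - file names and coordinates are placeholders, and for anything serious you'd want masks/inpainting rather than a whole-image pass:

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Paste two separately generated character cut-outs onto a background,
# then run a light img2img pass so lighting and edges blend together.
background = Image.open("bedroom_bg.png").convert("RGB").resize((768, 512))
char_a = Image.open("char_a_cutout.png").convert("RGBA")
char_b = Image.open("char_b_cutout.png").convert("RGBA")
background.paste(char_a, (80, 120), char_a)
background.paste(char_b, (400, 140), char_b)

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
blended = pipe(
    prompt="two people standing together in a bedroom, soft light",
    image=background,
    strength=0.25,   # just enough to unify lighting without melting the characters together
).images[0]
blended.save("composited_scene.png")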
It's a lot of work, with a workflow that would need constant tweaking - settings that would work great for one character would create garbage for another etc.
Someone will crack it (and soon) and they'll be showered with Patreon cash.
If you're going to make a real go of it, I wish you the very best of luck!
My bet is on Kirill Repin Art being first - he's got the AI skill, the coding knowledge, the artistic ability and the ambition. Unfortunately he's in Russia which is currently hampering his ability to monetise anything.
A while ago someone did make a game with AI-generated images - it's called AI simDate, made in Ren'Py.
It's that very question that brought me back here today. I'd say the two issues are consistency and, depending on your models and prompts, the propensity to venture off into ** territory. For instance, the attached images were based on the brainstorming of a game idea (or the aftermath of it) I'm considering resurrecting, and were made with the following prompts:
"Lets-a Go!!!"
I reveal one secret from my work - use it! The AI is trained to 'know' everything. When you prompt 'woman Britney Spears', it checks its memory, finds that blonde woman in the training data - face shape, nose, eyes, body size and shape, etc. - and draws a blonde woman who looks like that person. But when you prompt 'woman Sharaklata Abarubas', it looks at the data set and finds nothing (such a woman does not exist), so it falls back to plan B: it searches for and compares people with similar names, tries to invent a race, look, etc. that fits the NAME, and finally draws this invented, imagined woman. The model uses the same neuron path to invent this persona every time (90%+). So if you like her, save her name and keep generating images of your own new woman!
While that approach may have some effect, it's not necessarily as precise as it could be, since there's been no positive reinforcement of the model or textual embedding vectors. A likely better approach would be to render out a set of images that consistently depict the characteristics you want to encode, and then reinforce those characteristics through training. Koiboi posted a pretty good overview video on YouTube covering this.
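For anyone wanting to test either idea, a quick sketch with diffusers - the invented name, seeds and the commented-out embedding/LoRA file names are placeholders:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Option A: the made-up-name trick - the same invented name tends to pull a
# similar face out of a given checkpoint across seeds.
# Option B (what the training suggestion amounts to): load an embedding/LoRA
# trained on your curated renders instead.
# pipe.load_textual_inversion("my_character_embedding.pt", token="sharaklata")
# pipe.load_lora_weights("my_character_lora.safetensors")

prompt = "portrait photo of woman Sharaklata Abarubas, detailed face, studio lighting"
for seed in (1, 2, 3, 4):
    generator = torch.Generator("cuda").manual_seed(seed)
    pipe(prompt, generator=generator).images[0].save(f"consistency_check_{seed}.png")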
Bang on!
Edit: Attached a few more renders, fully aware that there are some technical flaws in some of them, still working on prompt engineering.
Those areolae are perfect!
I know, right! I tried chaining upscales using a cascade approach, 512 > 768 > 1024 > 1536, and the details are popping.
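That cascade is easy to script if anyone wants it outside the UI - rough sketch only; the square sizes, strength and file names are example values:

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("base_512.png").convert("RGB")
prompt = "same prompt as the base render, highly detailed skin"

# 512 -> 768 -> 1024 -> 1536: upscale one step at a time and let img2img
# re-add detail at each resolution instead of jumping straight to the top.
for size in (768, 1024, 1536):
    image = image.resize((size, size), Image.LANCZOS)
    image = pipe(prompt, image=image, strength=0.35, num_inference_steps=30).images[0]
image.save("upscaled_1536.png")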