sundevil1212

New Member
May 1, 2021
4
2
13
I'm a bit of an AI noob. I have a GeForce RTX 5060, and it appears that the version of Forge won't work with my card. Is there an alternative installation I can do? I was able to install it correctly and launch it; it just can't generate any images, and at startup I noticed a message saying it's incompatible with my version of PyTorch. I've been able to use ComfyUI with ease up to this point, but decided I'd try getting this game to work with the GenAI mod. Any help would be greatly appreciated.
 
  • Like
Reactions: doubledude123

Selek

Member
Aug 1, 2019
133
69
86
I have a GeForce RTX 5060, and it appears that the version of Forge won't work with my card.
I am an AI newb too. As I understand it, Forge is a fork of Stable Diffusion, not quite the same thing. I think GenAI requires Stable Diffusion itself. If you download GenAI from its Git repo (or find it in posts in this thread), you'll find an HTML file that explains how to set up Stable Diffusion and then the game itself. Essentially, you install SD; download a model and install that; revise the SD .bat file with command-line parameters that link it to LR2; run LR2; and click on the GenAI link at the bottom of LR2 to set it up.
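
I don't remember the exact flags the HTML guide lists, so treat this as a rough illustration rather than the real instructions, but the webui-user.bat edit it walks you through looks something like this (--api exposes the HTTP API the mod talks to, --listen lets other machines connect):

    @echo off
    set PYTHON=
    set GIT=
    set VENV_DIR=
    rem illustrative only -- check the GenAI HTML guide for the exact arguments it wants
    set COMMANDLINE_ARGS=--api --listen
    call webui.bat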

Incidentally, I dimly recall reading somewhere that 50-series GPUs might require an additional step for some AI generation? I don't know.
 

NaughtyAnon

Newbie
Apr 2, 2020
44
36
152
Incidentally, I dimly recall reading somewhere that 50-series GPUs might require an additional step for some AI generation? I don't know.
So, between SD Forge and AUTOMATIC1111's SD WebUI, there are issues with the NVIDIA Blackwell architecture (all 50xx cards) that are still being fixed. Both core repositories document a lot of steps you can try to get things working, but support is still technically on a dev branch.
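
If you want to try your luck anyway, the workaround people keep posting boils down to swapping the web UI's bundled PyTorch for a build compiled against CUDA 12.8 or newer, since the older wheels don't know about Blackwell. A very rough sketch, assuming a standard venv-based install on Windows (exact package versions change often, so double-check against the repos' issue threads):

    rem run from the web UI folder; this replaces PyTorch inside the web UI's own venv
    call venv\Scripts\activate.bat
    pip install --upgrade torch torchvision --index-url https://download.pytorch.org/whl/cu128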
 
  • Like
Reactions: doubledude123

sundevil1212

New Member
May 1, 2021
4
2
13
I am an AI newb too. As I understand it, Forge is a fork of Stable Diffusion, not quite the same thing. I think GenAI requires Stable Diffusion itself.
Okay, this is super helpful. I will go back to the drawing board, since it appears from the other reply that Forge might not be compatible with the 50-series yet (or not without complex alterations I'm not sure I'm skilled enough to make). I think if I can change ComfyUI to the right port and add in some of the other alterations, I might be able to get it to work. I'll update this if I'm able to get anywhere.
 

sundevil1212

New Member
May 1, 2021
4
2
13
Okay, this is super helpful. I will go back to the drawing board.
Okay, got it all working. First, I got Forge working by updating to CUDA 12.9 using this link, plus ChatGPT to help me uninstall and reinstall:

Second, I had the API issue. I had put everything in correctly, but I hadn't removed the "rem" that was commenting the line out. Once I got rid of that, it worked flawlessly. Thank you all for the help from previous pages!
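
In case anyone else trips over the same thing: "rem" is the batch comment keyword, so anything on that line after it never runs. The flag names below are just placeholders, but the before/after in webui-user.bat looks like this:

    :: before (commented out, so the API flags never get applied):
    rem set COMMANDLINE_ARGS=--api --listen

    :: after (leading "rem" deleted, so the line actually runs):
    set COMMANDLINE_ARGS=--api --listen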
 
  • Like
Reactions: NaughtyAnon

Selek

Member
Aug 1, 2019
133
69
86
Okay, got it all working.
Congrats! Glad to hear you got it working.

Now, is there a way to alter the size or placement of the images in LR2? The folks on Discord tell me to alter the GenAI code to do this, but I'm not a skilled enough coder for that, and I don't see a way to do it using the GenAI configuration menu in-game.
 

zalamander

Newbie
Oct 2, 2017
19
16
178
Guys, the GenAI mod only works with the AUTOMATIC1111 version of Stable Diffusion. Forge changed the API a bit, so certain things stop working at times, and SDNext does things in a slightly different way.
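
To illustrate what I mean (my own example, not something from the mod's docs): with --api enabled, the mod talks to endpoints like A1111's /sdapi/v1/txt2img, so forks that rename fields or change the defaults on those endpoints are exactly what breaks it. You can poke the API yourself from a command prompt:

    rem default port is 7860; the response is JSON with the image returned as base64 in "images"
    curl -s -X POST "http://127.0.0.1:7860/sdapi/v1/txt2img" -H "Content-Type: application/json" -d "{\"prompt\": \"a quick test\", \"steps\": 20}"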

Also, some clothing confuses the AI generator so black screens or monsters are to be expected. I'm attaching a bastardized version (a.k.a. modded by me with some fixes for that). Don't expect any support for it.

V1 is the clothes bugfix only. Some clothes work fine, others work randomly, and some don't work at all (the cop dress doesn't); dresses in particular are a difficult issue, but the original has the same problems. The logic is not consistent and, from a coding standpoint, is kinda hard to follow, so no support for it from me. Maybe some day I'll do my own version of this, but not now. If in doubt, just try this one. NO SUPPORT!!

V2 has two added requirements: the ADetailer plugin for AUTOMATIC1111 together with its model "hand_yolov8s.pt" (usually, installing the plugin from the UI's extension browser will install all of the ADetailer models), and the Yogi textual inversions (Stable_Yogis_PDXL_Negatives2-neg, Stable_Yogis_PDXL_Positives2) installed in the SD installation. The ADetailer part is an attempt to fix the hand issues; it's not perfect, though in most cases it works OK. ONLY FOR PEOPLE WHO KNOW WHAT THEY'RE DOING. AND NO SUPPORT!! If in doubt, use V1 or just ignore this post entirely and move on ;).

Hopefully some hero will redo the game's models one day. One can hope ;)
 
Last edited:
May 23, 2023
266
184
109
Also, some clothing confuses the AI generator so black screens or monsters are to be expected.
I find it's all over the shop depending on what models you use. A good example is that few models know what 'Capris' are and so render the girls wearing them as bottomless.

I also find most models will spit the dummy beyond a certain level of prompt complexity, giving you monsters or splatterings of colour only Jackson Pollock would see as women.

I've now tried to customise AnotherMike's GenAI to several different models. StableYogi ones tend to give the best sex positions (I'd recommend realismByStableYogi_ponyV3VAE and his Porncraft models) but tend to fall short on clothes, appearances of women and backgrounds. Cyberrealistic ones are fairly good all round but are more likely to generate monsters. Both have their own sets of positive and negative prompt files (textual inversions) but they don't fix everything.

ADetailer can fix a few minor glitches (e.g. hands) and can help a lot with faces - especially when they're upside down in the piledriver position - but I'm finding it more than doubles image generation time, so when actually playing the game instead of messing with images you end up clicking through most of them and missing it. Ten seconds per image is about as far as my patience stretches and I like to keep it close to five seconds.

When I get a bit of time I'm going to start training a model to specific prompts I'll use to customise the GenAI code, but that's a fairly long term project.

Here's the tweak of AnotherMike's prompt builder I'm currently trialling, along with its prompt embeddings. Right now I'm trying to make 'kneeling oral' work consistently. I'm using the 'Euler a' sampler with 25 steps, CFG 9 (!) and 0.35 denoising, with both img2img and basic prompts turned off. Mostly I leave upscaling and ADetailer off but sometimes switch them on for sex scenes.
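
If you'd rather reproduce those settings through the API than the UI, they map roughly onto the standard A1111 txt2img payload fields below (illustrative only; GenAI builds its own payload, and denoising_strength only really matters for img2img/hires passes):

    curl -s -X POST "http://127.0.0.1:7860/sdapi/v1/txt2img" -H "Content-Type: application/json" -d "{\"prompt\": \"a quick test\", \"sampler_name\": \"Euler a\", \"steps\": 25, \"cfg_scale\": 9, \"denoising_strength\": 0.35}"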
 
Last edited:

zalamander

Newbie
Oct 2, 2017
19
16
178
True. ADetailer adds more time (though I cheat a lot here: I game on a notebook, but I generate the images on a desktop PC). SDNext has a better detailer, but GenAI is not compatible with it. For clothing and places (yup, locations can also bork the generation) it's better to keep things simple, but the game models are anything but that.

Maybe sometime soon I'll code a better parser for generating images, but no promises though.
 
May 23, 2023
266
184
109
Maybe sometime soon I'll code a better parser for generating images, but no promises though.
If the idea is just to simplify the clothing prompts, fiddling with the XMLs in the wardrobes directory would probably be easier and more effective, if a bit tedious.
 

doubledude123

Newbie
Sep 7, 2021
21
40
112
My install of SD won't let me install extensions from the WebUI. What do I have to do here? I tried adding --enable-extensions but it doesn't work.
 
May 23, 2023
266
184
109
My install of SD won't let me install extensions from the WebUI. What do I have to do here? I tried adding --enable-extensions but it doesn't work.
There are a few things that can do that, including a corrupted SD install (very easy to do, with various addons and updates getting out of sync), but the first thing to check is webui-user.bat. If it's got "--listen" in COMMANDLINE_ARGS (as per AnotherMike's installation instructions), you need to delete "--listen" and restart the webui.

Note that "--listen" enables network connections to the webui server, so if you're running the game from a laptop connected to a desktop server via wi-fi or something you'll need to reinstate it after you've installed the extensions. Preventing SD changes while "--listen" is enabled is a security measure to stop people hacking into your server via wi-fi.
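
So, roughly (flags other than --listen are just examples):

    :: while installing extensions -- no "--listen", so the extensions tab works:
    set COMMANDLINE_ARGS=--api
    :: afterwards, put "--listen" back if other machines need to reach the server:
    set COMMANDLINE_ARGS=--api --listen

If you really need network access and extension installs at the same time, A1111 also has an --enable-insecure-extension-access flag that overrides the block, but as the name suggests, only use it on a network you trust.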
 
  • Like
Reactions: doubledude123

Selek

Member
Aug 1, 2019
133
69
86
I've now tried to customise AnotherMike's GenAI to several different models.
Do you try different model checkpoints in the same saved game? This doesn't confuse GenAI or anything? So far I've kept each model confined to its own saved game. I've tried DreamShaper XL (my favorite so far), Gonzales (pretty good, and defaults to a "tall" rectangle that fits nicely in LR2), and Lifelike Diffusion (my least favorite -- maybe too lifelike, as the models just didn't seem very pretty to me). I still intend to try Juggernaut XL, which is recommended by the dev of GenAI, but it recommends 30-40 steps, which seems like it might be slow for the game. I'll try it eventually. For now I'm really enjoying DreamShaper XL.

One general question: the GenAI mods say we can click 'generate new image' every time we get the message that the model has changed. Do you all click 'generate new image' every time? Should I? Sometimes this seems unnecessary; for example, on the very last image before an interaction ends, I'll often see 'the model has changed,' but if I click to regenerate, the scene ends before I can see the new version. On other occasions 'the model has changed' appears after the model has, say, turned her back to me, using images I've already generated in the past. I'm often reluctant to generate new images in these cases, when the game is already working as expected.

A good example is that few models know what 'Capris' are and so render the girls wearing them as bottomless.

I also find most models will spit the dummy beyond a certain level of prompt complexity, giving you monsters or splatterings of colour only Jackson Pollock would see as women.
I haven't tried capris yet, but I have started building custom outfits, and I'm pleasantly surprised how well GenAI handles them so far. I've seen only the occasional monster, and usually regenerating fixes it. Sometimes I have to append a few positive or negative prompts.

Incidentally, to answer my own question earlier: if you click on the image in-game, it enlarges, overlaying the game. The GenAI documentation says this clearly, and I just missed it!
 
May 23, 2023
266
184
109
Do you try different model checkpoints in the same saved game? This doesn't confuse GenAI or anything?
Yeah, I do. It confuses GenAI a bit in that it sometimes throws up old images from the previous model but you can use the 'X' in the 'character has changed' interface to stop them coming up (though it doesn't work perfectly). The main problem I find with switching up models in the middle of a game is bloat of the 'generated_images' folder. If you just delete the images every time you change models you get 'image not found' messages which are themselves images that repopulate the folder. 'X' doesn't delete the images, just the hash causing them to be redisplayed when the prompts match.

it recommends 30-40 steps, which seems like it might be slow for the game
One of the criteria I use for models/samplers is whether they produce reasonable images with 20-25 steps. It's interesting to go all out to produce the best images but ultimately I'm after an entertaining game experience and waiting over 5 seconds for each image degrades that. If I had a more powerful rig I'd probably use 30+ steps every time.

I like the general image quality and response rate of Dreamshaper and Gonzales models but find them limited in portraying sex positions, which is why I lean more to StableYogi and Cyberrealistic. I tried Juggernaut but abandoned it fairly quickly, though I no longer remember why. I've tried so many models my memories of them are blurred.

When I start training a model specifically for LR2 I'll probably start with Dreamshaper or something similar and train it up with sex position images, but I've still got a fair bit of trial and error to go before I get that far.

One general question: the GenAI mods say we can click 'generate new image' every time we get the message that the model has changed. Do you all click 'generate new image' every time?
When I'm playing I usually leave 'Auto Generate' on so I don't have to worry about it, but when I'm micromanaging image changes for modding and testing I switch it off and click "+" every time I see 'character has changed'. Auto doesn't update every time the image changes.

But yeah, the sensing and generating of new images is glitchy - as is reusing previously generated images - and something I plan to fiddle with later.
 
May 23, 2023
266
184
109
A bit of a glitch.

After going home from the Downtown Distillery with Stephanie and Nora on a Saturday night, there's the sex and trance hypnotism, after which you go home to bed and the girls stay at Nora's. Except they don't.

You go home, but not to bed. The day doesn't end. The girls return to the bar where you can choose to join them.

Here's Nora back at the bar after the sex & drugs party at her place. You'd expect a little more public decorum from a senior university professor.
 
Last edited:

Selek

Member
Aug 1, 2019
133
69
86
One of the criteria I use for models/samplers is whether they produce reasonable images with 20-25 steps. It's interesting to go all out to produce the best images but ultimately I'm after an entertaining game experience and waiting over 5 seconds for each image degrades that. If I had a more powerful rig I'd probably use 30+ steps every time.
I have a pretty new rig (96GB RAM, NVIDIA RTX 4090 with 24GB of VRAM), but even just 6 steps takes about 5-6 seconds using Dreamshaper. (The dev recommends 4 steps by default.) Is there any downside to increasing the number of steps, other than wait times? Do more steps help prevent things like extra limbs or deformed hands?

When I'm playing I usually leave 'Auto Generate' on so I don't have to worry about it, but when I'm micromanaging image changes for modding and testing I switch it off and click "+" every time I see 'character has changed'. Auto doesn't update every time the image changes.
I've started leaving Auto Generate on, and I like it -- I sometimes miss new images when I'm playing without it. But you're right, sometimes Auto misses changes in the character, in which case I'm inclined to generate a new image. Especially in sex scenes, heh.
 
May 23, 2023
266
184
109
I have a pretty new rig (96GB RAM, NVIDIA RTX 4090 with 24GB of VRAM), but even just 6 steps takes about 5-6 seconds using Dreamshaper.
My rig's fairly beefy but not as much as yours - i5-14600k processor, 32GB RAM, RTX 4070 Super with 12GB - but I'm still getting 20-25 steps in around 5 secs with most model-sampler combos and ADetailer and Upscaling turned off. Have you got xformers installed and activated in COMMANDLINE_ARGS?
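
For reference, enabling it is just a flag in webui-user.bat (the other argument here is only an example):

    set COMMANDLINE_ARGS=--xformers --api
    rem the web UI should fetch the xformers package itself the first time it starts with this flag;
    rem if it doesn't, activate the venv and pip install xformers manually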
 
Last edited:
May 23, 2023
266
184
109
Does anyone know WTF those strings stitched into the corners of her mouth are?

They're too well matched to the rest of the pic to be artifacts and there's nothing in the prompts to explain them.

Seems Cyberrealistic models are trained on some pretty exotic images.
 
  • Haha
Reactions: doubledude123