SDN111

Member
Sep 26, 2023
How does one get to tier 4 research without cheating?
There is no tier 4

We start at tier 0.
Tier 1 is just Research Director (Stephanie).
Tier 2 is again Research Director. Depending on whether you are on the dev branch or the current release, you either need the competition's bimbo formula bought, or that and, alternatively, the Nora quest (current release only).
Tier 3 is again Research Director, plus having 4 characters who are slutty enough and obedient enough to unlock it.

Nora has her own traits to give you. The current release has multiple different interview traits, which require you to interview characters fitting certain roles and stats to unlock them individually.

The dev build seems to have scrapped the previous traits with the new Nora story, and we now have unstable traits that come from Nora. They get better as you master them, and at max level they are generally better than any of the interview traits, aside from not having an equivalent to Natural Talent. Aunty's Potential also has no equivalent, but honestly it is not really needed.
 

ZarakiZ

Member
Nov 1, 2019
That is exactly the plan, as well as adding cheat menu story manipulation. Did we break save games (again)? Fast forward Jennifer's sluttiness path to step 4. Miss her Office Slut storyline? Rewind to the beginning and do it again. Also looking for drag and drop mod storylines that work with existing save games.
Any plans to add the option of customizing the player font color and UI color? Also, any plans for dubious-consent or non-consent content like rape? For a game with drugs this feels tame; I realize it's corruption, but still.
 

EscapeEvade

Member
Nov 23, 2017
That is exactly the plan, as well as adding cheat menu story manipulation. Did we break save games (again)? Fast forward Jennifer's sluttiness path to step 4. Miss her Office Slut storyline? Rewind to the beginning and do it again. Also looking for drag and drop mod storylines that work with existing save games.
That would be amazing to have in this game.
 

ycbelane

New Member
Jul 28, 2023
I have been playing with the GenAI mod posted above and tried modifying it to use img2img instead of text2img generation, based on the original images in the game. It turned out to be relatively simple to do and yielded pretty decent results. It's not perfect and required some prompt tweaking, but overall the quality of the graphics is improved IMO, while keeping a lot of the charm and character of the original. Below are some examples of original images and what Stable Diffusion was able to generate from them:

Abbigail1.jpg Electra1.jpg Jennifer1.jpg Lily1.jpg Lily2.jpg Lystra1.jpg Miyu1.jpg Sheyla1.jpg Charlotte1.jpg Sierra1.jpg

I feed the character name into the prompt, and it's surprisingly consistent in the face it generates for each character, while still producing a diversity of body types and faces. Using img2img also follows the outfits (and poses) in the game much more closely; text2img tended to generate its own outfits based on the prompt, which were not always very similar and kept changing between poses. Each approach has its advantages, of course; a well-set-up text2img can also be fun to play.

I have also been using SDXL Lightning models for this, which can generate usable results in as few as 4 steps, much faster than regular models. So far my best results came from the LCM sampler, 7 steps (4-8 works), 1.75 CFG (1-2 works), 0.5 denoising strength (vary this number to stick closer to the original, or to give SD more freedom to deviate) and 768x1152 resolution, plus ADetailer for face improvement.
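For anyone wiring this up themselves, the settings above translate into an a1111 `/sdapi/v1/img2img` request body roughly like this. This is a minimal sketch; `build_payload` is an illustrative helper name, not part of the GenAI mod itself:

```python
import base64

# Sketch of an img2img request body for a1111's /sdapi/v1/img2img endpoint,
# filled with the settings described above. build_payload is an illustrative
# helper, not the mod's actual code.
def build_payload(image_bytes, prompt):
    return {
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "prompt": prompt,
        "sampler_name": "LCM",        # LCM sampler for Lightning models
        "steps": 7,                   # 4-8 works
        "cfg_scale": 1.75,            # 1-2 works
        "denoising_strength": 0.5,    # lower sticks closer to the original image
        "width": 768,
        "height": 1152,
    }

# Sending it would look like this (needs a1111 running with --api):
#   requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
#                 json=build_payload(img, prompt))
```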

The Lightning models work really fast - I can generate an image in a few seconds, which makes this much more usable when playing the game. Regular SDXL models that require 20-30 steps take significantly longer, slowing down the flow. Some other models I had good luck with were , and

My focus also was mostly on realistic images - have not played much with more stylized models, anime, etc.

If anyone is interested in the modified GenAI mod files, I can post them. It's a bit hacked together right now (I was just messing with it for personal use), but the changes are relatively simple - anyone who got the a1111 API to work with the game and has basic Python knowledge could easily get this going as well.

It could probably be cleaned up and incorporated into the mod at some point (maybe the author of the mod can consider adding img2img as an option)
 
Sep 21, 2019
That is exactly the plan, as well as adding cheat menu story manipulation. Did we break save games (again)? Fast forward Jennifer's sluttiness path to step 4. Miss her Office Slut storyline? Rewind to the beginning and do it again. Also looking for drag and drop mod storylines that work with existing save games.
Damn! I haven't followed the git for a while. Now I am interested!
 

a1fox3

Loving Family Member's
Donor
Respected User
Aug 8, 2017
Will you be explaining why, or are we supposed to guess?
If you follow it, there are always bugs after a Git update, and it takes several weeks for them to get fixed.
When an update is ready it gets released here on F95; updates on Git are normally pre-beta and need time to get fixed.
 

AccidentalGadgeteer

Active Member
Oct 8, 2020
If you follow it, there are always bugs after a Git update, and it takes several weeks for them to get fixed.
When an update is ready it gets released here on F95; updates on Git are normally pre-beta and need time to get fixed.
I know that, but what I don't understand is what's wrong with the past few days specifically? Or why it needed to be shared in such a vague manner.
 

Theemanx

Newbie
Aug 9, 2023
I have been playing with the GenAI mod posted above, and tried to modify it to use img2img instead of text2img generation based on the original images in the game, …
Please do post them. Looks very interesting.
 

a1fox3

Loving Family Member's
Donor
Respected User
Aug 8, 2017
I know that, but what I don't understand is what's wrong with the past few days specifically? Or why it needed to be shared in such a vague manner.
What part do you not understand? "It takes several weeks to get fixed."
If you want to help test and find the bugs, go for it, but I always wait at least a month before I even think about it.
 

ycbelane

New Member
Jul 28, 2023
Please do post them. Looks very interesting.
Here is a snapshot of the img2img version of the GenAI mod (attached zip file), based on version 0.31 (I don't use Discord, so I'm not sure if that is the latest one). I added a few options in the config to switch between img2img and text2img, and to set the denoising strength. The settings I currently use with the DMD2 Real Dream model are these:

config.jpg

I also recommend playing with the denoising strength. A low value like 0.3 will stick more closely to the original graphics/clothing (ADetailer will still "fix" the face, which can be an improvement), while larger values yield a more creative interpretation of the scene/prompt, and possibly a more realistic image - but things can also get a bit weird, especially as you get closer to 1. I would stay below 0.7.

Note that the prompt generation is tweaked to work best with img2img, so if you turn img2img off, you might get different results. You can add things to the "Prompt Style" for additional prompting, or edit the python code for more custom prompt construction.
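As a rough illustration of that kind of custom prompt construction, a "Prompt Style" suffix can simply be appended during prompt building. Names like `build_prompt` and `PROMPT_STYLE` are made up here, not the mod's actual identifiers:

```python
# Illustrative sketch of prompt construction with a "Prompt Style" suffix.
# build_prompt and PROMPT_STYLE are invented names, not the mod's code.
PROMPT_STYLE = "photorealistic, detailed face"

def build_prompt(character, scene_tags):
    parts = [character] + list(scene_tags)
    if PROMPT_STYLE:
        parts.append(PROMPT_STYLE)  # extra prompting appended at the end
    return ", ".join(parts)

print(build_prompt("Jennifer", ["office", "standing"]))
# → Jennifer, office, standing, photorealistic, detailed face
```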

This is still mainly a proof of concept, not a polished product by any means - the prompting needs more work. Each model has its own optimal settings (sampler, steps, CFG) and handles prompting differently as well, so it's always worth playing around with those if you want different results, but at least this can be a starting point.

There are also situations the mod doesn't handle yet (displaying multiple people at once?), and things can get out of sync when lots of images are generated in rapid succession - the wrong image gets generated, or an image is skipped/lost. Sometimes generation just stops for me and requires a reload (Shift-R in dev mode), and occasionally the game crashes - not sure if that's due to bugs in this code or something else - so beware, and YMMV.

Also, model switching from the GenAI mod config screen might not currently work (I commented out some stuff that was giving me issues, but didn't have time to debug) - I use the a1111 Web GUI to switch the model, and just ignore the model in the mod config for now.

Edit: I have been refining the prompting to fix issues that crop up as I play, so the version attached to this post has been updated to V2. It's still not perfect, but definitely a lot better, so I recommend upgrading if you are playing along. I have also changed the parameters slightly from the above, and now use a CFG of 1.5 and a denoising strength of 0.6, which seems to work better overall - but feel free to experiment for best results (especially if using different models).
 

bowobble

New Member
Mar 18, 2021
Also, model switching from the GenAI mod config screen might not currently work (I commented out some stuff that was giving me issues, but didn't have time to debug) - I use the a1111 Web GUI to switch the model, and just ignore the model in the mod config for now.
That's a good idea. I like to keep my models sorted, so I put them in folders, but GenAI either escapes or doesn't escape the slash, so Stable Diffusion doesn't recognize the path to the model. Switching models in the WebUI is easier for me.
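If the problem really is backslash handling, one sketch of a fix would be normalizing the checkpoint path to forward slashes before sending it as `sd_model_checkpoint` to the API. This is a guess at the bug, not a verified fix for the mod, and `normalize_checkpoint` is an invented name:

```python
# Guess at the slash-handling bug: normalize a Windows-style checkpoint path
# to forward slashes before passing it to the a1111 API.
# normalize_checkpoint is an illustrative name, not the mod's actual code.
def normalize_checkpoint(path):
    return path.replace("\\", "/")

print(normalize_checkpoint(r"sdxl\lightning\realDream.safetensors"))
# → sdxl/lightning/realDream.safetensors
```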

Just a tip, especially if you create a lot of images: change the WebUI setting to save images as JPEG instead of PNG (WebUI: Settings -> Search -> "File format for images"). My GenAI folder is 400 MB, and that's from banging like 5 characters.
I'm using different triggers - more positions (anal piledriver, skull fuck, etc.), cum stains, changing the background to the current location - which causes an exponential increase in generated images. Not sure how many images your modification generates.
 

SMC85

Newbie
Apr 15, 2024
how do i open these in a way thats editable?
when i open with notepad the text just shows up as a bunch of "
¿ÿÑø;¯ÌÜÿí—Ë?àO²<e@?¸ ×wðuži±š" type symbols
There are two types of files bearing the same name: .rpyc and .rpy files. The .rpy files can be edited; when the game starts, they are compiled into the .rpyc files.

Edit:

I just read that there's a GenAI mod for different models? Where can I get that?
 

themagiman

Well-Known Member
Mar 3, 2018
I have been playing with the GenAI mod posted above, and tried to modify it to use img2img instead of text2img generation based on the original images in the game, …
what do the nude models look like?
 

ycbelane

New Member
Jul 28, 2023
That's a good idea. I like to keep my models sorted, so I put them in folders but GenAI either escapes or doesn't escape the slash, …
Yes, I keep my models in subfolders as well (there are so many... :) ) - so I ran into the same issue with the handling of the slash. It's probably not hard to fix, but it didn't seem important enough to spend time on yet, since I was already mainly testing different models in the WebUI (I copied a prompt from the game manually, then tried different models/settings and prompt variations there to speed things up - only once I had a good working setup did I move it back into the game for actual playing).

Good advice on the JPEG generation - I already keep autosaving off in the WebUI, but switching the format to JPEG there also affects the API results, and since the GenAI mod saves generations to the "generated_images" folder, it does save space (the JPEGs are about 1/10th the size).

It's definitely a tradeoff in how dynamic and detailed the prompt is - the more dynamic and detailed, the more generations, which takes more time and space. My version so far doesn't add much more than the original, so it's not too crazy, but I have been experimenting with the prompting to find what is worth including and what isn't. One fundamental issue is that different models react to prompts differently, and too much micro-tuning for a specific model falls apart when I switch to a different model. Handling this properly would probably require a more sophisticated framework where the prompt building could be customized per model - but obviously that's not trivial, so for now I kept the prompt simple and only addressed the most glaring things.
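A minimal sketch of that per-model idea: a lookup of prompt templates keyed by checkpoint name, so tuning for one model doesn't leak into another. Everything here (model keys, template text, the `prompt_for` name) is invented for illustration:

```python
# Sketch of a per-model prompt framework: each checkpoint gets its own
# template, with a fallback for models that haven't been tuned yet.
# Model keys and template text are made up for illustration.
TEMPLATES = {
    "realdream_dmd2": "{base}, photorealistic, soft lighting",
    "default": "{base}",
}

def prompt_for(model, base):
    template = TEMPLATES.get(model, TEMPLATES["default"])
    return template.format(base=base)

print(prompt_for("realdream_dmd2", "Jennifer, office"))
# → Jennifer, office, photorealistic, soft lighting
```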
 

Arguendo

New Member
Apr 22, 2023
Tried downloading and installing the latest beta version (2025.07-betaVTMod4.0.31) via the Git auto function. I keep getting an exception error at startup every time. I have tried re-running the .bat and removing the .rpyc files, but no change.
It looks like an attribute for highlighting text in the menu is missing somewhere. Know of a workaround for this?

Happens right after talking about meeting Stephanie at the lab.

 

ibnarabi

Member
May 21, 2021
Tried downloading and installing the latest Beta version (2025.07-betaVTMod4.0.31) via the Git auto function. I keep getting an exception error at start up every time. …
You need to roll back before that 'highlight_green' commit, it breaks things.
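For anyone unsure how to do that rollback, here is a self-contained sketch of the git steps. It builds a throwaway repo so it can run anywhere; in practice you would run only the last three commands inside the game's repo, and finding the commit by the string "highlight_green" in its message is an assumption about how that commit is named:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
# Stand-in history: a good commit, then the breaking "highlight_green" one.
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "base"
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "add highlight_green menu styling"
# Find the breaking commit by message, then reset to the commit before it.
bad=$(git log --grep=highlight_green --format=%H -n 1)
git reset --hard -q "${bad}^"
git log --format=%s -n 1
```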
 