
Mod Lab Rats 2 Reformulate GenAI Mod Thread

x760917

New Member
Jul 19, 2020
3
4
79
hold up how'd you cut out the background? i get a big ass white background and nothing else for whatever reason
I'd love to help, but I'm a Chinese user and don't understand English well, so I hope this makes sense.
GenAI Config > Remove Background: On
This will remove the background.

I have the abg_extension installed, but I'm not sure if that's what's removing the background.
I'm using the Chinese version of the AI Builder (Lazy Pack), so I'm not sure if you understand it.
Huishi 绘世
The base of this software is sd-webui-aki-v4.10

I'm not sure what background removal plugins exist, so please research them on Google.

I only know of these two:

stable-diffusion-webui-rembg
abg_extension

The above is translated using Google Translate.

If you'd like, add me as a friend on Discord and I can try to help you.
ID: x760917
 
Last edited:

gimb

Newbie
Feb 17, 2018
16
8
179
I can't seem to get Appends to work with this at all. Like, I add in additional positive or negative prompts and nothing changes. I've even tried deleting the images and rerolling the seed and still nothing
Yeah, it somehow got messed up by me accidentally removing the appends. Here is my current version.
 

Yllarius

Newbie
Dec 13, 2017
58
128
143
Yeah, it somehow got messed up by me accidentally removing the appends. Here is my current version.
Appreciate it. It might've been user error as well; I realize now that the CFG is set to 1, so it might've also just been ignoring my prompts.

I got it to work by adding a decent weighting, so now I'm just playing around with things, as I haven't worked much with GenAI. For example, if I try to get something like armpit hair, it creates weirdness where they're always posed with their arms above their head facing the camera. Which is really jacked when they're posed backwards, snapping their spines in a 180 twist lmao.


I'm curious how deep you can go with the JSON files. For example, can I create character-specific prompts? I imagine it wouldn't work for non-unique characters, however.

Also, can you add prompts specifically for when someone is nude? Appends are fine, but annoying, because if you append a context-sensitive phrase, you not only have to remove it and add it back when appropriate, but it also causes every pose to appear as 'new' again.
 

idfkru

Newbie
Nov 6, 2024
41
38
86
why not upload a full and completed Lab Rats 2 tested AI version?
A couple of reasons:
1. There is no completed version of the mod out there; rather, a handful of versions are being actively developed.
2. This mod requires Stable Diffusion to be set up on your PC; it's not just another file that goes in the mods folder. With that also comes model selection: models are 6 GB+ each, and people's preferences vary widely across the different ones available. There's no point in forcing people to download a model they're just going to delete right away.
 

Arasutaru

Member
Feb 5, 2019
132
81
232
I wonder, aside from FC, does anyone know of other games that use AI like this? I like FC, but I never liked that it doesn't let us generate scene images, which this one kind of does. Though I do feel it uses the wrong prompts for certain scenes. Things prevent us from making better prompts per scene because they could conflict with other scenarios (e.g. paizuri, blowjob, kiss and hug: hug uses the kiss prompt and paizuri uses the blowjob prompt. I made a prompt in scripts for blowjobs and, during a paizuri scene, realized it was just the blowjob prompt). Honestly, if there's a way to add more prompts, I could massively improve all of my image generation for all scenarios, but I'm not sure how to code it in. Plus I'm still messing with the prompts to get them to work as well as I can because of that.
 

zalamander

Newbie
Oct 2, 2017
20
16
178
why not upload a full and completed Lab Rats 2 tested AI version?
Also, there are different "flavours" of Stable Diffusion, depending on your use case and the hardware you have.

- For AMD Radeon 6800/6900 and up, there's Automatic1111, which was recently updated. (It now requires Python 3.11, but it can be installed under Python 3.12.10 for some nice speedups, though that comes with some caveats.)
- For Nvidia 3xxx and up, SDNext is the better-supported solution, but some features won't work on it. It also requires some prior setup.
- For the tortured souls who want to live on the bleeding edge, there's SDForge. Nvidia users will get some quite nice speedups, at the cost of occasional black images; AMD users should stay away from it (too slow).

When in doubt, use Automatic1111.
 

gimb

Newbie
Feb 17, 2018
16
8
179
i just switched from reforge to comfy ui to compare again and yeah lol just gonna wait for comfyui support
There is currently a version on the Discord by The Fool (Tony Siu) that can be manually adapted to different workflows. In SDclient_ren.py, one has to adapt the following lines to the node IDs corresponding to your custom workflow; then any custom workflow can be used.

IMAGE_TO_B64_NODE_ID = "59"

workflow['128']['inputs']['base64_data'] = base64_image
workflow['6']['inputs']['text'] = positive_prompt
workflow['7']['inputs']['text'] = negative_prompt
workflow['36']['inputs']['seed'] = seed

The attached .rar comes with a custom workflow as an example. It uses a few custom nodes and should work right after installing those extensions.
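For anyone wiring up their own workflow, the adaptation described above boils down to loading the API-format workflow JSON and overwriting a few node inputs before queueing it. A minimal sketch; the node IDs are taken from the example above and will differ in your workflow, and the helper shape is an assumption, not the mod's exact code:

```python
import copy

def patch_workflow(template, base64_image, positive_prompt, negative_prompt, seed):
    """Overwrite the inputs of the nodes the mod cares about in an
    API-format ComfyUI workflow dict. Node IDs are workflow-specific."""
    # Work on a copy so the template dict can be reused between generations.
    workflow = copy.deepcopy(template)
    workflow['128']['inputs']['base64_data'] = base64_image  # source image node
    workflow['6']['inputs']['text'] = positive_prompt        # positive prompt node
    workflow['7']['inputs']['text'] = negative_prompt        # negative prompt node
    workflow['36']['inputs']['seed'] = seed                  # KSampler seed
    return workflow
```

To find your own node IDs, export the workflow with "Save (API Format)" in ComfyUI and look up the numeric keys of the matching node types in the JSON.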
 

mcmania

Member
Dec 4, 2016
126
906
354
There is currently a version on the Discord by The Fool (Tony Siu) that can be manually adapted to different workflows. In SDclient_ren.py, one has to adapt the following lines to the node IDs corresponding to your custom workflow; then any custom workflow can be used.

IMAGE_TO_B64_NODE_ID = "59"

workflow['128']['inputs']['base64_data'] = base64_image
workflow['6']['inputs']['text'] = positive_prompt
workflow['7']['inputs']['text'] = negative_prompt
workflow['36']['inputs']['seed'] = seed

The attached .rar comes with a custom workflow as an example. It uses a few custom nodes and should work right after installing those extensions.
Just for your information, that version is not from The Fool (Tony Siu), but from me. I recognize my comments (I'm from Belgium and a French speaker). I posted my GenAI version and my workflow just yesterday.

workflow: dict = load_workflow_template()
# 0: CheckpointLoaderSimple (SDXL model)
workflow['14']['inputs']['ckpt_name'] = sd_get_setting('sd_selected_model', 'mklanANIMEHentai_illusthentai233DMD2.safetensors')
# 1: ETN_LoadImageBase64 (source image as B64)
workflow['128']['inputs']['base64_data'] = base64_image
# 2: LoraTagLoader (positive prompt - entry point for <lora:..> tags)
workflow['6']['inputs']['text'] = positive_prompt
# 3: CLIPTextEncode (negative prompt)
workflow['7']['inputs']['text'] = negative_prompt
# 4: KSampler (generation parameters)
# Use the saved seed, or generate a new one if none is set
workflow['36']['inputs']['seed'] = seed


I recognize my logarithmic approach to the waiting loop:


def adaptive_waits(start=2.0, shrink_factor=0.75, floor=0.025, timeout=120):
    start_time = time.time()
    delay = start
    shrink_ratio = 1.0
    shrink_factor_not_changed = True
    while time.time() - start_time < timeout:
        yield delay
        shrink_ratio *= shrink_factor
        delay = max(delay * shrink_ratio, floor)
        if delay < 0.5 and shrink_factor_not_changed:
            shrink_factor_not_changed = False
            shrink_ratio = 0.9
            shrink_factor = 0.95

start_time = time.time()
for delay in adaptive_waits(timeout=sd_get_setting('sd_timeout_value', 120)):
    # print(f"Polling after {delay:.3f}s...")
    time.sleep(delay)  # regressive delay to avoid needlessly spamming Comfy

    history_url = f"{COMFYUI_API_BASE}:{self.port}/history/{prompt_id}"
    history_response = requests.get(history_url, timeout=10)

    if history_response.status_code == 200:
        history_data = history_response.json()
        job_data = history_data.get(prompt_id)
        # The job is finished once it shows up in the history
        if job_data:
            print(f"Polling time: {time.time() - start_time:.3f}s")

            outputs = job_data.get("outputs", {})
            node_data = outputs.get(IMAGE_TO_B64_NODE_ID, {})
            # print(f">>> output: {node_data}")

            # 6. Extract the Base64 image (Node 5)
            if "text" in node_data:
                print(f"ComfyUI job {prompt_id} finished successfully.")
                return node_data["text"][0]

            print(f"error: ComfyUI job {prompt_id} finished, but the B64 output (Node {IMAGE_TO_B64_NODE_ID}) could not be found. Check your workflow.")
            print(f"error: ComfyUI job_data {job_data}")
            self.show_connection_error()
            self.sd_generation_in_progress = False
            return None
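The pattern above (wait with shrinking delays, then check /history until the job shows up) is generic. A compact, self-contained variant, not the poster's exact code, that polls any predicate with geometrically shrinking delays:

```python
import time

def poll_until(predicate, start=2.0, shrink=0.75, floor=0.025, timeout=120.0):
    """Call predicate() with geometrically shrinking delays between
    attempts until it returns a truthy result or the timeout elapses.
    Returns the truthy result, or None on timeout."""
    deadline = time.monotonic() + timeout
    delay = start
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        # Never sleep past the deadline.
        time.sleep(min(delay, max(deadline - time.monotonic(), 0.0)))
        delay = max(delay * shrink, floor)
    return None
```

Starting with a long delay and shrinking it suits image generation well: the first few seconds are guaranteed to be busy, so early polls would be wasted, while tight polling near the end keeps the perceived latency low.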



My version also includes a new color helper that uses CIE Lab and HSL instead of the Euclidean distance between two RGB colors. CIE Lab is built around the perception of colors.
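The poster's helper isn't shown, but the idea can be sketched: convert sRGB to CIE Lab and measure Euclidean distance there (the CIE76 delta E), which tracks perceived color difference much better than raw RGB distance. A minimal stdlib sketch under standard sRGB/D65 assumptions, not the poster's code:

```python
import math

def srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIE Lab (D65 white point)."""
    # 1. sRGB -> linear RGB (undo gamma)
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # 2. linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    # 3. XYZ -> Lab, normalised by the D65 reference white
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(rgb1, rgb2):
    """Perceptual distance: Euclidean distance in Lab space (CIE76)."""
    l1, a1, b1 = srgb_to_lab(rgb1)
    l2, a2, b2 = srgb_to_lab(rgb2)
    return math.sqrt((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
```

With this, matching a character's hair or clothing color against a palette means picking the entry with the smallest delta E rather than the smallest RGB distance.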

Here is my original version with my workflow. On my RTX 4090 it takes 6 seconds (the first call takes 14 seconds).
 