There is currently a version on the Discord by The Fool (Tony Siu) that can be manually adapted to different workflows. In SDclient_ren.py, adapt the following lines to the node IDs used in your custom workflow, and then any custom workflow can be used.
IMAGE_TO_B64_NODE_ID = "59"
workflow['128']['inputs']['base64_data'] = base64_image
workflow['6']['inputs']['text'] = positive_prompt
workflow['7']['inputs']['text'] = negative_prompt
workflow['36']['inputs']['seed'] = seed
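If you are not sure which IDs your own workflow uses, you can export the workflow from ComfyUI in API format ("Save (API Format)", available with dev mode enabled) and list the nodes. A minimal sketch, assuming the export was saved as workflow_api.json (the filename is just an example):

import json

# An API-format workflow is a plain dict keyed by node id; each entry
# carries the node's "class_type" and its "inputs"
with open('workflow_api.json') as f:
    workflow = json.load(f)

for node_id, node in workflow.items():
    print(node_id, node.get('class_type'))

Match the printed class types (CheckpointLoaderSimple, KSampler, ...) against the IDs in the lines above.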
The attached .rar comes with a custom workflow as an example. It uses a few custom nodes and should work right after installing those custom extensions.
Just for your information, that version is not from The Fool (Tony Siu) but from me. I recognize my comments (I am from Belgium and a French speaker). I posted my GenAI version and my workflow just yesterday.
workflow: dict = load_workflow_template()
# 0: CheckpointLoaderSimple (SDXL model)
workflow['14']['inputs']['ckpt_name'] = sd_get_setting('sd_selected_model', 'mklanANIMEHentai_illusthentai233DMD2.safetensors')
# 1: ETN_LoadImageBase64 (Base64 source image)
workflow['128']['inputs']['base64_data'] = base64_image
# 2: LoraTagLoader (positive prompt - entry point for <lora:..> tags)
workflow['6']['inputs']['text'] = positive_prompt
# 3: CLIPTextEncode (negative prompt - contract confirmation)
workflow['7']['inputs']['text'] = negative_prompt
# 4: KSampler (generation parameters)
# Use the saved seed, or generate a new one if none is set
workflow['36']['inputs']['seed'] = seed
I recognize my logarithmic approach to the waiting loop:
import time

def adaptive_waits(start=2.0, shrink_factor=0.75, floor=0.025, timeout=120):
    # Yield ever-shorter polling delays: the delay shrinks geometrically
    # (and the shrink itself accelerates) until it drops below 0.5s, after
    # which gentler factors take over; the generator stops after `timeout`.
    start_time = time.time()
    delay = start
    shrink_ratio = 1.0
    shrink_factor_not_changed = True
    while time.time() - start_time < timeout:
        yield delay
        shrink_ratio *= shrink_factor
        delay = max(delay * shrink_ratio, floor)
        if delay < 0.5 and shrink_factor_not_changed:
            shrink_factor_not_changed = False
            shrink_ratio = 0.9
            shrink_factor = 0.95
start_time = time.time()
for delay in adaptive_waits(timeout=sd_get_setting('sd_timeout_value', 120)):
    # print(f"Polling after {delay:.3f}s...")
    time.sleep(delay)  # regressive delay, to avoid spamming Comfy needlessly
    history_url = f"{COMFYUI_API_BASE}:{self.port}/history/{prompt_id}"
    history_response = requests.get(history_url, timeout=10)
    if history_response.status_code == 200:
        history_data = history_response.json()
        job_data = history_data.get(prompt_id)
        # The job is finished once it shows up in the history
        if job_data:
            print(f"Polling time: {time.time() - start_time:.3f}s")
            outputs = job_data.get("outputs", {})
            node_data = outputs.get(IMAGE_TO_B64_NODE_ID, {})
            # print(f">>> output: {node_data}")
            # 6. Extract the Base64 image (Node 5)
            if "text" in node_data:
                print(f"ComfyUI job {prompt_id} completed successfully.")
                return node_data["text"][0]
            print(f"error: ComfyUI job {prompt_id} finished, but the B64 output (Node {IMAGE_TO_B64_NODE_ID}) is missing. Check your workflow.")
            print(f"error: ComfyUI job_data {job_data}")
            self.show_connection_error()
            self.sd_generation_in_progress = False
            return None
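Just to illustrate the schedule, a small standalone sketch (not part of the client code) that prints the first few delays:

import time

total = 0.0
for i, delay in enumerate(adaptive_waits(timeout=10)):
    total += delay
    print(f"poll {i}: wait {delay:.3f}s (cumulative {total:.2f}s)")
    time.sleep(delay)
    if i >= 9:  # only show the first ten polls
        break

The first delays shrink quickly (2.0s, 1.5s, ...), and once they drop below 0.5s the gentler factors take over, so the loop polls frequently near the end of a job without spamming the server.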
My version also includes a new color helper that uses CIE Lab and HSL instead of a plain Euclidean distance between two RGB colors. CIE Lab is built around human color perception, so distances in that space better match how different two colors actually look.
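The core idea can be sketched like this (function names here are illustrative, not the ones from my file): convert sRGB to CIE Lab via linear RGB and XYZ with the D65 white point, then take the Euclidean distance in Lab space, i.e. the classic CIE76 Delta E:

import math

def srgb_to_lab(rgb):
    # sRGB 0-255 -> linear RGB 0-1 (undo the gamma curve)
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in (v / 255.0 for v in rgb)]
    # linear RGB -> XYZ (sRGB matrix, D65 white)
    x = 0.4124 * lin[0] + 0.3576 * lin[1] + 0.1805 * lin[2]
    y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
    z = 0.0193 * lin[0] + 0.1192 * lin[1] + 0.9505 * lin[2]
    # XYZ -> Lab, normalized by the D65 reference white
    def f(t):
        return t ** (1.0 / 3.0) if t > 216.0 / 24389.0 else (24389.0 / 27.0 * t + 16.0) / 116.0
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def delta_e76(rgb1, rgb2):
    # Perceptual distance: Euclidean distance in Lab space (CIE76)
    l1, a1, b1 = srgb_to_lab(rgb1)
    l2, a2, b2 = srgb_to_lab(rgb2)
    return math.sqrt((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)

# Example: pure red vs. a slightly darker red
print(delta_e76((255, 0, 0), (200, 0, 0)))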
Here is my original version with my workflow. On my RTX 4090 a generation takes 6 seconds (the first call takes 14 seconds).