Started Invoke process with PID 3756
[2025-10-13 15:24:19,368]::[InvokeAI]::INFO --> PyTorch CUDA memory allocator: cudaMallocAsync
[2025-10-13 15:24:46,665]::[InvokeAI]::INFO --> Using torch device: NVIDIA GeForce GTX 1650
[2025-10-13 15:24:47,736]::[InvokeAI]::INFO --> cuDNN version: 90701
[2025-10-13 15:24:52,462]::[InvokeAI]::INFO --> Patchmatch initialized
[2025-10-13 15:24:53,640]::[InvokeAI]::INFO --> InvokeAI version 6.8.1
[2025-10-13 15:24:53,640]::[InvokeAI]::INFO --> Root directory = E:\AI
[2025-10-13 15:24:53,644]::[InvokeAI]::INFO --> Initializing database at E:\AI\databases\invokeai.db
[2025-10-13 15:24:53,746]::[ModelManagerService]::INFO --> [MODEL CACHE] Using user-defined RAM cache size: 8.0 GB.
[2025-10-13 15:24:54,605]::[InvokeAI]::INFO --> Invoke running on … (Press CTRL+C to quit)
[2025-10-13 15:25:21,261]::[InvokeAI]::INFO --> Emptying model cache.
[2025-10-13 15:31:53,048]::[InvokeAI]::INFO --> Executing queue item 6, session 405de59e-5f29-4025-bbb8-a9d0c78e1dde
Fetching 17 files: 100%|██████████████████████████████████████████████████████| 17/17 [00:00<?, ?it/s]
Loading pipeline components...: 100%|███████████████████████████████████| 7/7 [06:26<00:00, 55.18s/it]
[2025-10-13 15:38:39,547]::[InvokeAI]::WARNING --> Loading 0.146484375 MB into VRAM, but only -798.25 MB were requested. This is the minimum set of weights in VRAM required to run the model.
[2025-10-13 15:38:39,976]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:text_encoder' (CLIPTextModel) onto cuda device in 0.69s. Total model size: 469.44MB, VRAM: 0.15MB (0.0%)
[2025-10-13 15:38:40,383]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:tokenizer' (CLIPTokenizer) onto cuda device in 0.01s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-10-13 15:38:53,995]::[InvokeAI]::WARNING --> Loading 0.634765625 MB into VRAM, but only -860.25 MB were requested. This is the minimum set of weights in VRAM required to run the model.
[2025-10-13 15:38:54,021]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:text_encoder_2' (CLIPTextModelWithProjection) onto cuda device in 0.05s. Total model size: 2649.92MB, VRAM: 0.63MB (0.0%)
[2025-10-13 15:38:54,030]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:tokenizer_2' (CLIPTokenizer) onto cuda device in 0.01s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-10-13 15:39:08,760]::[InvokeAI]::WARNING --> Loading 0.146484375 MB into VRAM, but only -860.25 MB were requested. This is the minimum set of weights in VRAM required to run the model.
[2025-10-13 15:39:08,768]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:text_encoder' (CLIPTextModel) onto cuda device in 0.03s. Total model size: 469.44MB, VRAM: 0.15MB (0.0%)
[2025-10-13 15:39:08,773]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-10-13 15:39:09,141]::[InvokeAI]::WARNING --> Loading 0.634765625 MB into VRAM, but only -860.25 MB were requested. This is the minimum set of weights in VRAM required to run the model.
[2025-10-13 15:39:09,159]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:text_encoder_2' (CLIPTextModelWithProjection) onto cuda device in 0.04s. Total model size: 2649.92MB, VRAM: 0.63MB (0.0%)
[2025-10-13 15:39:09,167]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:tokenizer_2' (CLIPTokenizer) onto cuda device in 0.01s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
Fetching 17 files: 100%|███████████████████████████████████████████| 17/17 [00:00<00:00, 26516.61it/s]
Loading pipeline components...: 100%|███████████████████████████████████| 7/7 [07:00<00:00, 60.07s/it]
[2025-10-13 15:46:46,903]::[InvokeAI]::WARNING --> Loading 1.904296875 MB into VRAM, but only -860.25 MB were requested. This is the minimum set of weights in VRAM required to run the model.
[2025-10-13 15:46:47,401]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:unet' (UNet2DConditionModel) onto cuda device in 0.69s. Total model size: 9794.10MB, VRAM: 1.90MB (0.0%)
Fetching 17 files: 100%|████████████████████████████████████████████| 17/17 [00:00<00:00, 6716.58it/s]
Loading pipeline components...: 100%|██████████████████████████████████| 7/7 [31:14<00:00, 267.79s/it]
[2025-10-13 16:18:49,050]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:scheduler' (EulerDiscreteScheduler) onto cuda device in 0.16s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
100%|█████████████████████████████████████████████████████████████████| 30/30 [08:39<00:00, 17.32s/it]
Fetching 17 files: 100%|██████████████████████████████████████████████████████| 17/17 [00:00<?, ?it/s]
Loading pipeline components...: 100%|███████████████████████████████████| 7/7 [08:38<00:00, 74.09s/it]
estimate_vae_working_memory_sd15_sdxl: 9489612800
[2025-10-13 16:37:07,551]::[InvokeAI]::WARNING --> Loading 0.0 MB into VRAM, but only -5814.25 MB were requested. This is the minimum set of weights in VRAM required to run the model.
[2025-10-13 16:37:07,558]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '58a740e8-df7b-41c4-9729-3d4c07950e39:vae' (AutoencoderKL) onto cuda device in 1.53s. Total model size: 319.11MB, VRAM: 0.00MB (0.0%)
[2025-10-13 16:37:23,653]::[InvokeAI]::INFO --> Graph stats: 405de59e-5f29-4025-bbb8-a9d0c78e1dde
Node                  Calls    Seconds   VRAM Used
sdxl_model_loader         1     0.048s      0.000G
sdxl_compel_prompt        2   438.360s      0.245G
collect                   2     0.094s      0.009G
string                    1     0.001s      0.009G
integer                   1     0.002s      0.009G
core_metadata             1     0.166s      0.009G
noise                     1     0.220s      0.009G
denoise_latents           1  2909.754s      0.678G
l2i                       1   580.524s      3.008G
img_resize                1     0.317s      0.008G
TOTAL GRAPH EXECUTION TIME: 3929.486s
TOTAL GRAPH WALL TIME: 3929.800s
RAM used by InvokeAI process: 0.05G (-0.297G)
RAM used to load models: 12.92G
VRAM in use: 0.008G
RAM cache statistics:
Model cache hits: 11
Model cache misses: 4
Models cached: 6
Models cleared from cache: 1
Cache high water mark: 9.56/0.00G