DeviantFiend666
New Member
- Aug 18, 2025
I believe I used a Wan2.1 workflow that loops the last frame of a generated video back in as the first frame of the next video, so in theory it can generate forever. The idea was to add a prompt-travel node to each generation, so one scene gets done in something like 10 slices, with each video continuing from where the last one left off.

People are struggling to understand that even an RTX 3090 is considered an entry-level GPU for creating AI video; currently its limit is around 8 seconds at 1080p, and a 5090 will manage around 12 seconds at 1080p. It's entirely dependent on VRAM, which has to hold the hundreds or thousands of preceding frames to maintain consistency. The smaller the VRAM, the less you can generate and at lower resolution. To put it simply, it's a monstrous cost; even a server full of 5090s would kick out a minute at most. VRAM usage is NON-LINEAR, it compounds massively with duration.
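For anyone trying to picture the chaining part, here's a rough Python sketch of just the loop structure. The generate_clip function is a placeholder for whatever Wan2.1 image-to-video call you actually use (ComfyUI node graph, diffusers pipeline, etc.), not a real API:

```python
# Rough sketch of the "last frame -> first frame" chaining idea.
# generate_clip() is a stand-in for a real Wan2.1 i2v call; the dummy body
# just returns placeholder frames so the loop structure is clear.

def generate_clip(first_frame, prompt, num_frames=81):
    """Pretend this runs Wan2.1 i2v conditioned on first_frame and prompt."""
    return [f"frame({prompt!r}, {i})" for i in range(num_frames)]

# One prompt per segment -- this is the prompt-travel part of the workflow.
prompts = [
    "wide shot of a forest at dawn",
    "camera pushes in toward a cabin",
    "the cabin door opens, interior comes into view",
]

first_frame = None        # or a starting image generated elsewhere
all_frames = []

for prompt in prompts:
    clip = generate_clip(first_frame, prompt)
    all_frames.extend(clip)
    first_frame = clip[-1]  # last frame of this clip seeds the next segment

print(len(all_frames), "frames total across", len(prompts), "segments")
```

Each segment only ever conditions on a single frame plus its own prompt, which is why the chain can run indefinitely even though any single generation is capped by VRAM.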
Never tried it myself. I've only ever managed to get Wan2.1 to run locally using ComfyUI with shared memory, with a lot of the memory load off-loaded to my RAM instead of VRAM. It took me about 20 minutes to generate a 10-second video at 680x520. So I always generate an image with PonyXL first, then generate the video from that image.
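If you're going the diffusers route instead of ComfyUI, the RAM off-loading trick looks roughly like this. The model id is an assumption (check the actual repo name and whether your diffusers version ships a Wan pipeline); the offload calls themselves are standard diffusers features:

```python
# Minimal sketch, assuming the diffusers library and a Wan2.1 i2v checkpoint
# published in diffusers format. Repo name below is an assumption.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers",  # assumed model id, verify on the hub
    torch_dtype=torch.bfloat16,
)

# Keep model components in system RAM and move them to the GPU only when
# needed -- slower, but lets a large model run with much less VRAM.
pipe.enable_model_cpu_offload()

# For even tighter VRAM budgets (and an even bigger speed hit):
# pipe.enable_sequential_cpu_offload()
```

That slowdown is basically what the 20-minute generation time above is: the model spends a lot of the run shuffling weights between RAM and VRAM instead of computing.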
Haven't tried using Wan2.2 at all though.