Collection Video Affect3D - miro Collection [2025-12-17] [Affect3D/affect3d_miro/affect3dx/miro]

4.00 star(s) 6 Votes

Evan Hunt

New Member
Apr 8, 2018
3
0
11
I have seen some AI clips featuring a model with "Miro" printed on her black top and a tattoo over her lower ribs. I haven't found it on this site. Has anyone else seen this, or does anyone know what it's about?
[Attached image: 1764599559934.png]
 

xxxDreamZzz

Member
Oct 29, 2022
195
339
214
Hey, thanks, xDreamZ.

Didn't know that site, or the other artists there, who deserve acknowledgment imo.
You're welcome. There's still a lot of backlash happening because we're still in the early days of using AI as a tool, rather than as a pretty-picture generator with obvious failures. But the tech is moving exceptionally fast, and I'm positive that within the next year you'll see more and more artists start to use AI in their animation workflows. Not 100% AI, but as a tool. Some already do besides Miro, and the results are looking very good. Gooners rejoice!
 

Mephiston87

New Member
Feb 26, 2022
1
2
13
People are struggling to understand that even an RTX 3090 is considered an entry-level GPU for creating AI video. Currently its limit is around 8 seconds at 1080p; a 5090 manages around 12 seconds at 1080p. It's entirely dependent on VRAM, which holds the hundreds or thousands of preceding frames to maintain consistency: the smaller the VRAM, the less you can generate and at lower resolution. To put it simply, it's a monstrous cost; even a server full of 5090s would put out a minute at most. VRAM usage is NON-LINEAR: it compounds massively with duration.
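The non-linear part is plausible if the model attends across all frames at once: self-attention cost grows with the square of the token count. A rough back-of-envelope sketch of that effect (all constants here are illustrative assumptions, not measured from any particular model):

```python
# Back-of-envelope: why naive full-attention video generation scales
# badly with duration. All constants are illustrative assumptions.

def attention_matrix_gib(seconds, fps=16, tokens_per_frame=1024, bytes_per_val=2):
    """Memory for one full self-attention score matrix over all frames."""
    tokens = seconds * fps * tokens_per_frame       # total sequence length
    return tokens * tokens * bytes_per_val / 2**30  # T x T matrix, fp16

# Doubling the clip length quadruples the attention matrix:
short = attention_matrix_gib(4)
long = attention_matrix_gib(8)
print(round(long / short))  # -> 4
```

In practice, windowed attention, tiling, and offloading soften this curve considerably, which is why longer workflows exist at all, but the quadratic term is why duration bites harder than resolution alone.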
 

xxxDreamZzz

Member
Oct 29, 2022
195
339
214
Mephiston87 said: "People are struggling to understand that even an RTX 3090 is considered an entry-level GPU for creating AI video ... VRAM usage is NON-LINEAR, it compounds massively with duration."
You're partially right, but mostly wrong. Currently 30 seconds is common. Wan2.2 or Z-image start/end frames hold character consistency and shot selection. You plan your short video with a storyboard, generate the shots, and then glue them together with traditional software like DaVinci Resolve or Premiere, the same as in 3D animation. You didn't think that 3D (or any video) was one long render, did you?
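The glue step doesn't even need an NLE; a minimal sketch using ffmpeg's concat demuxer (the shot file names are hypothetical examples, and the final command is only printed, not executed):

```python
# Stitch storyboarded AI-generated shots into one clip with ffmpeg's
# concat demuxer. Shot file names are hypothetical examples.
shots = ["shot01.mp4", "shot02.mp4", "shot03.mp4"]

# The concat demuxer reads a list file with one "file '<path>'" line per clip.
with open("shots.txt", "w") as f:
    f.write("\n".join(f"file '{s}'" for s in shots) + "\n")

# -c copy avoids re-encoding, so the clips must share codec/resolution/fps.
cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
       "-i", "shots.txt", "-c", "copy", "final.mp4"]
print(" ".join(cmd))
```

Stream-copying keeps the stitch lossless and fast; re-encode only if the shots came out with mismatched parameters.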
 
Aug 26, 2018
302
547
350
Mephiston87 said: "People are struggling to understand that even an RTX 3090 is considered an entry-level GPU for creating AI video ... VRAM usage is NON-LINEAR, it compounds massively with duration."
It's almost like making real art is better and more cost-efficient or something.
 

DeviantFiend666

New Member
Aug 18, 2025
9
17
54
Mephiston87 said: "People are struggling to understand that even an RTX 3090 is considered an entry-level GPU for creating AI video ... VRAM usage is NON-LINEAR, it compounds massively with duration."
I believe I used a Wan2.1 workflow that takes the last frame of a generated video and feeds it in as the first frame of the next video, so that in principle it can generate forever. The idea was to add a prompt-travel node to each generation, so one scene gets done as, say, ten slices, with each video continuing from where the last one left off.
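That chaining loop can be sketched as below; generate_clip here is a hypothetical stub standing in for whatever Wan2.1 image-to-video node actually produces the frames:

```python
# Sketch of last-frame chaining: each clip starts from the previous
# clip's final frame. generate_clip is a hypothetical stub standing in
# for the real Wan2.1 image-to-video step.

def generate_clip(first_frame, prompt, frames=81):
    """Stub: each 'frame' is just a label derived from its seed frame."""
    return [f"{first_frame}+{prompt}#{i}" for i in range(frames)]

def chain(prompts, seed_frame="seed"):
    video, frame = [], seed_frame
    for prompt in prompts:              # one slice of the scene per prompt
        clip = generate_clip(frame, prompt)
        video.extend(clip)
        frame = clip[-1]                # last frame seeds the next slice
    return video

scene = chain(["walk in", "sit down", "look up"])
print(len(scene))  # -> 243 (3 slices x 81 frames)
```

The catch with this approach in practice is drift: each hand-off only carries one frame of context, so colors and identity can wander over many slices.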

Never tried it myself; I've only ever managed to get Wan2.1 to run locally using ComfyUI with shared memory, with a lot of the memory demand offloaded to system RAM instead of VRAM. It took me about 20 minutes to generate a 10-second 680x520 video. So I always generate an image with PonyXL first, then generate the video from that image.

Haven't tried using Wan2.2 at all, though.
 