Collection Video Reel Collection [2025-01-22] [Reel37891, Reel, リル]

hlarious

Member
Jun 4, 2022
123
430
137
Looks pretty good imo. There's still that weird morphing thing that AI loves to do, but I only saw it affecting the male's left leg.

A couple of nitpicks: the colours are off, and the interpolated frames sometimes come out with sharper edges than the original (not really noticeable during playback though, so it probably doesn't matter).
 

syzzlee

Member
Apr 1, 2020
413
1,514
306
Phoebe experimental interpolation method (more feedback = better)
syzzlee pls test it
Edit: thanks to hlarious for feedback
The point of the experiment was that the BGRA pixel format is easier for the video card to process and has no color compression. This gives higher interpolation speed and, in theory, less loss of video quality during interpolation.
It's already good if you don't compare it to the original.
Maybe try a program other than ff if you want to experiment.
 

orapro

Member
Jul 4, 2020
387
3,458
367
Phoebe experimental interpolation method (more feedback = better)
syzzlee pls test it
Edit: thanks to hlarious for feedback
The point of the experiment was that the BGRA pixel format is easier for the video card to process and has no color compression. This gives higher interpolation speed and, in theory, less loss of video quality during interpolation.
This is incorrect.
The AI model uses floating-point RGB (no alpha channel).

There are so many questions about your statement.
 

orapro

Member
Jul 4, 2020
387
3,458
367
So, using the alpha channel was useless and futile from the start?
Let me guess the process: you converted the video to a series of images, interpolated frames between them, then combined them back into a video.
Or at least that's what the software you use is doing under the hood.
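That extract-then-recombine round trip can be sketched from Python. The tool choice (ffmpeg) and the paths here are my assumption, not necessarily what the software actually runs; the commands are only built, not executed, so the two color-space conversions in the round trip are visible at a glance:

```python
def extract_frames(video_in: str, frame_dir: str) -> list[str]:
    # Step 1: decode every frame of the source video to numbered PNGs.
    # The decoder converts from the video's native YUV to RGB here --
    # this is exactly the step where a wrong YUV matrix corrupts colors.
    return ["ffmpeg", "-i", video_in, f"{frame_dir}/%06d.png"]

def recombine_frames(frame_dir: str, fps: int, video_out: str) -> list[str]:
    # Step 3: after the AI has written interpolated frames into frame_dir,
    # re-encode the image sequence back to video (RGB -> YUV again).
    return ["ffmpeg", "-framerate", str(fps), "-i", f"{frame_dir}/%06d.png",
            "-c:v", "libx265", "-pix_fmt", "yuv420p", video_out]

print(extract_frames("input.mp4", "frames"))
print(recombine_frames("frames", 60, "output.mkv"))
```

Every pass through that RGB/YUV boundary is a chance to pick the wrong conversion matrix, which is what the next paragraphs are about.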

I have a previous reply about another interpolation that had the colors wrong.
The key is that video usually uses the YUV color space, and there are different formulas for converting between YUV and RGB.
You can read the original reply if you want more detail.
https://f95zone.to/threads/yakin-acg-edit-collection-2024-03-24-yakin-acgedit.180567/post-15004273
https://f95zone.to/threads/yakin-acg-edit-collection-2024-03-24-yakin-acgedit.180567/post-15005612 (fuck me again, LOL)
(The exact reason, or in other words, the "conversion chain" that happened here, is different.)
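To make the matrix mismatch concrete, here's a minimal sketch that converts the same stored full-range YUV pixel to RGB twice, once with the BT.601 luma coefficients and once with BT.709 (the sample pixel is arbitrary, just for illustration). The two results differ, which is exactly the wrong-color symptom:

```python
def yuv_to_rgb(y, u, v, kr, kb):
    # Full-range YCbCr -> RGB using the generic matrix built from the
    # luma coefficients Kr/Kb (BT.601 and BT.709 differ only in these).
    kg = 1.0 - kr - kb
    cb, cr = u - 128.0, v - 128.0
    r = y + 2.0 * (1.0 - kr) * cr
    b = y + 2.0 * (1.0 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return (r, g, b)

BT601 = (0.299, 0.114)    # Kr, Kb per ITU-R BT.601
BT709 = (0.2126, 0.0722)  # Kr, Kb per ITU-R BT.709

# Same stored pixel, two different assumed matrices -> two different colors.
sample = (90, 100, 200)
print(yuv_to_rgb(*sample, *BT601))
print(yuv_to_rgb(*sample, *BT709))
```

If the decode step assumes one matrix and the encode step assumes the other, every pixel drifts like this, even though each step on its own looks correct.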


Now the other problems:

1. Your final video is YUV420P8, encoded by NVENC.
  • Since the AI model has to be fed floating-point RGB, the only place the BGRA data actually gets used is encoding.
  • There's no difference between feeding the video card BGRA and letting it do the conversion, versus feeding it already-converted YUV (the software conversion can even be potentially better).
  • Software encoders, for example x265 for HEVC, usually give better quality.
  • 10 bits (YUV420P10) is usually recommended when encoding HEVC, although with the original video being YUV420P8 the difference isn't much.
  • There's no "valid" alpha channel anywhere in the whole process.

2. You don't have to convert all the frames into images in order to interpolate. (Use VapourSynth; I can give you advice on that if you like, and if I have time.)
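On the "no color compression" claim in point 1, the arithmetic alone settles it: BGRA holds 4 bytes per pixel, while the final YUV420P8 output subsamples chroma 2x2, so whatever the intermediate format, the encode step throws that precision away. A quick check (the resolution is just an example):

```python
def bytes_per_frame_bgra(w: int, h: int) -> int:
    # 8-bit B, G, R and A samples: 4 bytes for every pixel.
    return w * h * 4

def bytes_per_frame_yuv420p8(w: int, h: int) -> int:
    # Full-resolution 8-bit luma plane plus two chroma planes
    # subsampled 2x2 (each holding a quarter of the pixel count).
    return w * h + 2 * (w // 2) * (h // 2)

w, h = 1920, 1080
bgra = bytes_per_frame_bgra(w, h)     # 8,294,400 bytes per frame
yuv = bytes_per_frame_yuv420p8(w, h)  # 3,110,400 bytes per frame
print(bgra, yuv, bgra / yuv)          # the ratio is 8/3
```

Keeping BGRA upstream cannot put back chroma detail that YUV 4:2:0 never stores downstream.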
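For point 2, a VapourSynth script keeps every frame in memory with no image round trip. This is only a sketch: the source filter and the interpolation plugin (a RIFE-style one under the namespace `rife`, treat it as a placeholder for whatever is installed) vary by setup; the explicit matrix handling in the resize calls is the actual point:

```python
# Sketch only -- assumes the ffms2 source plugin and a RIFE-style
# interpolation plugin are installed; names are placeholders.
import vapoursynth as vs

core = vs.core
clip = core.ffms2.Source("input.mp4")  # decoded, still YUV

# State the source matrix explicitly when converting to the float RGB
# the model needs, so the YUV->RGB step can't silently guess wrong.
rgb = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")

smooth = core.rife.RIFE(rgb)  # placeholder call: 2x frame interpolation

# Convert back to 8-bit YUV 4:2:0 with the same matrix for encoding.
out = core.resize.Bicubic(smooth, format=vs.YUV420P8, matrix_s="709")
out.set_output()
```

The output clip can then be piped straight into x265 or any other encoder, so the PNG extract/recombine steps disappear entirely.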
 