So, using the alpha channel was useless and futile from the start?
Let me guess the process: you converted the video to a series of images, then interpolated frames between them, then combined them back into a video.
Or at least that's what the software you use is doing under the hood.
I have a previous reply about another interpolation that got the colors wrong.
The key is that video usually uses a YUV color space, and there are different formulas (different matrices, e.g. BT.601 vs. BT.709) for converting between YUV and RGB.
You can read the original reply if you want more detail.
https://f95zone.to/threads/yakin-acg-edit-collection-2024-03-24-yakin-acgedit.180567/post-15004273
https://f95zone.to/threads/yakin-acg-edit-collection-2024-03-24-yakin-acgedit.180567/post-15005612 (fuck me again, LOL)
(The exact reason, or in other words, the exact "conversion chain" that happened here, is different.)
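To show what I mean by "different formulas", here's a tiny numpy sketch (my own illustration, nothing from your tool, and the sample values are made up): the exact same YUV sample decodes to noticeably different RGB depending on whether you plug in the BT.601 or the BT.709 coefficients, which is exactly the kind of mismatch that shifts colors.

```python
import numpy as np

def yuv_to_rgb(y, u, v, kr, kb):
    """Decode one full-range, normalized (Y, Cb, Cr) sample to RGB
    using the given luma coefficients kr/kb."""
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * v          # R from Y and Cr
    b = y + 2.0 * (1.0 - kb) * u          # B from Y and Cb
    g = (y - kr * r - kb * b) / kg        # G from the luma equation
    return np.clip([r, g, b], 0.0, 1.0)

# The same encoded sample, decoded with two different matrices:
y, u, v = 0.5, 0.1, 0.2
print(yuv_to_rgb(y, u, v, kr=0.299,  kb=0.114))    # BT.601 -> ~[0.78, 0.32, 0.68]
print(yuv_to_rgb(y, u, v, kr=0.2126, kb=0.0722))   # BT.709 -> ~[0.81, 0.39, 0.69]
# Decode (or tag) the video with the wrong matrix and every color shifts like this.
```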
Now the other problems:
1. Your final video is YUV420P8, encoded by NVENC.
- Since the AI model has to be fed floating-point RGB, the only place the BGRA data gets used is the encoding step.
- There's no difference between feeding BGRA and letting the video card do the conversion, versus feeding already-converted YUV to the video card. (The software conversion can potentially even be better.)
- Software encoders, for example x265 for HEVC, usually give better quality.
- 10-bit (YUV420P10) is usually recommended when encoding HEVC, although since the original video is YUV420P8 the difference won't be huge. (The sketch at the end of this post shows a 10-bit x265 pipeline.)
- There's no "valid" Alpha channel anywhere in the whole process.
2. You don't have to convert all the frames into images in order to interpolate. (Use VapourSynth; I can give you advice on that if you like, and if I have time. A rough sketch is below.)
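If it helps, here's roughly what that looks like as a VapourSynth script. This is just a sketch from me, not your tool's pipeline: I'm assuming the L-SMASH source plugin and MVTools are installed, the file name and frame rates are made up, and the actual AI plugin call (RIFE or whatever you use) is only a commented placeholder because its exact name and arguments depend on which build you install.

```python
# interpolate.vpy -- run with something like:
#   vspipe -c y4m interpolate.vpy - | x265 --y4m --crf 18 -o out.hevc -
import vapoursynth as vs
core = vs.core

# Decode the source directly; no PNG/BGRA detour (assumes the L-SMASH plugin).
clip = core.lsmas.LWLibavSource("input.mp4")   # hypothetical path

# An AI interpolator would want floating-point RGB, so you'd convert explicitly,
# telling the resizer which matrix the source uses (BT.709 here, check yours):
#   rgb = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")
#   rgb = some_ai_interpolation(rgb)            # e.g. a RIFE plugin; name varies
#   out = core.resize.Bicubic(rgb, format=vs.YUV420P10, matrix_s="709")

# Stand-in that actually runs as-is: MVTools motion interpolation to 60 fps (not AI).
sup  = core.mv.Super(clip)
bvec = core.mv.Analyse(sup, isb=True)
fvec = core.mv.Analyse(sup, isb=False)
out  = core.mv.FlowFPS(clip, sup, bvec, fvec, num=60, den=1)

# 10-bit YUV out, straight into the software encoder via vspipe.
out = core.resize.Point(out, format=vs.YUV420P10)
out.set_output()
```

Everything stays in memory frame by frame, so you skip both the image export and the re-import, and you get to pick x265 and 10-bit output instead of being stuck with whatever the GUI tool feeds NVENC.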