Hi, I'm currently using a pretty primitive way to cut and get the videos. Can you tell me how I can drop the audio?
FFmpeg is my best friend when doing just about any video stuff, and it can be yours too. It can even do the cutting for you if you want to get super deep into it. If you're on Linux (or on Windows with Ubuntu 20.04 installed via WSL) you can use a script (I called it encode.sh and wrote it while calculating that space savings number) containing this:
Bash:
#!/bin/bash
set -Eeuxo pipefail

# Clean up leftovers from any previous run of this script.
rm -f "${1}_vp9.webm" "${1}"_vp9_passlog*

# Pass 1: analyze the video and write stats to the pass log; no video output.
ffmpeg -i "${1}" \
-vf scale=-1:720 -an \
-c:v libvpx-vp9 -crf 32 -b:v 0 -pass 1 -deadline good -cpu-used 4 \
-passlogfile "${1}_vp9_passlog" \
-f null /dev/null

# Pass 2: the actual encode, using the stats from pass 1.
ffmpeg -i "${1}" \
-vf scale=-1:720 -an \
-c:v libvpx-vp9 -crf 32 -b:v 0 -pass 2 -deadline good -cpu-used 2 \
-passlogfile "${1}_vp9_passlog" \
"${1}_vp9.webm"

# Remove the pass log now that we're done with it.
rm -f "${1}"_vp9_passlog*
And then you simply run
./encode.sh ./source.mp4
(assuming you're in the folder with both encode.sh and the file, and have made the script executable with chmod +x encode.sh), which will run a VP9 2-pass constant quality encode of whatever file you specify and spit out "source.mp4_vp9.webm".
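Side note, since your actual question was about dropping audio: if you just want to strip the audio from an existing file without re-encoding the video at all (near-instant, no quality loss), something like this should do it (source.mp4 and the output name are just placeholders):
Bash:
# Copy the video stream as-is (-c:v copy) and drop all audio (-an).
ffmpeg -i source.mp4 -c:v copy -an source_noaudio.mp4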
To break down the ffmpeg command real quick:
-i "${1}"
- This takes the first argument to the bash script and uses it as the input file for ffmpeg.
-vf scale=-1:720
- This runs the input through a video filter which scales it to (something) by 720 pixels, where (something) is whatever width maintains the aspect ratio.
-an
- This drops the audio entirely (no audio streams in the output), which is the answer to your actual question.
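One caveat on the scale filter above, in case you hit it: scale=-1:720 can produce an odd width, and some pixel formats/encoders complain about dimensions that aren't divisible by 2. Using -2 instead of -1 rounds to the nearest even width, e.g.:
Bash:
# -2 keeps the aspect ratio like -1 does, but rounds the width to an even number.
ffmpeg -i source.mp4 -vf scale=-2:720 -an -c:v libvpx-vp9 -crf 32 -b:v 0 out_vp9.webm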
-c:v libvpx-vp9
- This specifies the codec to use for the video, and chooses libvpx-vp9, which AFAIK is the only vp9 encoder. I specifically recommend VP9 because it compresses well, it's royalty-free, and every modern browser can play it.
-crf 32
- This specifies a constant rate factor of 32, which is basically your quality slider on a scale of 0-63: lower numbers give higher quality but bigger file sizes. 32 is the value recommended for 720p video by google (who developed the codec and the libvpx encoder). You could probably even go with a slightly higher number to eke out slightly smaller videos since your inputs seem not super high quality to start with. YMMV.
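If you want to find your own sweet spot, a quick throwaway loop like this (filenames are just illustrative) encodes the same clip at a few CRF values so you can compare size and quality side by side:
Bash:
# Single-pass test encodes at several CRF values; compare the resulting files.
for crf in 28 32 36 40; do
ffmpeg -i source.mp4 -vf scale=-1:720 -an \
-c:v libvpx-vp9 -crf "$crf" -b:v 0 -deadline good -cpu-used 4 \
"source_crf${crf}.webm"
done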
-b:v 0
- This specifies the target video bitrate. Since we're doing a constant quality encode we set it to 0: we want whatever bitrate is needed for the quality we asked for. Setting it to an actual number instead switches to a constrained quality encode that will never go above the specified bitrate, which is mostly useful for video streaming applications where you know e.g. your consumer has a 5mbps connection that you need to fit within.
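For completeness, a constrained quality version would look something like this (the 5M cap is purely illustrative):
Bash:
# -crf still sets the quality target, but -b:v 5M caps the bitrate at 5 Mbps.
ffmpeg -i source.mp4 -vf scale=-1:720 -an \
-c:v libvpx-vp9 -crf 32 -b:v 5M \
source_cq.webm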
-pass 1
and -pass 2
- This specifies which pass of the encoding we are on. On the 1st pass the video is analyzed and useful data for encoding is stored in a log file. On the 2nd pass the video is actually encoded.
-deadline good
- This basically controls how quickly we think we need to do the encoding (its deadline), which can be realtime (very very fast but bad size/quality), good, or best (essentially a placebo). Google says best is much slower than good for barely any quality gain, so we use good (note that "-quality" is just an older way of referring to "-deadline").
-cpu-used 4
and -cpu-used 2
- When the deadline is good or best this controls how many cpu cycles the encoder gets to spend, which basically means speed. Generally the slower you go, the smaller the file and the higher the perceptual quality of the output. We use speed 4 for the first pass as this has no real impact on the final quality and may make the analysis faster. We use speed 2 for the 2nd pass because it is, again, what google recommends (note that "-speed" is just an older way of referring to "-cpu-used").
-passlogfile "${1}_vp9_passlog"
- Where to store/find the 1st pass's log file (the statistics used for encoding in the 2nd pass). I specifically chose to set this because things can get weird if multiple encodes are running at the same time, and I wanted to be able to remove the file after the encode, so I named it based on the input file (${1}) with some other stuff attached to make it clear what that file was (_vp9_passlog).
-f null /dev/null
- This is used in the 1st pass to tell ffmpeg to use the format named null and send the output to the file /dev/null (a linux thing that entirely discards anything written to it), since the 1st pass doesn't produce any actual video output. We need to specify the format because ffmpeg can't guess what format we want from the output filename "/dev/null", and choosing another format could make it complain for various reasons or waste time producing (garbage) data.
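The null muxer is handy outside of 2-pass encodes too; for example, this decodes a file at full speed without writing anything, which makes a decent quick integrity check:
Bash:
# Decode everything, write nowhere; any decode errors get printed to the console.
ffmpeg -v error -i source.mp4 -f null /dev/null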
"${1}_vp9.webm"
- This is used in the 2nd pass to tell ffmpeg to output the file right next to the input (${1}) but with some extra stuff (_vp9.webm) added onto the end. Since the name ends in ".webm", ffmpeg can determine that you want the output in the "WebM" container format (basically a subset of matroska/mkv, but standardized for the web), which is the standard container for vp8/vp9 video with (optional) vorbis/opus audio, and the only container for vp9 readable by standard web browsers.
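And since this whole thread started with dropping audio: if you ever want to keep it instead, you'd remove the -an and encode the audio to opus; a minimal sketch (the 96k audio bitrate is just a reasonable-sounding assumption):
Bash:
# Same video settings as the script, but keep the audio as opus instead of dropping it.
ffmpeg -i source.mp4 -vf scale=-1:720 \
-c:v libvpx-vp9 -crf 32 -b:v 0 \
-c:a libopus -b:a 96k \
source_vp9_opus.webm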
Thank you for coming to my TED talk. I should write a wiki on this shit or something...