Now before you shout me down with your “Blender can do that!” patriotism, let me first explain how you would do it in Blender:

So that works, sure, but if you’re looking for something really quick and easy that gives you a huge amount of optional control over the output file, give our friend FFmpeg a try!

Just like ImageMagick, FFmpeg is an extremely useful command-line program. It has no interface; you simply type commands into a console. This might sound intimidating if you’ve never done it before, but it really is this easy:

ffmpeg -i input.mov -vf deshake output.mov

ffmpeg
is the command that does all sorts of things to videos.
-i input.mov
tells ffmpeg which video to convert.
-vf deshake
is the stabilisation filter.
output.mov
is of course the output file, keeping the original video unchanged.

While I’m sure it’s not as powerful as Blender, it is really quick. There are some options detailed in the FFmpeg documentation, such as the maximum extent of movement and the contrast threshold, but for my purposes the defaults are fine.

Like I mentioned above, Blender has a (relatively) limited number of encoding controls, whereas FFmpeg has as many as you could possibly imagine, and probably a few more. You just need to google something like “ffmpeg h264 encoding options” and you’ll quickly find several articles that detail things such as CRF factors, lossless compression and presets.

Although Blender uses FFmpeg internally, you will need to install FFmpeg yourself and add it to the system path, which I’ve detailed in a separate post here: Installing FFmpeg for Dummies.

I really, really recommend you explore the possibilities of this great tool. It’s useful for many more things, like converting an image sequence into a video, a video into an image sequence, rotating and scaling videos, discovering information about a video, stabilizing that shaky video you took at your Great Aunt’s 4th wedding, streaming the webcam you planted in your girlfriend’s ex-boyfriend’s bedroom, or converting your 250 frame cube render to a super-crispy lossless h264 of unparalleled awesomeness. If you’re interested in more possible uses of FFmpeg, take a look at the Use It section of my installation tutorial.
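For example, that “250 frame cube render to lossless h264” trick boils down to a single command. Here’s a minimal sketch, assuming your frames are named render_0001.png, render_0002.png and so on (the pattern, frame rate and output name are placeholders to adapt):

ffmpeg -framerate 24 -i render_%04d.png -c:v libx264 -preset slow -crf 0 lossless.mp4

With libx264, -crf 0 means mathematically lossless; raise it (18–23 is a common range) for much smaller, visually lossless files.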
Even so, video stabilisation software sucks compared to what it could be.
You want to use the vidstabdetect and vidstabtransform filters from FFmpeg:
http://ffmpeg.org/ffmpeg-filters.html#vidstabdetect-1
They’re far superior to the deshake filter.
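For anyone wanting to try it, here’s a minimal two-pass sketch (it assumes your FFmpeg build includes libvidstab; the filenames and parameter values are placeholders to tune):

ffmpeg -i input.mov -vf vidstabdetect=shakiness=5:result=transforms.trf -f null -
ffmpeg -i input.mov -vf vidstabtransform=input=transforms.trf:smoothing=30 output.mov

The first pass only analyses the motion and writes it to transforms.trf; the second pass applies the smoothed camera path and re-encodes.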
What I would like to see is a de_barrel -> motion_detect -> de_shake -> horizon_correct -> re-barrel toolchain… sigh, perhaps when I get close to the end of my uni course.
The de-barrel is to avoid the warping you will get at the corners.
It would be nice to have a compositor node to do this stuff, not sure how it would work…
There is a problem with the vidstabdetect and vidstabtransform filters – if the video happens to zoom in, the stabilization actually makes things worse, as in wildly shaking left and right.
Any support for OpenCL to do some of the computation? Throwing a 4K video at it really slows it down (about 0.07 fps).
How long does this process take? For example, for a 5 min, 30 fps, Full HD video.
Awesome post, thanks for sharing. I’m using Blender, but there’s no comparison with the speed of achieving this stabilization using FFmpeg. I’m only missing a few variables for really shaky videos.
Hi, I am trying to use FFmpeg’s transforms.trf output to fix my drone’s drift in flight. A drone’s gyro/accelerometer only compensates for axis changes and accelerated movements. In a steady wind, the drone drifts steadily, and the accelerometer thinks the drone is standing still – relative to the surrounding air column. I believe DJI and all the major commercial drone makers use this approach to extract drift information, but they don’t publish their “secret”. And FFmpeg, being an open source project, allows any manufacturer to compile it for any drone without being obligated to disclose any commercial secret. We just need a project that discloses such information to the general public.
Can’t help you with this, but don’t they also use GPS to detect drift?
GPS doesn’t work indoors, which is where the drift problem we want to fix occurs.
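For reference, the detection pass can dump its per-frame motion estimates to a plain-text file you can parse yourself. A minimal sketch (flight.mp4 is a placeholder name):

ffmpeg -i flight.mp4 -vf vidstabdetect=result=transforms.trf -f null -

The resulting transforms.trf contains the estimated motion between consecutive frames (the exact format depends on the vid.stab version), which is the drift signal being discussed here.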
Works well, thank you!
Also really cool is combining ffmpeg with Automator on Mac. You can then right-click on a video directly in your Finder and pass it to a Quick Action. In Automator you make the Quick Action by simply defining the ffmpeg command above in a “Run Shell Script” workflow. Hey presto: immediately stabilized video directly from the Finder window. This also works for a whole batch if you shift-select and then run the Quick Action 😎👍
Just remember to select “Pass as arguments” in Automator when making the workflow…
parentdir="$(dirname "$1")"
filename="$(basename "$1")"
/usr/local/bin/ffmpeg -y -hide_banner -i $1 -vf deshake "${parentdir}/stabilized_${filename}"
That’s the code. I guess we get the picture 😃
I have some great footage, shot at 1080, from a boat with a hand-held camera. Obviously, it shakes. I’m willing to cut things down to 720 in exchange for “perfect” stability. The format is MP4 (Sony’s camera), which means it should be possible to crop each frame without recoding it (and thus without losing quality).
How would I do that?
You have to reencode when you stabilize. I haven’t seen any algorithm to date clever enough to avoid that.
The stabilization I envision would crop each frame, losing the edges, and recenter. Still JPGs can be cropped without re-encoding; I was sure MP4 frames could be too…
JPG lossless crop takes advantage of JPG’s old technology, mainly that the 8×8 chunks are independent. Higher-tech compression has less independence, and MP4 isn’t even a file type; it’s just a container, like a .zip, holding H.264, H.265, etc. streams. This newer compression generates frames on the fly from vector-field frames that warp a keyframe many frames back. So we can’t just slice data off cleanly as in JPG crops. There could be inventable methods, in theory, to do a partial re-encode without the full render and fresh encode, where one keeps the keyframes and does a warp stabilization only on the vector-field frames, but that’s still a lot of re-encoding. I don’t imagine what you’re envisioning is mathematically possible. Was a good thought, though.
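If re-encoding turns out to be acceptable, the crop-to-720 idea can still be done in one pass. A rough sketch, with placeholder filenames and a CRF value you’d tune yourself:

ffmpeg -i boat.mp4 -vf "deshake,crop=1280:720" -c:v libx264 -crf 18 -c:a copy boat_stabilized.mp4

Only the video stream is re-encoded (the audio is copied untouched), and crop=1280:720 takes the center 1280×720 of each stabilized frame.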
FFmpeg supports 2-pass stabilization, which is far better. Here’s a Windows batch script that runs both passes on every video in a folder:
@ECHO OFF
CLS
REM Create the input/output folder on first run (ignore the error if it exists)
MD videos 2>NUL
for %%a in (videos\*.*) do (
    DEL transforms.trf 2>NUL
    REM Pass 1: analyse motion, writing transforms.trf
    ffmpeg -i "%%a" -vf vidstabdetect -f null -
    REM Pass 2: apply the smoothed transforms and re-encode
    ffmpeg -i "%%a" -vf vidstabtransform=smoothing=50:crop=keep:invert=0:relative=0:zoom=0:optzoom=2:zoomspeed=0.2:interpol=bilinear:tripod=0 -map 0 -c:v libx264 -preset fast -crf 9 -c:a aac -b:a 192k "videos\%%~na-DONE.mp4"
    DEL transforms.trf
)
echo.
echo.
echo "------>>>>>>>>>> J O B   I S   N O W   C O M P L E T E <<<<<<<<<<------"
echo.
PAUSE
EXIT