Hello, friend!

How I post-processed iOS screen recording video with ffmpeg

In this post I document how I created and post-processed the video you can see below. I used ffmpeg for the post-processing.

To record the video I used the standard built-in feature of iOS to create screen recordings. I needed a bit of configuration to add the recording button to the iOS Control Center. I followed this documentation from Apple.

After the video was recorded, it contained parts I didn’t want to include in the final video, as well as a silent audio track.

NOTE: I ran the post-processing on an Apple M3 laptop running macOS 15.3, with ffmpeg version 7.1 installed via the Homebrew package manager. All ffmpeg commands should work on other operating systems too, but expect differences. Let me know if you try and something doesn’t work…

Post-processing

To remove the audio track:

ffmpeg -i raw.mp4 -c copy -an ebonito-wo-sound.mp4

When I recorded the video, I captured some things I did not want to include: specifically, at the start and at the end there was footage not relevant to the topic I wanted to present.

To cut off the beginning and the end of the video:

ffmpeg -i ebonito-wo-sound.mp4 -ss 19 -to 39 -an ebonito-wo-sound-cut.mp4
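The -ss and -to values are absolute timestamps in seconds, so the command keeps the segment between second 19 and second 39. A quick sanity check of the resulting clip length (plain shell arithmetic, not part of the ffmpeg invocation):

```shell
# -ss is the start timestamp and -to the end timestamp, both in seconds,
# so the trimmed clip should be END - START seconds long.
START=19
END=39
echo "clip length: $((END - START)) seconds"   # clip length: 20 seconds
```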

I wanted to put an iPhone frame around the video. For that I used a PNG file that comes with the Apple Frames Shortcut project. This project helps add frames to images, but unfortunately not to videos. Specifically, I took the iPhone 12-13 mini Portrait.png that matches the screen dimensions of my iPhone 13 mini.

Here’s the trickiest ffmpeg command, which I had to construct from information I found online (but unfortunately forgot to bookmark). It adds white padding around the video to make space for the frame that will go on top of it. The frame’s border is 60px wide on each side, so I’m adding 120px to each dimension and offsetting the video by 60px from the left and from the top:

ffmpeg -i ebonito-wo-sound-cut.mp4 -filter "pad=iw+120:ih+120:60:60:white" ebonito-wo-sound-cut-padded.mp4
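As a sanity check of the padding math, here’s a small shell sketch (the 886 x 1920 recording dimensions are the ones mentioned later in this post):

```shell
# With a 60px frame border on every side, the padded canvas grows by
# 2 * 60 = 120px in each dimension.
W=886; H=1920       # recording dimensions
BORDER=60           # frame border width on each side
PAD_W=$((W + 2 * BORDER))
PAD_H=$((H + 2 * BORDER))
echo "padded size: ${PAD_W}x${PAD_H}"   # padded size: 1006x2040
```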

And now to overlay the iPhone frame on top of the video:

ffmpeg -i ebonito-wo-sound-cut-padded.mp4 -i iphone.png -filter_complex "[0][1]overlay" ebonito-framed.mp4

As I mentioned earlier, I recorded the video on an iPhone 13 mini, which has relatively small screen dimensions by 2025 standards. Still, the recording is 886 x 1920 pixels, which I think is too large for a video posted on a website. To scale the video proportionally to 500 pixels wide, I ran this command:

ffmpeg -i ebonito-framed.mp4 -vf scale=500:-2 ebonito-open-prices.mp4
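The -2 in scale=500:-2 tells ffmpeg to pick a height that preserves the aspect ratio and is divisible by 2 (many codecs require even dimensions). Here’s a rough estimate of the resulting height, assuming the padded 1006 x 2040 input from the previous steps — ffmpeg’s exact rounding may differ by a pixel or two:

```shell
W=1006; H=2040      # assumed input: the padded video from the previous step
TARGET_W=500
RAW_H=$(( (H * TARGET_W + W / 2) / W ))  # proportional height, rounded
EVEN_H=$(( RAW_H / 2 * 2 ))              # snapped down to an even value
echo "scaled size: ${TARGET_W}x${EVEN_H}"   # scaled size: 500x1014
```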

Final words

Since ffmpeg is so powerful and versatile, I bet there’s another, possibly simpler, way to achieve this. Let me know if you know a better one…

Initially I was planning to explain each ffmpeg command and its parameters. But then I realized it would take too much time, and I wouldn’t be able to explain them any better than the official documentation of the amazing ffmpeg.

Final final words

After doing all this manually, I found a nice open-source project on GitHub that does all of it automatically: ScreenFramer. Even though its output is not as precise as my manual post-processing, I probably would have achieved the same result with ScreenFramer had I known about it earlier. Then again, I would have missed out on learning more about ffmpeg.

#Ebonito #Ffmpeg