This is very cool. I built one of these myself around Christmas; Claude Code can put one together in just a couple prompts (this is also how I worked out how to have Claude test TUIs with tmux). What was striking about my finished product --- which is much less slick than this --- was how much of the heavy lifting was just working out which arguments to pass to ffmpeg.
It's surprisingly handy to have something like this hanging around; I just use mine to fix up screen caps.
Commenting mostly because when I did this I thought I was doing something very silly, and I'm glad I'm not completely crazy.
You can use AI to figure out the arguments to ffmpeg. But indeed, it seems like there's just a single call to the FFmpeg CLI powering the whole thing, which is amazing.
I'd use ffmpeg to downscale the frames to the terminal size too. There are also various filters that could help quantize the colors to what your terminal supports. The paletteuse filter will get you free dithering too.
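A minimal sketch of that pipeline, assuming ffmpeg on PATH with the palettegen/paletteuse filters built in (the one-second lavfi test source and the 80-column width are placeholders for a real video and terminal size):

```shell
# Downscale to ~80 columns, build a 16-color palette from the stream,
# then map frames onto it with ordered (Bayer) dithering -- one filtergraph.
ffmpeg -y -f lavfi -i "testsrc=duration=1:size=320x240:rate=10" \
  -vf "scale=80:-2:flags=lanczos,split[a][b];[a]palettegen=max_colors=16[p];[b][p]paletteuse=dither=bayer" \
  out.gif
```

Swap max_colors and the dither mode to taste; -2 on the scale height keeps it an even number, which some encoders require.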
yeah I remember learning this trick in like 2007 with libaa and later caca for color.
It looks like this app is shelling out to ffmpeg to get the bitmap of a frame and then shelling out to something called chafa to convert it to nice terminal-friendly video.
It's interesting how terminal apps are increasing in popularity after decades of desktop and web apps. I wonder if it's talking to chat AIs that's making people more comfortable with a prompt screen, or if it's the simplicity and lack of bloat.
Being truly cross-platform is a bigger draw for me. Once something works on one platform, it will usually work on any other platform that has a terminal.
I'm in the process of switching to neovim as my main editor just so I can have the same setup everywhere. IDEs like vscode are 'cross platform' but only work on desktop, and there are IDE-like editors for Android, but none of them work on desktop. Oddly enough, neovim on Android/Termux is actually easier to use than any of the IDE editor apps, mainly because everything is keyboard based.
When it comes to writing my own mini programs/scripts, it's basically the promise of things like Flutter, where you write something once and run it everywhere, only it takes hours instead of days to throw something together, and it's not as overkill because I'm just using Python or bash, with fzf or Textual for any interactive parts.
I think it is part of a more general problem. I don't think anybody intends to make a terrible DSL; it is just a natural progression:
1. We have a command line program.
2. Command line args are traditionally parsed by getopt (or a close relative), so we will use that (it's expected).
3. Our command line program has grown tremendously in complexity, and our args are now effectively a domain-specific language.
4. Congratulations, we are now shipping a language using a woefully inadequate parsing engine with some of the worst syntax in existence.
See also: iptables, find.
I think it would behoove many of these programs to take a good look at what they are doing when they reach step 3 and invest in a real syntax and parser. It is fine to keep a command line interface, but you don't have to use getopt.
I don't find trimming videos with ffmpeg particularly difficult; it's basically just -ss xx -to xx -c copy. Sure, you need to get those timestamps using a media player, but you probably already have one, so that isn't really an issue.
What I've found to be trickier is dividing a video into multiple clips, where one clip can start at the end of another, but not necessarily.
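The back-to-back-clips case maps onto ffmpeg's segment muxer; a sketch, assuming ffmpeg on PATH (the six-second lavfi source and the 2s/4s cut points are placeholders):

```shell
# Split a 6-second video into [0,2), [2,4), [4,6). Because this re-encodes,
# -force_key_frames at the same timestamps makes the cuts land exactly.
ffmpeg -y -f lavfi -i "testsrc=duration=6:size=320x240:rate=25" \
  -force_key_frames 2,4 \
  -f segment -segment_times 2,4 -reset_timestamps 1 \
  clip_%03d.mp4
```

With -c copy instead of re-encoding, the segment muxer can only split at existing keyframes, which is the accuracy problem discussed downthread.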
Missed opportunity to reference the famous Dropbox hn comment.
I just think there are other closely related use cases where a separate program can add more value, especially in the terminal. I wouldn't suggest most people use ffmpeg instead of a GUI; those are too dissimilar. Another example is cutting out a part of a video: with ffmpeg you need to make two temporary videos and then concatenate them, and that process would greatly benefit from a better UX.
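A sketch of that two-temporary-videos dance, assuming ffmpeg on PATH (the lavfi-generated src.mp4 and the 2s/4s cut points are placeholders): remove seconds 2-4 from a 6-second video by keeping the two outer pieces and joining them with the concat demuxer.

```shell
# Stand-in source video.
ffmpeg -y -f lavfi -i "color=green:size=320x240:rate=25" -t 6 src.mp4
# Keep [0,2) and [4,6) without re-encoding.
ffmpeg -y -ss 0 -i src.mp4 -t 2 -c copy part1.mp4
ffmpeg -y -ss 4 -i src.mp4 -t 2 -c copy part2.mp4
# Concat demuxer takes a list file of inputs to join.
printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > parts.txt
ffmpeg -y -f concat -i parts.txt -c copy joined.mp4
```

Note that with -c copy the second cut still snaps to a keyframe; re-encode the parts if frame accuracy matters.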
Point of order: the Dropbox HN comment is famously misconstrued. People think it was about Dropbox; it was about the Dropbox YC application, and was both well-intentioned and constructive.
# make a 6 second long video that alternates from green to red every second.
ffmpeg -f lavfi -i "color=red[a];color=green[b];[a][b]overlay='mod(floor(t)\,2)*w'" -t 6 master.mp4; # creates 150 frames @ 25fps.
# try make a 1 second clip starting at 0sec. it should be all green.
ffmpeg -ss 0 -i "master.mp4" -t 1 -c copy "clip1.mp4"; # exports 27 frames. you see some red.
ffmpeg -ss 0 -t 1 -i "master.mp4" -c copy "clip2.mp4"; # exports 27 frames. you see some red.
ffmpeg -ss 0 -to 1 -i "master.mp4" -c copy "clip3.mp4"; # exports 27 frames. you see some red.
# -t and -to stop after the limit, so subtract a frame. but that leaves 26...
# so perhaps offset the start time so that frame#0 is at 0.04 (ie, list starts at 1)?
ffmpeg -itsoffset 0.04 -ss 0 -i "master.mp4" -t 0.96 -c copy "clip4.mp4"; # exports 25 frames, all green, time = 1.00. success.
# try make another 1 second clip starting at 2sec. it should be all green.
ffmpeg -itsoffset 0.04 -ss 2 -i "master.mp4" -t 0.96 -c copy "clip5.mp4"; # exports 75 frames, time = 1.08, and you see red-green-red.
# maybe don't offset the start, and drop 2 at the end?
ffmpeg -ss 2 -i "master.mp4" -t 0.92 -c copy "clip6.mp4"; # exports 75 frames, time = 1.08, and you see green-red.
ffmpeg -ss 2 -t 0.92 -i "master.mp4" -c copy "clip7.mp4"; # exports 75 frames, time = 0.92, and you see green-red.
# try something different...
ffmpeg -ss 2 -i "master.mp4" -c copy -frames 25 "clip8.mp4"; # video is broken.
ffmpeg -ss 2 -i "master.mp4" -c copy -frames 25 -avoid_negative_ts make_zero "clip9.mp4"; # exports 25 frames, all green, time = 1.00. success?
# try export a red video the same way.
ffmpeg -ss 3 -i "master.mp4" -c copy -frames 25 -avoid_negative_ts make_zero "clip10.mp4"; # oh no, it's all green!
I've never tried doing frame-perfect clips like that; that does sound annoying. But from a cursory read of the source, I don't think this program will solve that issue either, because the timestamps in your examples are all correct, and the TUI is using ffmpeg with -ss and -t as well.
I think the best way of getting frame-accurate clips like that is putting the starting time after the input (or rather before the output), which decodes the video up to that time and re-encodes it instead of copying. Both of these commands give the expected output:
Yer, I noticed that this tool was just doing `-ss -i -t` from its demo gif, which is what prompted me to reply. I'm sure people will discover that all sorts of problems manifest if they don't start a lossless clip on a keyframe. One such scenario is when you make a clip that plays perfectly on your PC, but then you send it to someone over FB Messenger, and all of a sudden there's a few seconds of extra video at the start!
Can't make frame perfect cuts without re-encoding, unless your cut points just so happen to be keyframe aligned.
There are incantations that can dump metadata for you about the individual packets a given video stream is made up of, ordered by timecode. That way you can sanity-check things.
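One such incantation, assuming ffprobe (which ships with ffmpeg) on PATH; the lavfi-generated probe.mp4 stands in for a real video:

```shell
# Stand-in input.
ffmpeg -y -f lavfi -i "testsrc=duration=2:size=320x240:rate=25" probe.mp4
# Dump per-packet timestamps and flags for the first video stream.
# A "K" in the flags column marks a keyframe -- those are your safe cut points
# for -c copy.
ffprobe -v error -select_streams v:0 \
  -show_entries packet=pts_time,dts_time,flags -of csv=p=0 probe.mp4
```

Swapping packet for frame (and pts_time,flags for pict_type,pts_time) gives the same view in decoded-frame terms.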
This is terribly frustrating. The paths of least resistance either lead to improper cuts or wasteful re-encoding. Re-encoding just until the nearest keyframe I'm sure is also possible, but yeah, this does suck, and the tool above doesn't seem to make this any more accessible either according to the sibling comment.
> Re-encoding just until the nearest keyframe I'm sure is also possible
Yer, I've done that, and it's a pain to do "manually" (ie, without having a script ready to do it for you). I've also manually sliced the bitstream to re-insert the keyframe, which, if applied to my clip5.mp4 example, could potentially reduce the 50* negative ts frames to maybe 2 or 3. It would be easier if there were tools that could "unpack" and "repack" the frames within the bitstream, and allow you to modify "pointers"/etc in the process - but I don't know of any such thing.
For frame perfect cuts you need to re-encode. You can use lossless H264 encoding for intermediary cuts before the final one so that you don't unnecessarily degrade quality.
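A sketch of such a lossless intermediate, assuming ffmpeg built with libx264 (the lavfi source is a placeholder input):

```shell
# -qp 0 makes x264 mathematically lossless, so repeated intermediate cuts
# don't stack up generation loss; MKV avoids container profile complaints.
# Encode lossily only once, at the very end of the pipeline.
ffmpeg -y -f lavfi -i "testsrc=duration=2:size=320x240:rate=25" \
  -c:v libx264 -qp 0 -preset ultrafast intermediate.mkv
```

The files are huge compared to normal encodes, so this only makes sense for short-lived intermediates.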
I wonder if there is a solution which would just copy the pieces in between the starting and ending points while only re-encoding the first and last piece as required.
I've been trying to cut precise clips from a long mp4 video over the past week or so and learned a lot. I started with ffmpeg on the command line, but between getting accurate timestamps and keyframe/encoding issues it is not trivial. For my needs I want a very precise starting frame, and the best results came from first re-encoding at much higher quality, then marking and batching with LosslessCut, then transcoding my clips down to the desired quality. Even then there's still some manual review and touch-up.
It's not crazy-hard, but by no means trivial or simple.
I used a plugin in mpv to do it but I can't find it anymore. You just pressed a key to mark the start and end. And with . and , you could do it at keyframe resolution not just seconds.
Appreciate you mentioning the MPV route for making clips, I might actually go through and process all the game recordings I saved for clips over the years.
Could have really used this a couple days ago. I had to record a video for an assignment, but due to the lack of global hotkeys in OBS on Wayland, I had to start and stop the recording in the OBS GUI. I tried to figure out ffmpeg, but I was too tired and it was getting close to the deadline, so I spent some time learning how to do it with kdenlive instead.
If you don't like leaving your main video player, IINA on macOS is scriptable, so I just use shortcut keys to send start/end indicators to a script which runs ffmpeg on the timestamps.
I'm sure other video players like VLC support this, but I found VLC's APIs very lacking.
The intermediary solution for me between ffmpeg and kdenlive is LosslessCut (https://github.com/mifi/lossless-cut). Also free and open-source... of course it looks less cool than a terminal UI like the OP, but it's very practical when I don't want to re-encode everything, or if I just need to change the container format (MP4, MKV, etc.).
You are welcome! It is nice when you need to quickly crop or trim something and don’t want to launch a video editing app.
The repo is owned by a friend, you can leave a star to make him happy :)
People who use GUIs/tools for things like ffmpeg, rclone, etc. really want the developer to autodetect whether they already have it installed, and use that instead of installing a separate version/binary.