Video Editing in Groovy (or Not?)

Well, I was going to work on writing something that would preprocess a video file and calculate exactly where all the cuts were and what the intensity and “bend” were at any given time, and I ran into a roadblock.

Groovy is an awesome programming language. I want to program in Groovy as much as I possibly can. And so I found Xuggler (a SWIG-based Java interface to ffmpeg), and I wrote a convenient Groovy wrapper class for accessing frames in an ffmpeg video stream.

It was great. I could load a file, get all the frames I wanted one at a time as Java BufferedImages, draw on them using Java2D, and write them back out in an image sequence. (I figure I can probably export another movie file, too, but I haven’t tried that part of Xuggler yet, let alone wrapped it.)
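To make the workflow concrete, here's a minimal sketch of the Java2D half of that loop. The Xuggler wrapper isn't shown; a blank `BufferedImage` stands in for a decoded frame, and `FrameDraw`/`annotate` are names I'm making up for illustration:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class FrameDraw {
    // Draw a simple overlay on a frame with Java2D. In the real pipeline
    // the frame would come from the Xuggler wrapper, not be created here.
    static BufferedImage annotate(BufferedImage frame) {
        Graphics2D g = frame.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(10, 10, 100, 50);
        g.dispose();
        return frame;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for one decoded video frame.
        BufferedImage frame = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        annotate(frame);
        // Write the frame back out as one image of a numbered sequence.
        ImageIO.write(frame, "png", new File(String.format("frame%05d.png", 0)));
    }
}
```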

Now I actually needed to process those images to find motion in them, so I set up a simple image difference filter in Groovy and tried it out. It was slooooooow. It processed about one frame per second. I knew Groovy would be slow, but I hoped it wouldn’t be that slow.

Not to be discouraged, I ported the pixel-pushing code to Java, and it was still a bit disappointing. I haven’t done any actual timing yet, but this time it seemed to process about 100 frames per second. I knew I wanted MVTron’s video preprocessing stuff to be open to miscellaneous extensions (like lip-flap detection for lip-synching support), so I wanted its core to be way faster than this. And I knew that meant turning to Lisp or C.

Holding that thought, I knew I’d used AviSynth scripts for video reformatting up to this point, and I wondered just how well I could preprocess videos for MVTron using AviSynth scripts alone. The answer seems to be that they’re really awesome but not quite awesome enough.

I ended up with a script that takes a clip and uses AviSynth’s built-in WriteFile functions to output two files, one describing the scene breaks of the clip, and the other describing the intensity of motion in the clip at any given non-scene-break moment. (In the meantime, it blurs temporally within scenes so that motion in animation works more like it does in live action.) In order to have it tell me the “bend” and variance of the motion, though, I might have to write an add-on function or two in C. Hey, at least I can do that!
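A rough sketch of what such a script could look like, for readers who haven't used AviSynth. This is not my actual script; the filter names (`TemporalSoften`, `WriteFileIf`, `YDifferenceFromPrevious`) are real AviSynth builtins, but the thresholds and the exact detection logic here are placeholder guesses:

```
clip = AviSource("input.avi")

# Temporal blur within scenes so animated motion reads more like live
# action; the scenechange parameter keeps the blur from crossing cuts.
clip = TemporalSoften(clip, 4, 4, 8, scenechange=20)

# Call a frame a scene break when its luma jumps sharply from the
# previous frame. The threshold of 30 is a guess, not a tuned value.
WriteFileIf(clip, "scenebreaks.txt", "YDifferenceFromPrevious > 30", "current_frame")

# Log a crude motion-intensity number for every frame.
WriteFile(last, "motion.txt", "current_frame", "YDifferenceFromPrevious")
```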

So, you might be wondering, if I’m doing so much in AviSynth scripts and C, what happened to Groovy? Actually, I had convoluted plans for AviSynth earlier this month, and they were interrupted when I stumbled upon Xuggler. Today I ironed out most of the bugs, and I seem to have a way to call AviSynth script functions from Java. All the .AVS files and intermediate text representations can be done away with. All MVTron needs from AviSynth can be done from Groovy now, with the occasional C implementation thrown in.


2 thoughts on “Video Editing in Groovy (or Not?)”

  1. hey there; not sure if you tried Xuggler 2.0 yet (came out on April 2nd) but it now uses assembly operations to optimize converting between YUV420P and Java BufferedImages which might speed things up a little for you.

    – Art

  2. Whoa, cool. I’ll be sure to check that out. One thing my Java-AviSynth solution needs to do but can’t do yet is exporting, and I’ve been planning to do that by converting AVSFrames to RGB (using one of AviSynth’s own conversion methods), wrapping the RGB data in BufferedImages, and finally encoding a stream of those with Xuggler (as YUV again, lol). I haven’t ironed out the details exactly yet, and maybe Xuggler 2.0 will make it a less convoluted process than I expect it to be.
