I ported the scene-detecting part of my AviSynth script to Groovy (using the AviSynth wrapper I was talking about yesterday), and it was still much slower than I’d have liked. It paused every once in a while, probably to run a garbage-collection pass or to find more space in which to allocate frames. In the hopes of avoiding this problem, I spent a few days rewriting the library as a three-layered system that was operable first, memory-safe second, and easy to use third.
It had previously been a two-layered system that was a low-level library first and a high-level library second. The middle layer is what was new here, but its arrival pretty much forced a complete refactoring of the two layers that surrounded it.
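To make the layering concrete, here is a rough sketch of how such a three-layer split could look. All of the names, types, and signatures below are my own stand-ins, not the wrapper's actual API; the point is only the division of responsibilities.

```java
// Layer 1 (hypothetical): low-level, "operable first" access to raw frame data.
// A single-method interface, so it can be implemented with a lambda.
interface LowLevelClip {
    int[] rawFrame(int n); // caller sees the raw buffer directly
}

// Layer 2 (hypothetical): the memory-safety layer. It hands out defensive
// copies so callers can never corrupt or leak the underlying frame storage.
class SafeClip {
    private final LowLevelClip inner;
    SafeClip(LowLevelClip inner) { this.inner = inner; }
    int[] frame(int n) { return inner.rawFrame(n).clone(); }
}

// Layer 3 (hypothetical): the easy-to-use API that scripts actually call,
// built entirely on the safe layer.
class Clip {
    private final SafeClip safe;
    Clip(SafeClip safe) { this.safe = safe; }
    // Example convenience method: average pixel value of frame n.
    int averageValue(int n) {
        int[] f = safe.frame(n);
        int sum = 0;
        for (int v : f) sum += v;
        return sum / f.length;
    }
}
```

The middle layer is the one that costs performance nothing to add but forces both neighbors to change shape, which matches the refactoring described above.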
In any case, I think this was an important step to take, but it didn’t help the speed at all. So, I tried a few optimizations of the ported scene-detection script itself. First of all, I took away the kludge I was using to represent the boolean either-this-is-the-start-of-a-scene-or-it-isn’t stream. Rather than representing true and false with white frames and black frames, as I was forced to do in AviSynth, I represented them with, well, Groovy’s true and false. Instead of having the function return a Clip, I had it return a Closure (which itself would take an int and return a boolean). This did the trick. I took away six of the filters I was using in the script, and the speed improved markedly.
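In Java terms, that "Closure taking an int and returning a boolean" is just an `IntPredicate`. Here is a minimal sketch of the idea; the difference metric and threshold are invented placeholders, since the post doesn't say how the real script scores frames.

```java
import java.util.function.IntPredicate;
import java.util.function.IntToDoubleFunction;

// Hypothetical sketch: instead of encoding "scene start" as white/black
// frames in a Clip, return a predicate from frame number to boolean.
class SceneDetect {
    // frameDiff: some per-frame difference score (a stand-in, not the real metric).
    // Frame 0 always starts a scene; otherwise a scene starts where the
    // difference from the previous frame exceeds the threshold.
    static IntPredicate sceneStarts(IntToDoubleFunction frameDiff, double threshold) {
        return n -> n == 0 || frameDiff.applyAsDouble(n) > threshold;
    }
}
```

Consuming the result is then a plain boolean test per frame, with no extra filters rendering white and black frames just to carry one bit of information.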
Once I put in some caching for the intermediate frames, the speed improved again by about 30%. Finally, I thought I’d push the limits of Groovy optimization a bit further by reimplementing the script in Java. There was practically no improvement. Oh, well.
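The caching of intermediate frames mentioned above amounts to memoizing a frame-producing function. A minimal sketch (the real caching layer would presumably bound its memory use; this unbounded map is just to show the idea, and the names are mine):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntFunction;

// Hypothetical memoizing wrapper: each intermediate frame is computed at
// most once, then served from the cache on every later request.
class FrameCache<T> implements IntFunction<T> {
    private final IntFunction<T> source;
    private final Map<Integer, T> cache = new HashMap<>();

    FrameCache(IntFunction<T> source) { this.source = source; }

    @Override
    public T apply(int n) {
        // computeIfAbsent calls the source only on a cache miss.
        return cache.computeIfAbsent(n, source::apply);
    }
}
```

Since scene detection keeps comparing each frame against its neighbors, adjacent requests hit the cache instead of recomputing the same filtered frame, which is consistent with the ~30% win reported above.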
In the end, it tends to take about twice as long to process the scenes as it takes to actually play the movie. That’s still somewhat dismal, but I think it’s good enough for now. Maybe once I have a complete working prototype, the speed improvements will follow.