I had some time to look at the scaling filters today. I added the filter and 4x linebuffer changes to the latest release code. The hqx output doesn't look right, so I ported the code (with minimal changes) to a SNES emulator and got it working there. Unfortunately, that only turned up one minor bug in the hq3x implementation, so it didn't fix much.
The real problem is that the comparison logic operates on values from the TVP, which aren't stable the way direct digital input in an emulator is. Scale is easier to work around than hqx: it does a binary comparison on a small set of neighbor pixels to decide whether to copy a neighbor (lqx averages instead) or keep the current pixel value, and that comparison can include some margin. The flashing pixels come from the comparison failing on one set of frames and then passing on the next. With a small margin the delta between the pixels is small, so a comparison toggling isn't noticeable, but almost nothing compares equal and scale doesn't do anything useful. With a larger margin scale starts doing useful smoothing, but the delta between pixels is bigger and the flashing becomes noticeable. A hack to fix this would be to interpolate at the edge of the comparison function, so the change is gradual while still supporting a margin.
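To make the margin idea concrete, here is a rough C sketch of a scale2x-style kernel where the usual exact equality test is replaced with a per-channel tolerance. The function and parameter names (`close_enough`, `scale2x_pixel`, `margin`) are mine for illustration, not from the actual release code, and it assumes packed RGB888 pixels.

```c
#include <stdlib.h>

/* Tolerant comparison: two RGB888 pixels count as "equal" when every
 * channel differs by at most `margin`.  margin = 0 reduces to the
 * exact binary comparison scale2x normally uses. */
static int close_enough(unsigned a, unsigned b, unsigned margin)
{
    for (int shift = 0; shift <= 16; shift += 8) {
        int ca = (int)((a >> shift) & 0xff);
        int cb = (int)((b >> shift) & 0xff);
        if (abs(ca - cb) > (int)margin)
            return 0;
    }
    return 1;
}

/* Expand one source pixel E (neighbors: B above, D left, F right,
 * H below) into a 2x2 block out[0..3], copying a neighbor only when
 * the tolerant comparison says the edge pattern calls for it. */
static void scale2x_pixel(unsigned B, unsigned D, unsigned E,
                          unsigned F, unsigned H, unsigned margin,
                          unsigned out[4])
{
    out[0] = out[1] = out[2] = out[3] = E;
    if (!close_enough(B, H, margin) && !close_enough(D, F, margin)) {
        if (close_enough(D, B, margin)) out[0] = D;
        if (close_enough(B, F, margin)) out[1] = B;
        if (close_enough(D, H, margin)) out[2] = D;
        if (close_enough(H, F, margin)) out[3] = F;
    }
}
```

With `margin = 0` this is plain scale2x; with a small margin, neighbors that only differ by TVP noise still compare equal, at the cost of the toggling described above when the noise straddles the margin.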
Hqx also performs a binary comparison, but against all neighboring pixels, and the results feed a lookup table that returns an opcode describing how to blend several of the neighbors with the current pixel. So random changes in the comparison make the image crawl a bit and generally result in bad output. I can't think of an easy hack to fix this.
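A sketch of why hqx is so much more sensitive: the eight neighbor comparisons are packed into a single lookup-table index, so one unstable comparison selects an entirely different blend opcode. The names here (`similar`, `hqx_pattern`) are illustrative, and the real hqx does its comparison in YUV space with per-component thresholds rather than this simple RGB tolerance, but the bit-packing structure is the same.

```c
#include <stdlib.h>

/* Simplified stand-in for hqx's pixel comparison (the real one works
 * in YUV with per-component thresholds); either way the output is a
 * single yes/no bit. */
static int similar(unsigned a, unsigned b, unsigned margin)
{
    for (int shift = 0; shift <= 16; shift += 8) {
        int d = (int)((a >> shift) & 0xff) - (int)((b >> shift) & 0xff);
        if (abs(d) > (int)margin)
            return 0;
    }
    return 1;
}

/* Build the 8-bit lookup-table index from a 3x3 window.  n[0..8] scan
 * the window row by row; n[4] is the center pixel.  Each neighbor that
 * differs from the center sets one bit, so a single noisy comparison
 * flips one bit and lands on a different table entry / blend opcode. */
static unsigned hqx_pattern(const unsigned n[9], unsigned margin)
{
    static const int order[8] = {0, 1, 2, 3, 5, 6, 7, 8};
    unsigned pattern = 0;
    for (int i = 0; i < 8; i++)
        if (!similar(n[order[i]], n[4], margin))
            pattern |= 1u << i;
    return pattern;
}
```

Even with a margin, a noisy neighbor sitting right at the threshold flips the whole pattern index frame to frame, which matches the crawling I'm seeing.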
Ideally, there would be a way to remove or ignore the frame-to-frame variation in pixel values when the picture isn't actually changing. Something simple like dropping a number of the low-order bits of the RGB values doesn't seem to work well. I was hoping the latest release code, with its oversampling of the input, would improve things, but nothing changed. My current plan is to collect some statistics from static images and find out which pixels tend to change value and what the set of values is. This will hopefully lead to a better set of comparison functions. A frame buffer would help with this, but testing a few rows at a time is still useful.
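Something like the following is what I have in mind for the statistics pass (the `pixel_stats` type and function names are just a sketch, not existing code): for each pixel position, track the min and max of each channel across a run of frames of a static image. The spread shows how wide a margin would have to be, and the raw values should show whether the noise straddles a power-of-two boundary, which would explain why just masking low-order bits fails (e.g. 0x7f and 0x80 differ by 1, but masking the low 3 bits still leaves 0x78 vs 0x80).

```c
/* Per-pixel-position statistics gathered over a run of frames of a
 * static image: min and max seen for each RGB channel.  One of these
 * per pixel in the rows under test. */
typedef struct {
    unsigned char min[3];
    unsigned char max[3];
} pixel_stats;

static void stats_init(pixel_stats *s)
{
    for (int c = 0; c < 3; c++) {
        s->min[c] = 0xff;
        s->max[c] = 0x00;
    }
}

/* Fold one RGB888 sample into the running min/max. */
static void stats_update(pixel_stats *s, unsigned rgb)
{
    for (int c = 0; c < 3; c++) {
        unsigned char v = (unsigned char)((rgb >> (8 * c)) & 0xff);
        if (v < s->min[c]) s->min[c] = v;
        if (v > s->max[c]) s->max[c] = v;
    }
}

/* Spread per channel: the margin a comparison would need to absorb
 * the noise at this pixel position. */
static int stats_spread(const pixel_stats *s, int c)
{
    return s->max[c] - s->min[c];
}
```

Without a frame buffer this can still run on a few rows at a time, accumulating over many frames and dumping the spreads afterwards.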
Any ideas or suggestions are welcome.