sampling phase auto-sync


Viewing 14 posts - 1 through 14 (of 14 total)

    I’ve been thinking of ways to make the sampling phase selection more automated. I started implementing something and realized it may not work very well. My current plan is to:
    – Rely on the user to display a static image which is sensitive to jitter (main menu, pause screen, etc).
    – Upon activating the auto-sync, sequence through all possible phases using the CPU and request a hardware checksum of a sampled frame. Collect multiple samples per phase.
    – Look across all phases and pick the center of the “eye”. Minimize largest checksum delta?
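The sweep could be sketched roughly like this, with hypothetical `set_phase()`/`frame_checksum()` callbacks standing in for the real CPU register write and FPGA checksum readback (a phase counts as "stable" when every sampled frame hashes identically, and the eye is the longest circular run of stable phases):

```python
def sweep_phases(set_phase, frame_checksum, num_phases=32, samples=8):
    """Step through every sampling phase and record whether all sampled
    frames of the static image produce an identical checksum there."""
    stable = []
    for phase in range(num_phases):
        set_phase(phase)
        checksums = {frame_checksum() for _ in range(samples)}
        stable.append(len(checksums) == 1)   # jitter shows up as differing hashes
    return stable

def eye_center(stable):
    """Return the phase in the middle of the longest run of stable phases,
    treating the phase list as circular (360 degrees wraps back to 0)."""
    n = len(stable)
    if all(stable):
        return 0
    ext = stable + stable                    # unroll the circle once
    best_len, best_start, run, start = 0, None, 0, 0
    for i, ok in enumerate(ext):
        if ok:
            if run == 0:
                start = i
            run += 1
            if run > best_len:
                best_len, best_start = run, start
        else:
            run = 0
    return None if best_start is None else (best_start + best_len // 2) % n
```

This only finds the eye when the "unstable" phases actually produce differing checksums from frame to frame, which is exactly the assumption being questioned below.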

Ideally, the static image would be known and have a checkerboard-like pattern that is sensitive to jitter, but I can’t think of a way to do that. It would also be useful to make frame-to-frame comparisons. That doesn’t seem like an option without sampling a reduced set of pixels at a time, which could take a while.

Checksum is a poor way to compare frames. Perhaps a histogram would be better? It would be very useful to incorporate pixel location into the hash. Maybe compute a separate checksum for odd and even columns? I also wonder whether other slight variations in the pixel values are going to add a base error that makes the actual visible error insignificant.
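The odd/even-column idea could look something like this (a sketch, with CRC32 standing in for whatever hash the hardware would actually compute):

```python
import zlib

def column_parity_checksums(frame):
    """Hash even and odd columns separately so pixel position enters the
    checksum: a one-pixel horizontal jitter moves pixels between the two
    sums, which a single whole-frame checksum would not necessarily catch.

    frame: list of rows, each a bytes object of pixel values."""
    even = zlib.crc32(b''.join(row[0::2] for row in frame))
    odd = zlib.crc32(b''.join(row[1::2] for row in frame))
    return even, odd
```

On a checkerboard-like pattern, a one-pixel shift swaps which pixels feed which sum, so the pair of checksums changes even if a position-blind hash of all pixels would not.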

Another idea is to have the FPGA locate sensitive areas by watching groups of pixels at a time to see if they change. Once that’s found, the CPU can cycle through the phases and have the FPGA report back the largest delta changes. This removes the errors introduced by trying to hash a frame, but may miss problems.

    Any suggestions or problems I may have missed?


I think a problem at the moment is that the phase setting is not very accurate. When changing the setting you often have to change it back and forth multiple times to see a change, or it loses the phase lock when the video mode changes.

Marqs said that it’s a TVP7002 bug with the PLL x2 config it’s using now, and that it might work better with the standard PLL config in the optimized linetriple modes.


I never took notice of what phase values worked best. Is it that there is always one good value (ignoring the bug), and because of how the DIV2 logic works it picks one of the two edges when dividing by 2, resulting in a possible 180-degree phase change in the generated clock?

    I assumed the ideal value was random and needed to be discovered.


Well, in my experience the Nintendo 64 has the sharpest picture around 90 degrees phase with a 387 sampling rate, but with the SNES I can get a sharp picture at 256×240 optimized with just about any phase by alternating between 2 phases.


For me, the phase changes on SNES are most noticeable with high-contrast, small text. Here’s a video where I cycle from 0 degrees to 347.5 degrees in ~0.5-second increments. You can see a lot of shimmering on the yellow text near the beginning and end of the video. The center of the eye is around the 12–14 second range in the video, which is at 180 degrees. Turning on the LPF seemed to make the best setting more predictable.

    I’m not sure what causes that pixel shift on certain rows as it cycles through.


    The SD2SNES menu uses the high resolution 512×224 mode for the font so that might mean you need a more specific phase setting.

By the way, didn’t you lose sync sometimes when you cycled through the phase settings, or is that just my monitor?


The phase seems to jump (180 degrees?) in the optimized modes every few changes in value. I read another post about the PLLDIV2 bug related to the sampling phase. The TVP7002 documentation says half of the sampling phase values are not valid with the post divider, but that’s all it says about it. Maybe it isn’t known which settings are invalid until a relationship has been established.

    I tried a few things:
– Automatically advancing the current phase setting by +1 and then back to +0 after sync is established. Also tried +2 and +16.
    – Button to do the same as above.
    – Reset to divide by 1 and then back to divide by 2 after sync is established.
    – Re-use the line buffer to record values from a specific row on multiple frames and diff them to measure quality of image.

The button was the only thing that really worked, but it was not reliable. Sometimes it didn’t work on the first press. Resetting the video source, resetting the OSSC, or advancing/returning the phase all seem to randomly cause it to get a bad phase after sync. Whether the phase is bad or not seems to correlate with a desync. It consistently goes either in or out of phase every time the red LED lights up when cycling through the phases using the menu. But after power cycling one of the devices it could be either in or out of phase.

    I don’t understand the relationship between phase selection and PLLDIV2 well enough to know what else to try.


Could you try to remove MODE_PLLDIVBY2 in video_modes.c from the optimized modes to see if it’s more reliable?

    Here is a method for auto phase from TI.

    Basically though there are 2 steps…

1) Sweep the AVID start/stop so that sampling begins before the expected location, then monitor the captured frame buffer until you see a pixel value other than black(ish) appear in the first column. Another way to do this is to start capturing early, then analyze the captured frame buffer to find the first non-black(ish) pixel and subtract this offset from the start/stop registers so that the first non-black pixel appears at the start location.

2) Sweep the sampling phase through all values whilst looking at the first non-black pixel. You should see the captured pixel value cycle from black up to the max pixel value. Do the same on the last pixel so that you find the corresponding max-to-black transition. Then you will have a phase profile showing all phases and corresponding samples. Set the sampling phase to the setting which corresponds to the midpoint of the max/min values (remember that this is basically a circle).
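If I read step 2 right, the final selection boils down to something like this sketch, where `profile[p]` is the hypothetical captured value of the boundary pixel at phase `p`:

```python
def pick_phase(profile):
    """Given the captured boundary-pixel value at each sampling phase,
    choose the phase halfway around the circle between the phase where
    the pixel bottoms out at black and the phase where it peaks."""
    n = len(profile)
    p_min = min(range(n), key=profile.__getitem__)   # blackest capture
    p_max = max(range(n), key=profile.__getitem__)   # fullest capture
    half = ((p_max - p_min) % n) // 2                # circular distance / 2
    return (p_min + half) % n
```

The modular arithmetic is what "remember that this is basically a circle" buys you: the answer stays consistent even when the black-to-max transition straddles the 360/0 wrap.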


    I remember trying to remove DIVBY2 before and not being able to find a phase value that reduced jitter to an acceptable level. I gave it a try again and was able to find a setting that looked good this time. With divide by 1, changing the phase and resetting either the ossc or video source doesn’t have the random phase change problems after a desync.

    The red LED is now constantly lit which is probably due to a clock timing problem this change introduces. Unfortunately, I know very little about PLLs and clocking.


    Ideally sampling clock and phase would be automatically detected, but in reality I’ve never seen an implementation that would have worked that reliably unless used with static hi-res picture. To get an idea how monitors do auto clock/phase for VGA, I recommend checking this filing.

    With TVP7002, a big issue is that you need to use H-PLL post divider with lo-res sources to keep PLL internal frequency high enough – otherwise significant jitter may occur. It wouldn’t be a problem if DIVBY2 was implemented properly, but it seems that divider output is not aligned, so basically you randomly get 0 or 180deg shifted signal every time after locking. One solution could be sampling at 2x rate without DIVBY2, and then handling clock division (dropping every other sample) using FPGA. Alternatively, it might be possible to improve jitter performance by tuning loop filter component values. Both ideas have been on my todo list, but I haven’t had time to look more closely into them yet.


I’ve been trying to implement the sample-skip method for solving the 180-degree phase shift problem, but am having trouble debugging my changes. What happens is that half the time I get a good image, and the other half the signal is lost when the PLL loses lock after changing the sampling phase or after a reset of the OSSC/video source. Without my changes it would have been 180 degrees out of phase instead of losing signal.

    What I changed:
    – Added a flag to double sample rate and clear the DIVBY2 bit in the TVP.
    – Doubled the input clock frequency on the 3rd PLL and divided all output clocks by 2. I’m only modifying linex3 M2 and M3 right now. Added a 1/2 output clock to simplify the clock alignment calculation.
– Skip advancing the line buffer write pointer every other sample. I don’t think I need to do any alignment since it starts with either the first or second of the paired samples; it doesn’t matter which one.
    – Synchronize sync signals by checking both current and delayed version since input clock is now 2x.
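The write-pointer stall in the third bullet amounts to this (a behavioural model in Python, not the actual Verilog):

```python
def linebuffer_writes(samples_2x):
    """Model of writing 2x-rate samples into the line buffer while only
    advancing the write pointer every other sample: the second sample of
    each pair overwrites the first, so the buffer ends up holding the
    stream decimated by 2. Which sample of the pair survives doesn't
    matter -- both are the same pixel, half a 2x clock apart."""
    buf, wptr, advance = {}, 0, False
    for s in samples_2x:
        buf[wptr] = s        # latch every 2x-rate sample
        if advance:
            wptr += 1        # advance only on every second sample
        advance = not advance
    return [buf[i] for i in range(wptr)]
```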

My guess is that when the phase change happens I’m not generating output sync properly. I’ve been slowly adding debug information to the LEDs, which is a slow process. Both hsync and vsync are still generated when it loses signal. I also have an entry-level 2-channel oscilloscope that probably doesn’t have the bandwidth to measure most of the video signals, although it might handle sync. I haven’t tried it yet.

    Any suggestions for what to look at? Is there a better way to debug this?

    EDIT: Looks like it had something to do with how I was stalling the write pointer. It works now, but I’m not quite sure why. It’s not a very satisfying solution.


It might be easier to divide the incoming pixel clock by 2 as the first thing on the FPGA using a basic flip-flop divider, synchronizing latched TVP7002 outputs to this generated output clock. The divider should be made so that the output rising/falling clock edge is aligned to the latched hsync edge based on user preference, basically giving a coarse 0/180deg phase select option (unlike the TVP7002, which does it randomly with DIVBY2). With this method you don’t need to make changes to PLL settings or the actual scanconversion code.
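A behavioural model of that divider (Python standing in for the Verilog; `invert` is the hypothetical user-facing 0/180-degree preference):

```python
def div2_aligned(hsync_per_pclk, invert=False):
    """Flip-flop divide-by-2 of the incoming pixel clock, with the divider
    phase forced to a known state at every latched hsync rising edge. The
    0/180-degree alignment is then deterministic and selectable via
    `invert`, instead of random as with the TVP7002's DIVBY2.

    hsync_per_pclk: hsync level sampled once per input clock cycle."""
    out, q, prev_h = [], int(invert), 0
    for h in hsync_per_pclk:
        if h and not prev_h:        # hsync rising edge: re-align divider
            q = int(invert)
        out.append(q)               # divided-clock level this input cycle
        q ^= 1                      # toggle -> half-rate square wave
        prev_h = h
    return out
```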

As for FPGA debugging, it’s easy to route (directly or using SignalProbe) sync signals etc. to the LED/SD pins (as you seem to have done already) and then probe them with a scope.


Generating a divided-down clock prior to the PLLs and aligning based on the divided clock is a much simpler solution. I like it. I was unsure, though, whether a flop-based clock is safe to use. I gave it a try and got an error:

    Error (15065): Clock input port inclk[0] of PLL "scanconverter:scanconverter_inst|pll_3x:pll_linetriple|altpll:altpll_component|pll_3x_altpll:auto_generated|pll1" must be driven by a non-inverted input pin or another PLL, optionally through a Clock Control block
    	Info (15024): Input port INCLK[0] of node "scanconverter:scanconverter_inst|pll_3x:pll_linetriple|altpll:altpll_component|pll_3x_altpll:auto_generated|pll1" is driven by pclk_05x which is Q output port of Register cell type node pclk_05x

    It looks like the Altera PLLs only take an input from a global clock pin or the output of another PLL through a control block. Perhaps there is something I’m missing.

    I can use a divided down clock to latch the values inside the scanconverter block and modify the PLLs to provide divide by 2 clocks. That’s still a lot simpler than what I currently have.


Now that you brought up the error, I recall running into the same constraint when I made a quick attempt at this a long time ago. As you suggested, it’s possible to use a single PLL for generating the DIV2 output (for re-latching) and the 3x etc. clocks for the actual processing.
