Sega 3D Games


Viewing 13 posts - 1 through 13 (of 13 total)

    Sega’s 3D glasses contained two LCD shutters, one for each eye. While the CRT was drawing the odd scanlines, one shutter would be closed; while the even scanlines were being drawn, the other shutter would be closed.
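The alternation described above can be sketched in a few lines. This is purely illustrative; which eye gets the first field is an assumption, not taken from hardware documentation:

```python
# Sketch of the SegaScope shutter scheme: the console alternates left- and
# right-eye images on successive fields, and the glasses blank the opposite
# eye's LCD shutter in sync with the vertical refresh.

def eye_for_field(field_index):
    """Return which eye sees the given field (0-based field counter)."""
    return "left" if field_index % 2 == 0 else "right"

# Over one 1/30 s cycle each eye sees one ~1/60 s field:
sequence = [eye_for_field(i) for i in range(4)]
# -> ['left', 'right', 'left', 'right']
```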

    Today there are TVs that support digital 3D signals from Blu-ray players. Could the OSSC output a digital 3D signal when playing Sega 3D games?


    For passive 3D (most TVs) it would have to draw one frame on the top half of the screen and then the other frame on the bottom half. That would only support the vertical (top-bottom) 3D mode and the frame rate would be halved, but it might technically be possible. I’m not sure how the timings would interact, though; if you managed to get the signal accepted, the TV would then do the heavy lifting of “making it 3D”.


    +1 would love this feature !!

    It’s not only about Sega 3D games (eg OutRun 3D on the SMS): there are also Famicom 3D games (eg Rad Racer), 3D DVD movies…

    Converting 3D frame-sequential (for progressive input material) or field/line-alternate (for interlaced input) to HDMI top-bottom 3D seems in line with the OSSC’s zero-latency approach.

    With a 60 Hz input, the output would be two pictures at half the normal vertical resolution, one above the other: either 30 fps with both pictures fully updated each frame, or 60 fps with the two pictures updated alternately.
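The two output options above boil down to simple arithmetic. A minimal sketch, assuming a 240-line active picture per eye (the line counts are illustrative):

```python
# Back-of-the-envelope numbers for converting frame-sequential 240p60
# (alternating L/R frames) into a top-bottom 3D output.

INPUT_FPS = 60          # one eye's picture arrives per input frame
LINES_PER_EYE = 240     # assumed active lines per eye

# Option A: emit a complete top-bottom pair only once both eyes have arrived.
pair_rate = INPUT_FPS // 2          # 30 complete stereo pairs per second
frame_lines = 2 * LINES_PER_EYE     # 480 active output lines per pair

# Option B: emit at 60 fps, refreshing only the half that just arrived;
# the other half repeats the previous eye's picture.
option_b_rate = INPUT_FPS                # 60 output frames per second
updated_lines_per_frame = LINES_PER_EYE  # only 240 lines are new each frame
```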

    It would be a unique feature !!


    Second thought: HDMI 3D side-by-side would be easier to output with zero lag.
    Is the source code available somewhere? I could contribute to help develop this…


    Source code is here –

    Go for it 🙂


    Thanks. Went through it quickly; impressive job!


    Not used to HDL programming…


    After taking a much closer look at the whole source code, I’m afraid there’s no easy tweak to achieve this with the v1.6 hardware…

    Let’s assume we have 3D frame-sequential 240p60 input material (or, very similarly, field/line-alternate 480i60).
    We get 240 lines for the left eye, then 240 lines for the right eye, sequentially.

    I see two ways to generate a lag-free output usable by modern 3D renderers.


    The first idea is to output 480p30 top-bottom, with the top 240 lines coming from one input frame (1/60th of a second) and the bottom 240 lines coming from the following input frame (1/60th of a second). A complete output frame takes 1/30th of a second. Two problems:

    – After outputting the first 240 lines, the first line of the following input frame is not available immediately. A solution would be to buffer the first lines of each frame and start the output with a slight delay, matching the gap between the two consecutive input frames; but that requires a buffer, and it introduces a slight lag.
    – It needs an output timing different from the input timing. Is that possible with the current architecture?
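The size of that buffer and delay can be estimated. A minimal sketch, assuming a 262-line total frame (a common 240p console timing) at the ~15.73 kHz NTSC line rate; the numbers are illustrative, not measured from real hardware:

```python
# Rough timing for the top-bottom 480p30 idea: after the top picture's
# 240 active lines are emitted, the bottom picture's first line is still
# one vertical-blanking interval away on the input. Buffering that many
# lines (and delaying output start by the same amount) bridges the gap.

TOTAL_LINES = 262       # lines per 240p input frame (incl. blanking), assumed
ACTIVE_LINES = 240
LINE_RATE_HZ = 15734.0  # NTSC horizontal rate

blanking_lines = TOTAL_LINES - ACTIVE_LINES        # 22 lines to bridge
buffer_delay_ms = blanking_lines / LINE_RATE_HZ * 1000.0

# A ~22-line buffer and roughly 1.4 ms of added latency would suffice.
```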


    The second idea is to output 480p60 side-by-side with:
    – even frames: left half = current input (line-doubled, and simultaneously saved to the L framebuffer), right half = read from the R framebuffer
    – odd frames: left half = read from the L framebuffer (= previous input frame), right half = current input (saved to the R framebuffer)
    Zero lag, but we need a large buffer (even if L and R could be merged into a single framebuffer). Today’s one-line buffer is not enough.

    There’s probably no room for a full framebuffer on the FPGA…
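A quick sanity check supports that. The memory budget below is an assumption (Cyclone IV-class parts used in this era typically offer on the order of 500 kbit of embedded block RAM; check the actual part’s datasheet), and the resolution and bit depth are illustrative upper bounds:

```python
# Does one eye's framebuffer fit in on-chip block RAM?

WIDTH, HEIGHT = 320, 240   # generous 240p active area (assumed)
BITS_PER_PIXEL = 15        # RGB555 upper bound; SMS palettes need far less

framebuffer_kbit = WIDTH * HEIGHT * BITS_PER_PIXEL / 1024   # 1125.0 kbit
BLOCK_RAM_KBIT = 500       # assumed on-chip memory budget

fits = framebuffer_kbit <= BLOCK_RAM_KBIT   # False: over twice the budget
```

Even a single eye’s buffer at these conservative numbers exceeds the assumed on-chip memory, which is why a hardware revision with external RAM keeps coming up.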



    Thanks for looking into this. What a bummer, that would have been an amazing feature to have! I wonder if anyone has done it for VR glasses yet? The Sony PSVR takes an HDMI in/out; could the signal be pushed to it and the image split into two halves of the screen?


    From what I’ve read, the Sony PSVR takes 3D top-bottom and side-by-side input only, not frame/field-alternate material. So we still need to convert our old sources to T&B or SBS…

    Our last hope is that Marqs adds a framebuffer in the next HW revision of the OSSC. That would be useful for other purposes as well, like more advanced (but still lag-free) deinterlacing (weave/blend/smart bob…).

    Let’s launch a petition 😉


    Here’s a video that clearly shows the frame-sequential 3D rendering of the Sega Master System


    HDMI supports the “frame packing” 3D format, which is basically the same as frame sequential. However, displays supporting 3D HDMI formats probably require exact timings to be able to recognize and decode the signal.


    Hi Marqs, yes, frame packing would be another way to go, but it looks like top-bottom with a 45-line blank between the two pictures. I didn’t do the maths, but that seems like too small a gap to allow the second input frame to arrive in time, so it doesn’t solve the timing problems of the top-bottom solution.
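Doing the maths roughly confirms the concern. A sketch, assuming a 262-line 240p input and taking the 45-line gap at face value; note that in frame packing the two stacked pictures plus the gap share one 1/60 s period, so the output line rate is about double the input’s:

```python
# Frame-packing timing check: how far into the input frame period does the
# bottom (second-eye) picture start on the output, versus when the second
# input frame actually arrives?

INPUT_TOTAL_LINES = 262          # 240p frame incl. blanking (assumed)
OUTPUT_GAP_LINES = 45            # gap between the packed pictures
ACTIVE_LINES = 240

# Output lines are ~half as long as input lines, so convert to input-line units.
bottom_start_input_lines = (ACTIVE_LINES + OUTPUT_GAP_LINES) / 2   # 142.5
second_frame_arrival = INPUT_TOTAL_LINES                           # 262

shortfall = second_frame_arrival - bottom_start_input_lines        # 119.5
gap_is_enough = shortfall <= 0   # False: roughly half a frame must be buffered
```

So the 45-line gap covers only about 22 input-line times; the second eye’s picture would be needed roughly half an input frame before it exists, which again means a framebuffer.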

    The more I think about it, the more I realize SBS is probably the best way to go (perfect sync with the input, no lag), but it still requires a buffer…
