DisplayPort, HDMI, D-Sub, DVI... every single way of feeding imagery to one's display device requires a "signal" at a specified frequency.
This concept is a remnant from the days when the first television signals were broadcast, and we have been dragging it along with us all the way to our current LCD, LED and OLED display devices.
It is a bane, a bottleneck we force on ourselves.
Most of the people who use monitors on a daily basis don't ask too many questions about how they work, let alone how they could work better. Those who do, like me, usually quietly hope things will change for the better. This time, however, I decided to say something.
For a handful of reasons I'll go into here (though there are a lot more), we need to step away from the whole fixed-timing-per-frame concept.
It is technologically very feasible to send a single image to a display device and, in the first case, specify at the end of the data how long that image should be displayed for, in nanoseconds, doing away with the whole frequency legacy.
For bufferable data, more frames can be sent until the display device is unable to store more.
In the second case, a frame can even be displayed for an unlimited amount of time, until the next one is sent to the display.
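To make that concrete, here is a rough sketch in C of what a frame packet on such a link could look like. This is purely illustrative; every name and field here is my own invention and not part of any existing standard. The only "timing" left is a per-frame duration in nanoseconds, with zero meaning "hold this frame until the next one arrives":

    /* Hypothetical frame packet header for a frequency-less display link.
     * Nothing here is an existing standard; it only illustrates the idea. */
    #include <stdint.h>

    struct frame_packet_header {
        uint32_t width;         /* frame width in pixels                         */
        uint32_t height;        /* frame height in pixels                        */
        uint32_t pixel_format;  /* e.g. an enum value for RGB888, YCbCr 4:2:0... */
        uint64_t duration_ns;   /* case 1: show this frame for exactly this many
                                   nanoseconds; 0 = case 2: hold it on screen
                                   until the next frame arrives                  */
        /* width * height worth of pixel data follows on the wire */
    };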
What benefits would this bring?
For video, we have mixed-frequency content (variable frame rate, VFR[*]), where some bits are 30 fps and some bits are 24 fps. Currently, a display is "opened" at either 24, 30, 60 or even 120 Hz. At 30, the 24 fps content gets an added frame every 4 frames; at 24, frames get dropped; at 60, frames are displayed either twice or three times; at 120, the 24 fps stream has its images displayed 5 times in a row. At the lower frequencies, due to the limited number of frames, losing or doubling a frame every fifth of a second or so creates very visible stutter on anything that scrolls. At the other end of the spectrum, with a 120 Hz signal being sent to the display device, there is no really visible stutter, even when for sync reasons a 6th frame is added or the 5th frame gets dropped once every minute. But the interpolation technologies currently in our displays really don't like duplicates, and generate massive stutter if interpolation is enabled (Motion Plus, TruMotion, and so on).
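For those who like numbers, here is a tiny, purely illustrative C program that prints how many refresh cycles each 24 fps source frame ends up occupying at a few fixed refresh rates (naive rounding, no real pulldown logic). The uneven 3-2-3-2 cadence at 60 Hz and the doubled frame every fourth frame at 30 Hz are exactly the stutter described above:

    /* Print how many refresh cycles each of the first six 24 fps source frames
     * occupies on displays running at a few fixed refresh rates.               */
    #include <stdio.h>

    int main(void) {
        const double source_fps   = 24.0;
        const double refresh_hz[] = { 24.0, 30.0, 60.0, 120.0 };

        for (int i = 0; i < 4; i++) {
            double hz = refresh_hz[i];
            printf("%6.0f Hz:", hz);
            int prev = 0;
            for (int f = 1; f <= 6; f++) {
                /* refresh cycle (rounded) at which source frame f is replaced */
                int shown_until = (int)(f * hz / source_fps + 0.5);
                printf(" %d", shown_until - prev);
                prev = shown_until;
            }
            printf("   refreshes per source frame\n");
        }
        return 0;
    }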
Even for CFR, precise timing for video streams is very hard to achieve with the current "display standards": a true 23.976 (24000/1001) Hz signal is often hard to create without a lot of tweaking of the video card. My current television, a Samsung, accepts a signal created this way; my previous one, also a Samsung, only accepts 60 Hz. Being able to say how long a frame should be kept on screen, without that Hz notion being there, would in this case make for perfect sync without any tweaking. The lengths that the people who make video players go to in order to make these things work somewhat acceptably are beyond herculean. Even with the best efforts from programmers put in place, the bulk of people watching video have to deal with either a frame drop or a duplicate frame every so often to keep up sync.
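With per-frame durations, that whole problem evaporates: the player simply hands the display the exact length of every frame. Here is a minimal, illustrative C sketch for 24000/1001 fps content, using integer nanoseconds and carrying the remainder forward so nothing ever drifts:

    /* Each frame of 24000/1001 fps content should last 1001/24000 seconds,
     * i.e. 41,708,333.33... ns.  No integer nanosecond value hits that exactly,
     * so the fractional remainder is carried into the next frame's duration.   */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const uint64_t num = 1001ULL * 1000000000ULL; /* ns per 24000 frames */
        const uint64_t den = 24000ULL;

        uint64_t acc = 0;  /* running numerator, keeps the leftover fraction */
        for (int frame = 0; frame < 6; frame++) {
            acc += num;
            uint64_t duration_ns = acc / den;   /* duration handed to the display */
            acc -= duration_ns * den;
            printf("frame %d: display for %llu ns\n",
                   frame, (unsigned long long)duration_ns);
        }
        return 0;
    }

Over any stretch of frames, the durations average out to exactly 1001/24000 of a second, so there is never a dropped or duplicated frame just to keep sync.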
As for games: currently the fixed-frequency model generates "tearing[*]" unless one enables something called "vsync", where the next frame isn't drawn unless it is in sync with the display frequency. There are workarounds for the situations where this would otherwise halve your GPU's effective capacity: triple buffering keeps a constant backlog of 3 frames, causing a very, very slight delay in response, but makes it so one's GPU is used to its fullest. Using a tech where there is no sync, a GPU can take the time it needs to generate a frame, then send it to the monitor when it's done. The monitor displays it until it receives another one. Up until the point where the screen can't display any more, or the GPU can't generate any more, everything would scale perfectly.
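As a rough illustration of the halving problem, here is a tiny C calculation (simplified: double buffering, constant render time, no real API) comparing a vsync'd 60 Hz display against simply presenting each frame the moment it is finished:

    /* If the GPU needs just over one refresh interval per frame, a vsync'd,
     * double-buffered pipeline has to wait for the *next* refresh boundary,
     * so a 16.8 ms render on a 60 Hz display collapses to 30 fps.  With
     * "present whenever the frame is ready", the display simply follows the GPU. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double refresh_interval_ms = 1000.0 / 60.0;  /* 60 Hz display           */
        const double render_time_ms      = 16.8;           /* a bit over one interval */

        /* vsync: the finished frame waits for the next refresh boundary */
        double vsync_interval_ms =
            ceil(render_time_ms / refresh_interval_ms) * refresh_interval_ms;

        printf("vsync'd     : %4.1f fps\n", 1000.0 / vsync_interval_ms); /* ~30.0 */
        printf("frame-timed : %4.1f fps\n", 1000.0 / render_time_ms);    /* ~59.5 */
        return 0;
    }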
Existing video in legacy standards would also benefit a lot from this. If something only exists in a certain format, currently that format would have to be edited, modified and re-encoded to be viewable on more modern displays. Abolishing the fixed-Hz tech would make mixed content a simple achievement, the display deciding how to show it depending on its capabilities while the "signal" stays the same.
Current signals, be it composite video, HDMI or D-Sub, all have the philosophy behind them that they would or could potentially be recorded. We are past that; we need optimal communication between our displays and our display processors, and to do away with a forcefully implemented bottleneck. Recording could of course be seen as a drawback of this whole concept. Current recording "equipment" needs this kind of fixed, predictable format; recording a frequency-less output would be akin to capturing a data stream, and would need a computer equivalent. Most modern equipment is just that, of course, but I could understand an argument like this, although it is a little past its expiration date, being put forward. "Negotiation" and "handshaking" information between the display device and the sender would also not be recorded. But all in all we are past the recording age; it's all data on a storage device these days, and moving that data around. TiVo, digital TV and so on: it's all data on storage devices.
As for the actual cabling, IMO a standard should primarily be built on an abstract two-way communication model. We have enough low-latency hardware standards that can carry data between our monitors, TVs, GPUs, video decoders and so on. Current HDMI 1.4 is about 10 gigabit, and data transfer over short copper runs at up to 10 times that is readily available these days[*]. Using a current network implementation instead of designing new and exotic connectors would probably even be the better move. The technology is there either way. DisplayPort, although still sounding exotic to some of us, has never been anything beyond DVI at twice the speed with a new connector.
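As a rough sanity check on bandwidth (uncompressed RGB, ignoring blanking and protocol overhead), a link in the 10 gigabit class already moves a couple of hundred raw 1080p frames per second, so "send a frame whenever it is ready" asks nothing exotic of the cable:

    /* Back-of-the-envelope: raw 1080p frames per second over a ~10 Gbit/s link. */
    #include <stdio.h>

    int main(void) {
        const double link_bps   = 10e9;                   /* ~HDMI 1.4 class link   */
        const double frame_bits = 1920.0 * 1080.0 * 24.0; /* one 1080p RGB888 frame */

        printf("one raw frame : %.1f Mbit\n", frame_bits / 1e6);  /* ~49.8 Mbit */
        printf("frames/second : %.0f\n", link_bps / frame_bits);  /* ~201       */
        return 0;
    }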
Bottom line: along with interlacing, fixed-frequency signalling should be something we strive to eliminate, and the data packets our "computers" use to interact with our displays should be far beyond a recording signal. None of the things mentioned here is sci-fi or future tech; it's all well and firmly available today.
Why this blog? I needed to put this idea somewhere. Instead of using an obscure forum, I'll just put it here. Google will surely stick it in the results of people who are thinking along the same lines and looking something up.
Any feedback is appreciated.
Kind Regards
Seto.