Wouldn’t display signals be much more efficient and cleaner if luma/brightness were separated into subpixel targets, using a base value, an exponent, and a lower-precision relative target? I.e., you send 255 so the pixel starts white, then -3 on a scale of -8 to +7 so it darkens more rapidly before evening out close to the target value, and a final value of 12, representing a target of roughly two-thirds white for the final pixel, giving a smooth subpixel transition curve in 16 bits of data. Then send either the two chroma components à la television (however exactly that works, I forget), or hue and saturation values staggered as 9 and 7 bits respectively, or maybe even 10:6, to give more weight to variation in hue than to variation in intensity, inferring a degree of the expected intensity range from the lightness values. Wouldn’t this make much more sense with modern subpixel LCD display technology, allowing any subpixel layout to have effectively built-in ClearType-style rendering at the display level, for every element, rather than the janky hack we have today? And perhaps even driver/software support for the more exotic or weighted subpixel layouts we’re seeing with OLED and other emerging display technologies?
And wouldn’t it also make sense to have temporal chroma subsampling for certain deployments, like professional illustration displays, where the image is often largely static and variation in the image comes mostly in the form of changing brightness values, or is strongly accompanied by changing brightness values, which the eye can detect more quickly and accurately than changes to hue or especially saturation? Especially since the brightness value of a pixel is only a third of the data sent, meaning you could effectively increase black-and-white bandwidth 4x by cutting color bandwidth in half? I.e., a 240 Hz drawing tablet whose colors only move at 30 Hz, versus a plain 60 Hz drawing tablet. I would imagine non-color-critical fields could appreciate this approach even more.
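To make that concrete, here’s a rough Python sketch of what I mean by the 16-bit luma word. The field widths (8-bit base, signed 4-bit rate exponent, 4-bit relative target) and the convergence rule are just my own assumptions for illustration, not anything real:

```python
# Hypothetical 16-bit luma word: 8-bit starting value, signed 4-bit rate
# exponent (-8..+7), 4-bit relative target in 1/15ths of full white.
# Field widths and the decay rule are assumptions made up for illustration.

def pack_luma(base: int, rate: int, target: int) -> int:
    assert 0 <= base <= 255 and -8 <= rate <= 7 and 0 <= target <= 15
    return (base << 8) | ((rate & 0xF) << 4) | target

def unpack_luma(word: int):
    base = (word >> 8) & 0xFF
    rate = (word >> 4) & 0xF
    if rate >= 8:
        rate -= 16                      # sign-extend the 4-bit exponent
    return base, rate, word & 0xF

def subpixel_levels(word: int, n: int = 3):
    """Per-subpixel drive levels: start at `base` and converge on the target,
    keeping 2**rate of the remaining error at each step, so a more negative
    exponent darkens faster before evening out (as in the -3 example)."""
    base, rate, target = unpack_luma(word)
    target_level = 255 * target / 15
    keep = 2.0 ** rate
    level, out = float(base), []
    for _ in range(n):
        level = target_level + (level - target_level) * keep
        out.append(round(level))
    return out

word = pack_luma(255, -3, 12)           # the example values from the post
print(subpixel_levels(word))            # e.g. [210, 205, 204]
```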
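As a back-of-the-envelope check on that, here’s a tiny bit-rate calculator; the resolution, bit depths and refresh rates are just illustrative assumptions, not the specs of any real tablet or link:

```python
# Rough bit-rate calculator for the temporal-chroma-subsampling idea above.
# Resolution, bit depths and refresh rates are illustrative assumptions.

def stream_gbps(width, height, luma_bits, luma_hz, chroma_bits, chroma_hz):
    pixels = width * height
    return pixels * (luma_bits * luma_hz + chroma_bits * chroma_hz) / 1e9

W, H = 2560, 1440
print(f"60 Hz everything         : {stream_gbps(W, H, 8, 60, 16, 60):.2f} Gbit/s")
print(f"240 Hz luma, 30 Hz chroma: {stream_gbps(W, H, 8, 240, 16, 30):.2f} Gbit/s")
print(f"120 Hz luma, 30 Hz chroma: {stream_gbps(W, H, 8, 120, 16, 30):.2f} Gbit/s")
# the last combination lands on roughly the same budget as the plain 60 Hz stream
```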
Why wasn’t this all done in the CRT days, where it also made sense to do? It’s so obvious that everyone must have already thought of it by now, especially with chroma subsampling being as old as color television.
I must be missing something, so could someone please explain to me why I’m stupid and this is a dumb idea?
I don’t really understand what is so unclean about display signals. For DisplayPort, most are just a representation of RGB for each pixel. Sure, if the computer knows what the subpixel layout is it can target subpixels, but it essentially can already do that, because it pushes information for each pixel, like ClearType does. Beyond that, display signals are already a lossless way to transmit information, unless you are asking the display to do anti-aliasing, but I’d rather let the computer do that.
ClearType needs information on which weighting is correct, and it only works for black and white. I’m suggesting something that separates lightness from color so that it works color-agnostically, without any software intervention. ClearType gets embedded in screenshots and can only apply to system-rendered fonts, nothing else. It’s a terrible mess, and it only barely functions, and only on horizontal RGB-triad displays.
I’m sure there’s some reason why this is a stupid idea, I’d just like to know what the reasons are, because I’m too stupid to see the problems with my idea.
Some of your suggestion is a transfer issue and some of it is a display-driver issue. The display driver is the unit that makes the call on how bright each (sub-)pixel has to be to represent what was sent to it.
The VESA spec for Display Stream Compression, starting at page 25, goes into detail on how to reduce the amount of data to fit through the cable.
I’m talking about essentially sending a higher-resolution B&W image, possibly sampled in accordance with the physical subpixel locations for rendered elements (text, 3D games), plus lower-depth color data to be mapped onto that grayscale image, allowing the subpixel-precision effects of ClearType to be applied universally without needing to reinvent ClearType, and without the issue of it being baked into screenshots.
It would be a shift in how computers display things, yes, and it would require software to calculate and render colors differently to take full advantage of it, but a shift of that scale basically already happened, more than once, back in the early color-television days.
There must be some reason it wasn’t taken to its logical, sensible conclusion, and I’m clearly too stupid to see why that is. It’s not about Display Stream Compression or the RGB signal at all.
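Something like this toy sketch, say, where the panel declares its own subpixel order and does the recombination itself. Treating colour as simple per-channel gains is a crude placeholder assumption on my part, not a real colour model:

```python
# Toy sketch of the display-side recombination described above: a luma plane
# at 3x horizontal resolution (one sample per subpixel) plus one low-depth
# colour sample per whole pixel, mapped onto whatever subpixel order the
# panel declares.

CHANNEL_INDEX = {"R": 0, "G": 1, "B": 2}

def recombine(luma_row, chroma_row, layout="RGB"):
    """luma_row: one 0-255 luma sample per physical subpixel (3 per pixel).
    chroma_row: one (r_gain, g_gain, b_gain) tuple per pixel, each 0.0-1.0.
    Returns drive values in the panel's own declared subpixel order."""
    assert len(luma_row) == 3 * len(chroma_row)
    out = []
    for px, gains in enumerate(chroma_row):
        for i, channel in enumerate(layout):       # panel-declared order
            y = luma_row[3 * px + i]               # positional, subpixel-res luma
            out.append(round(y * gains[CHANNEL_INDEX[channel]]))
    return out

# The same signal lights up an RGB-stripe and a BGR-stripe panel correctly:
luma = [255, 200, 150, 150, 150, 150]              # two pixels, six subpixels
chroma = [(1.0, 0.9, 0.6), (0.5, 0.5, 1.0)]
print(recombine(luma, chroma, "RGB"))
print(recombine(luma, chroma, "BGR"))
```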
You’re talking about a higher-resolution signal with compression on top of it, so it is an alternative to an RGB signal. Essentially what you are saying is to send a higher-resolution 4:2:2 chroma-subsampled signal and then have the display scale it down again.
The reason it isn’t done now? It adds latency, and you need more processing power in displays along with good scaling software, especially when it is subpixel-aware.
The people who want really clear text already have a solution: a high-PPI display. Then you are sending a higher-resolution image and the display just shows it!
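For reference, this is roughly all 4:2:2 does: per-pixel luma, with two horizontal neighbours sharing one chroma pair. The averaging here is just the simplest possible filter, picked for illustration:

```python
# Minimal sketch of 4:2:2 subsampling: every pixel keeps its own luma (Y),
# but two horizontally adjacent pixels share one (Cb, Cr) pair. The values
# below are illustrative placeholders.

def pack_422(ycbcr_row):
    """ycbcr_row: list of (Y, Cb, Cr) per pixel. Returns per-pixel Y plus
    one shared (Cb, Cr) pair per two pixels (simple averaging)."""
    ys = [y for y, _, _ in ycbcr_row]
    chroma = []
    for i in range(0, len(ycbcr_row), 2):
        pair = ycbcr_row[i:i + 2]
        cb = sum(p[1] for p in pair) / len(pair)
        cr = sum(p[2] for p in pair) / len(pair)
        chroma.append((cb, cr))
    return ys, chroma

row = [(180, 120, 130), (60, 125, 128), (200, 90, 140), (210, 92, 138)]
ys, chroma = pack_422(row)
print(ys)       # 4 luma samples
print(chroma)   # 2 shared chroma pairs -> 2/3 of the original data per row
```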
It really seems like it ought to be very low latency, though, since it can be applied at the last mile via analogue signal mixing. I mean, like I said, this is basically a variant on how NTSC color TV signals worked. There was zero added latency there, and it applies to more than just text.
It just seems so obviously useful: trade color bandwidth for grayscale bandwidth by using compressed subpixel resolution via curve hinting, rather than ugly 4:2:2 chroma subsampling smeared across multiple whole pixels.
A 25~33% increase in required display bandwidth for an effective 3x horizontal resolution, just accounting for a horizontal subpixel cluster layout, seems really obviously better than a 200% increase in bandwidth to do the same, no? Especially when you factor in that the display could declare its subpixel orientation pattern and have the computer sample accordingly, that this could even be used to increase the resolution of 3D games and illustration software, and that it could allow screenshots taken on RGB displays to show correctly on BGR displays, etc. etc.…
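Here’s where those rough percentages come from, using the 16-bit luma word from my first post plus 16 bits of chroma; all of the field sizes are my hypothetical ones, not any real format:

```python
# Rough bits-per-pixel comparison behind the figures above.

baseline_bpp    = 24          # ordinary 8-bit RGB per pixel
proposal_bpp    = 16 + 16     # 16-bit subpixel luma curve + 16-bit chroma
brute_force_bpp = 3 * 24      # just tripling horizontal RGB resolution

print(f"proposal   : +{100 * (proposal_bpp / baseline_bpp - 1):.0f}% bandwidth")
print(f"brute force: +{100 * (brute_force_bpp / baseline_bpp - 1):.0f}% bandwidth")
# proposal   : +33% bandwidth
# brute force: +200% bandwidth
```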
I still don’t see why this isn’t the norm for display technology today. I think there must be some other, better reason why it’s not.
There’s no point worrying about sub-pixels when pixels themselves keep getting smaller on a regular basis.
At the start of the CRT era everything was analogue and pixels didn’t even exist, let alone sub-pixels.
Consoles preceded personal computers, but since view distances were multiple metres pixels didn’t matter. The bottleneck in resolution was the graphics circuitry in the computer anyway. Specifically (V)RAM.
In the 90s the personal computer came of age, and enough people were glued 50 cm from their screens for eight hours a day for the issue of pixel density to finally be a thing. People could, in certain circumstances, finally see the RGB sub-elements that made up pixels. (Usually after they sneezed onto the screen and covered it with lots of small magnifying lenses, but hey.) The actual problem was that video resolutions were so poor given the size of the screens (we watched 320x240 videos on 15" screens on a regular basis in the mid-90s, you may recall) that the bottleneck was in the media being generated, so that received the focus and rapidly developed.
A small window of opportunity existed for what you are talking about from 2000-2010, but as consumption patterns shifted to the Internet, broadband speeds ended up being the limit to perceived quality.
Then in 2010 the Retina Display made its debut on the iPhone 4. Pixels so small you can’t even see them. From that point on it was all irrelevant. If you can’t even see pixels, then sub-pixels are meaningless for the consumer market. And there’s no point developing and implementing technology that won’t even be noticed by the market paying the bills.
tl;dr: We’ve never really had a period where the cable has been the primary bottleneck, so conserving bandwidth would never have yielded any benefit worthy of a marketing bullet point. Assuming that the display you are using has a high enough pixel density, then sub-pixels and any tech that performs fancy tricks with them are completely moot. We have lived in that (HiDPI) age for quite a while now. The LoDPI display market is dying… and related tech (even things like bog-standard anti-aliasing) will die with it.
Aren’t we constantly running up against the physical bandwidth limits of copper in our display cables right now? DisplayPort is a great example of flakiness, with some compliant GPUs failing to transmit a clean enough signal for certain displays, or over certain cables, with each step in the chain being technically compliant but the final result just being garbage.
Having used a Cintiq Pro 16 and fought with GPUs of the era, running up against the resolution and refresh-rate limits of the display standards of the time… it really feels like the display signal standards, and the physical hardware to send that much data, are the current bottleneck, and anything that increases efficiency could substantially increase the effective resolution and bandwidth of displays, leapfrogging existing ones.
I mean, as far as density goes, we have 4K displays that are tiny, and if we can cram that many pixels into that small a space, we can certainly make a bigger 8K or 16K display. But getting that many pixels to the panel at a decent refresh rate is the difficulty. The compute is just a matter of what you’re running: high-end modern games struggle, but for most productivity software… well, either performance isn’t the most important thing, or you can already scale it quite high…
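For a sense of scale, a quick back-of-the-envelope of raw pixel rates against approximate DisplayPort payload rates, ignoring blanking intervals and DSC; the link figures are approximate, not exact spec numbers:

```python
# Raw pixel rates versus approximate DisplayPort payload rates.

def raw_gbps(w, h, hz, bpp=30):          # 30 bpp = 10-bit-per-channel RGB
    return w * h * hz * bpp / 1e9

modes = {
    "4K @ 144 Hz": raw_gbps(3840, 2160, 144),
    "5K @ 120 Hz": raw_gbps(5120, 2880, 120),
    "8K @  60 Hz": raw_gbps(7680, 4320, 60),
}
links = {"DP 1.4 HBR3": 25.9, "DP 2.x UHBR20": 77.4}   # ~payload Gbit/s

for name, need in modes.items():
    ok = ", ".join(l for l, cap in links.items() if cap >= need)
    print(f"{name}: ~{need:.1f} Gbit/s -> {ok or 'needs DSC or a faster link'}")
```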
For streamed video or broadcast television, this isn’t really relevant at all. I’m thinking more of professional displays, or PC-enthusiast displays, for people who really want to push the limits of the technology. But that may be a fair point, and why I’m stupid: professionals who could benefit from this, and who have enough awareness to realize it’s possible, are too small a group, and we must be told what to think, not how to think.
Thanks everyone, I think I have a better view of this now; it’s just a large change, and nobody is willing to invest in such a change, because the market doesn’t realize it exists and likely wouldn’t even if it were there, staring it in the face.
The market is pretty reasonable in this, I think. What you are proposing is really expensive and not the way it would be done. Display processing is not analog, because that is too expensive to build. Where analog output is needed, it is very often done in the last stage with something like PWM.
The bandwidth problem is already solved with Display Stream Compression, which is already problematic even though it is a standard.
The sharpness problem is solved with higher-resolution displays, but most people don’t really need it.
To put it more briefly: your solution is technically not the best one, and it would need more expensive processing and hardware on both the GPU and the display side. (Sending analog signals in a serialized way means more data, or different timing that needs different processing.)
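For example, a backlight or LED drive level usually only becomes “analog” at that very last stage, as a PWM duty cycle. A minimal sketch, with made-up clock and depth numbers:

```python
# Minimal sketch of turning an 8-bit digital level into PWM timing in the
# last output stage. The 10 kHz carrier and 8-bit depth are made-up numbers.

PWM_HZ = 10_000                     # assumed PWM carrier frequency
PERIOD_S = 1.0 / PWM_HZ

def pwm_times(level: int, depth: int = 8):
    """Return (on_time, off_time) in seconds for a digital drive level."""
    duty = level / (2 ** depth - 1)         # 0..255 -> 0.0..1.0
    return duty * PERIOD_S, (1.0 - duty) * PERIOD_S

on, off = pwm_times(192)                    # roughly 75% brightness
print(f"on for {on*1e6:.1f} us, off for {off*1e6:.1f} us per cycle")
```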
Stuff a bunch of coax into one cable sheath and the available bandwidth shoots through the roof.
The cable will be about as flexible and as big as a garden hose, but 160 Gbit/s is achievable in that sort of package.
When you’re in the minority, that’s the way it is, unfortunately.
Probably over 99% of desktop use falls into one or more of these three categories:
Watching videos using a streaming service
Playing computer games
Doing education/work with an “office” suite of software
When streaming videos, the bottleneck is usually the Internet connection. When playing games, the bottleneck is usually being able to generate frames at the desired rate. When using office programs (and social media programs), bottlenecks are a moot point because the demands are so low. So, for >99% of users, i.e. for the mass market, the cable isn’t the bottleneck.
Far less than 1% of the population will be pushing the video cable and/or protocols to their limits. Profit-driven corporations have little incentive to marginally improve the lives of those people. There just aren’t enough of you to make it worth their while.
So I think “the reason” has more to do with market forces and corporate greed than with technical capability and sound reasoning. Your idea would work, but unless and until a corporation works out a way to make a profit from its implementation, it just won’t happen.
Display processing in an LCD is always analogue at the last mile, because sticking a digital processor inside every pixel is stupid, and physically moving anything in a purely digital manner doesn’t exist anyway. We use a digital signal between the output and the display to eliminate wire noise, since wires are essentially giant antennae, but your display still moves the liquid crystal with a physical voltage, i.e. an analogue signal.
It’s actually much cheaper than digital, and the only reason it’s not used everywhere is noise/reliability. Digital is on or off, so it’s easy to say the state is set in discrete steps. Analogue is everything in between, and it relates to the physical characteristics of the input or output. It cannot be removed from the equation, because the digital logic must interact with the physical world to have any reason to exist.
Maybe it’s less applicable to LED, since that’s driven by on and off time, but the analogue component is still there in the time-spent-on factor. It’s not as if raw pixel bits get separated out and sent straight to the subpixels to be processed into light for your eyes; the signal is always converted at the last mile into some kind of analogue drive, because it absolutely has to be, without any possible exception that exists in reality.
Even if you send digital audio over Bluetooth, it still needs to be converted to an analogue voltage to move your drivers. Even if the display signal is sent digitally over HDMI, it still needs to turn into an analogue voltage to move the subpixels.
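Roughly speaking, the very last step is always something like mapping a code value through the panel’s response curve to a drive voltage; a made-up sketch, with invented voltage range and gamma numbers:

```python
# Made-up sketch of the last analogue step in an LCD column driver: an 8-bit
# code becomes a drive voltage via the panel's response curve. The voltage
# range and the gamma exponent are invented for illustration.

V_MIN, V_MAX = 0.5, 3.3      # assumed liquid-crystal drive voltage range
GAMMA = 2.2                  # assumed panel response exponent

def code_to_voltage(code: int) -> float:
    level = (code / 255) ** (1 / GAMMA)      # map through the assumed response curve
    return V_MIN + level * (V_MAX - V_MIN)

for code in (0, 64, 128, 255):
    print(f"code {code:3d} -> {code_to_voltage(code):.2f} V")
```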