
Video bandwidth is precious. In particular, the HDMI standards seem to lag in providing the bandwidth I need, partly because I run old hardware.

One silly waste of bandwidth is blanking intervals. That mattered for CRTs, since steering the electron beam took time. It should not matter for LCDs.

CVT-RB (Coordinated Video Timings - Reduced Blanking) is a standard for reducing the blanking intervals. It sometimes makes the difference for supporting UltraHD.
<https://en.wikipedia.org/wiki/Coordinated_Video_Timings#Bandwidth>

I haven't yet fiddled with this. Perhaps it is currently optimal on my systems. But I have had mysterious troubles getting UltraHD working on some systems.

The Linux cvt(1) command has a flag --reduce: "Create a mode with reduced blanking. This allows for higher frequency signals, with a lower or equal dotclock. Not for Cathode Ray Tube based displays though." But I don't think that command is used in my (automatically generated) configurations.

Does Linux automatically know when to use CVT-RB? If so, how does it know? If not, how do you tell it?
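As a rough illustration of what reduced blanking buys, here is a sketch comparing the pixel clocks for 3840x2160 at 60 Hz with standard CVT blanking versus CVT-RB. The horizontal and vertical totals are taken from what cvt(1) and cvt -r print for that mode; treat them as illustrative, not authoritative.

```python
# Compare pixel-clock requirements for 3840x2160 @ 60 Hz with standard CVT
# blanking versus CVT-RB.  Every pixel slot, blanked or not, must be sent,
# so the pixel clock scales with the *total* (active + blanking) raster size.
# The totals below are what cvt(1) and cvt -r report; treat as illustrative.

def pixel_clock_mhz(h_total, v_total, refresh_hz):
    """Pixel clock in MHz for a given total raster and refresh rate."""
    return h_total * v_total * refresh_hz / 1e6

full_cvt = pixel_clock_mhz(5312, 2237, 60)   # cvt 3840 2160 60
reduced  = pixel_clock_mhz(4000, 2222, 60)   # cvt -r 3840 2160 60

print(f"full CVT: {full_cvt:.1f} MHz")
print(f"CVT-RB:   {reduced:.1f} MHz")
print(f"savings:  {100 * (1 - reduced / full_cvt):.0f}%")
```

Roughly a quarter of the link bandwidth goes to blanking with full CVT at this resolution, which can be the difference between fitting UltraHD into an older HDMI link or not.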

On 2023-06-19 14:47, D. Hugh Redelmeier via talk wrote:
One silly waste of bandwidth is blanking intervals. That mattered for CRTs since steering the electron beam took time. It should not matter for LCDs.
That doesn't make sense, especially when you consider how the digital system works, with things like I, P and B frames. https://en.wikipedia.org/wiki/Video_compression_picture_types

As you mentioned, blanking intervals are a relic of analog TV, dating back before WW2. There is absolutely no need for them with digital TV.

| From: James Knott via talk <talk@gtalug.org>
|
| On 2023-06-19 14:47, D. Hugh Redelmeier via talk wrote:
| > One silly waste of bandwidth is blanking intervals. That mattered for CRTs
| > since steering the electron beam took time. It should not matter for
| > LCDs.
|
| That doesn't make sense, especially when you consider how the digital system
| works, with things like I, P and B frames.
| https://en.wikipedia.org/wiki/Video_compression_picture_types

Just because it no longer makes sense (I called it silly) doesn't make it go away. But it may be vestigial. I don't yet know how to control it on my Linux desktops.

Is such compression part of what HDMI carries? For computer monitors? Almost all compression used in video is lossy -- not what I want for a computer monitor.

(My obsolete desktop monitor is a TV set. To get to 4k with HDMI 1.4 (or was it 1.2?), it uses 4:2:2 chroma sub-sampling, a kind of compression. This is looked down upon, to say the least.)

ATSC has compression. MPEG-n have compression. H.264 and H.265 have compression. VP9 has compression. Each is lossy. But I don't think that they are what flows over HDMI.

(There is compression coming for DisplayPort and HDMI standards to support 8k (DSC: Display Stream Compression). It is claimed to be "visually lossless" but I imagine that it isn't always lossless for computer monitor use.
<https://en.wikipedia.org/wiki/Display_Stream_Compression>)

| As you mentioned blanking intervals are a relic of analog TV, dating back
| before WW2. There is absolutely no need for them with digital TV.

Except various things exploited them. Like whacking on GPU registers only during blanking intervals to avoid tearing.

On 2023-06-19 18:04, D. Hugh Redelmeier via talk wrote:
Is such compression part of what HDMI carries? For computer monitors? Almost all compression used in video is lossy -- not what I want for a computer monitor
I don't know the details of what HDMI carries, but compression would generally be done near the source.

I still don't think there are blanking intervals with digital video. However, since you're using an analog monitor, blanking would have to be recreated for the analog signal.

| From: James Knott via talk <talk@gtalug.org>
|
| On 2023-06-19 18:04, D. Hugh Redelmeier via talk wrote:
| > Is such compression part of what HDMI carries? For computer monitors?
| > Almost all compression used in video is lossy -- not what I want for a
| > computer monitor
|
| I don't know the details of what HDMI carries, but compression would
| generally be done near the source.
|
| I still don't think there are blanking intervals with digital video. However,
| since you're using an analog monitor, blanking would have to be recreated for
| the analog signal.

There are blanking intervals with HDMI. HDMI is considered digital: the only signals are bits.

I don't consider my display to be analogue, but it is arguable. It is an LCD and has HDMI in.

I know that this seems silly. The reasons for it are historical. At no point in the evolution was a blank-page design employed. At least that's how I understand it.

(If I got to do a blank-page design, I'd eliminate refresh as we know it. I'd just have a stream of screen updates. I admit that that isn't the simplest way of transmitting movies. Without care, it might also look weird.)

On 2023-06-19 18:04, D. Hugh Redelmeier via talk wrote:
Except varous things exploited them. Like whacking on GPU registers only during blanking intervals to avoid tearing.
You may recall the Sinclair ZX80, which had a performance mode which killed the video. If you wanted a display, you got much less performance. https://en.wikipedia.org/wiki/ZX80

On Mon, Jun 19, 2023 at 03:11:44PM -0400, James Knott via talk wrote:
That doesn't make sense, especially when you consider how the digital system works, with things like I, P and B frames. https://en.wikipedia.org/wiki/Video_compression_picture_types
Compressed video is not related to how the signal is sent over HDMI. HDMI is sending uncompressed frames (not counting DSC as used sometimes for 8K video on HDMI).
As you mentioned blanking intervals are a relic of analog TV, dating back before WW2. There is absolutely no need for them with digital TV.
Supposedly HDMI is using some of the blanking space to send audio, so I guess it has some use. -- Len Sorensen

On 2023-06-20 09:23, Lennart Sorensen wrote:
Supposedly HDMI is using some of the blanking space to send audio, so I guess it has some use.
Is this documented anywhere? Sure, the audio is sent over the cable, but why should there be such a thing as a blanking interval on a digital system? The blanking interval was used to sync the camera and TV. There is absolutely no need for that with a digital signal. Of course there is a sync method with digital, but that could be contained in the data. When you have a digital signal, multiplexing of different data, including sync, is trivial. Read about the Real-time Transport Protocol for info on how this is done for audio and video over IP. https://en.wikipedia.org/wiki/Real-time_Transport_Protocol
HDMI is sending uncompressed frames
I have Rogers IPTV. The TV comes over IP via the cable. That would most certainly be compressed, as was the digital TV I had before it. There are HDMI cables between the Rogers box and my A/V receiver and from the receiver to my TV. Are those cables carrying uncompressed video? I doubt it, considering the signal Rogers distributes originated with ATSC from the broadcasters.

On Tue, Jun 20, 2023 at 10:40:31AM -0400, James Knott wrote:
Is this documented anywhere? Sure the audio is sent over the cable, but why should there be such a thing as a blanking interval on a digital system? The blanking interval was used to sync the camera and TV. There is absolutely no need for that with a digital signal. Of course there is a sync method with digital, but that could be contained in the data. When you have a digital signal, multiplexing of different data, including sync, is trivial.
Well, HDMI has the audio and video multiplexed in the same signal. HDMI 2.0 and older used 3 data links plus 1 clock link, while HDMI 2.1 uses 4 data links with embedded clocking at up to 12Gbps per link. https://www.fpga4fun.com/files/HDMI_Demystified_rev_1_02.pdf gives a nice explanation of how it worked in HDMI 1.3; 2.1 just got rid of the dedicated clock to free up a 4th signal pair. There was supposed to be a dual-link HDMI type B connector with 6 pairs instead of 3, but it seems to have never been used.
Read about Real Time Protocol for info on how this is done for audio and video over IP. https://en.wikipedia.org/wiki/Real-time_Transport_Protocol
Sure, but HDMI and compressed video over IP have nothing in common really. Vastly different bandwidths and purposes.
I have Rogers IPTV. The TV comes over IP via the cable. That would most certainly be compressed, as was the digital TV I had before it. There are HDMI cables between the Rogers box and my A/V receiver and from the receiver to my TV. Are those cables carrying uncompressed video? I doubt it, considering the signal Rogers distributes originated with ATSC from the broadcasters.
Absolutely it is uncompressed over HDMI. The signal from Rogers is compressed and encrypted; the Rogers box decodes that and sends out the raw video over HDMI (with HDCP protection of course). This is why HDMI is carrying 10.8Gbps or 18Gbps or even 48Gbps for the latest standard. Uncompressed video is huge. -- Len Sorensen

On 2023-06-20 10:50, Lennart Sorensen wrote:
https://www.fpga4fun.com/files/HDMI_Demystified_rev_1_02.pdf gives a nice explanation of how it worked in HDMI 1.3. 2.1 just got rid of the dedicated clock to free up a 4th signal pair.
Other than a small bit about lip sync, there is nothing about syncing the signal in that.
Sure but HDMI and compressed video of IP have nothing in common really. Vastly different bandwidths and purposes.
First off, that RTP article mentions video, not just audio. While the details may differ, the principles remain the same: the framing is embedded in the data. Don't confuse transport with signal.

In the case of my IPTV, the exact same signal is delivered to my TV as I would receive over the old digital system. And they both use HDMI to reach my TV.

By comparison, consider the audio in cell phones. Way back in the dark ages of "1G" phones, the signal was analog. Then came 2G, with a few different methods (CODECs) of converting the audio to a digital signal, with a major goal being to reduce the bandwidth, to the point where three or so digital calls would use the same amount of spectrum as one analog call. The difference between 2G and 3G, which used the GSM CODEC, is that with 2G the data was a continuous stream, but with 3G the exact same audio was transmitted in packets, though not over IP. Saving bandwidth was still a goal. Then, with 4G and lots of bandwidth available, the goals shifted from saving bandwidth to providing a better quality call. IP was now being used to carry the calls. Through all this, the goal remained the same: to carry a voice conversation. With the digital systems, the sync was carried along with the call data, even though different CODECs might have been used at different times.

Again, there is absolutely no need for blanking intervals with digital TV. I suspect Hugh's question arises because he is using an analog monitor, if I read his post correctly. Then analog framing, including the blanking interval, has to be created.

BTW, it is possible to have analog video without blanking intervals. Back in the 70s, I used to maintain some video terminals where the sync was fed directly to the monitor, instead of being combined with the video.

On 6/20/23 11:16, James Knott via talk wrote:
On 2023-06-20 10:50, Lennart Sorensen wrote:
https://www.fpga4fun.com/files/HDMI_Demystified_rev_1_02.pdf gives a nice explanation of how it worked in HDMI 1.3. 2.1 just got rid of the dedicated clock to free up a 4th signal pair.
Other than a small bit about lip sync, there is nothing about syncing the signal in that.
Sure but HDMI and compressed video of IP have nothing in common really. Vastly different bandwidths and purposes.
First off, that RTP article mentions video, not just audio. While the details may differ, the principles remain the same, that is the framing is embedded in the data. Don't confuse transport with signal.
In the case of my IPTV, the exact same signal is delivered to my TV, as I would receive over the old digital system. And they both use HDMI to reach my TV.
IPTV is not at all like HDMI. The giveaway is the IP in IPTV. HDMI does not require any of the overhead of IP because it is a direct connection, so there is no need for IP addresses, MAC addresses or ports.

IPTV has the video stream compressed using one complex codec or another, whereas HDMI has close to zero encoding so that it can run on fairly inexpensive hardware. The fact that most TVs nowadays have powerful processors has nothing to do with where HDMI came from: a way to display high-quality video, with encryption added to make it hard for people to easily decode the video stream and publish it on the internet.

The bit rate going over HDMI would be something like:
image height * image width * pixel size * frame rate.

So for an example: 2048 * 1024 * 24 * 60 = 3,019,898,880 bits/s. Add on to that the overhead for some number of audio channels.

-- 
Alvin Starr                   ||   land:  (647)478-6285
Netvel Inc.                   ||   Cell:  (416)806-0133
alvin@netvel.net              ||
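The arithmetic above can be sketched in a couple of lines (same numbers as the example, with no blanking, encoding, or audio overhead included):

```python
# Raw video bit rate per the formula above:
# height * width * bits-per-pixel * frame rate.
height, width = 1024, 2048
bits_per_pixel = 24          # 8 bits each for R, G and B
frames_per_second = 60

bit_rate = height * width * bits_per_pixel * frames_per_second
print(f"{bit_rate:,} bits/s")   # 3,019,898,880 bits/s
```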

| From: Alvin Starr via talk <talk@gtalug.org>
|
| The bit rate going over HDMI would be something like:
| Image height * Image width * pixel size * frame rate.
|
| So for an example: 2048*1024*24*60 = 3,019,898,880 bits/s
| Add on to that the overhead for some number of audio channels.

That's an odd resolution. Of course that makes your point clearer: it's just arithmetic.

But of course it is not. There is a certain amount of extra jiggery-pokery added. My first post on this topic included a link to a table of bandwidths, taking into account blanking regime. It also says how to adjust for sizes of different pixel encodings.
<https://en.wikipedia.org/wiki/Coordinated_Video_Timings#Bandwidth>

Is there a reason for them writing bits per colour as "bpc" instead of "b/c"? I think / is clearer than "p" in a unit.

On Tue, Jun 20, 2023 at 03:04:38PM -0400, D. Hugh Redelmeier via talk wrote:
That's an odd resolution. Of course that makes your point clearer: it's just arithmetic.
But of course it is not. There is a certain amount of extra jiggery-pokery added.
My first post on this topic included a link to a table of bandwidths, taking into account blanking regime. It also says how to adjust for sizes of different pixel encodings.
<https://en.wikipedia.org/wiki/Coordinated_Video_Timings#Bandwidth>
Is there a reason for them writing bits per colour as "bpc" instead of "b/c"? I think / is clearer than "p" in a unit.
b/c could be read as b divided by c. After all we use ppi for pixels per inch. -- Len Sorensen

| From: Lennart Sorensen via talk <talk@gtalug.org>
|
| b/c could be read as b divided by c. After all we use ppi for pixels
| per inch.

"per" *is* a division. Of units:

  10 b/c * 3 c/pixel = 30 b/pixel
  300 p/in * 8.5 in = 2550 p

This kind of operation on units is called dimensional analysis. It is a very powerful tool. Not just for physicists.
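The unit bookkeeping above can even be mechanized. Here is a toy sketch (the representation and the `mul` helper are my own invention, not any standard library): a quantity is a value plus a dict of unit exponents, multiplication adds exponents, and cancelled units drop out, so "per" behaves exactly like division.

```python
def mul(q1, q2):
    """Multiply (value, units) pairs; unit exponents add, zeros cancel."""
    v1, u1 = q1
    v2, u2 = q2
    units = dict(u1)
    for unit, exp in u2.items():
        units[unit] = units.get(unit, 0) + exp
    return (v1 * v2, {u: e for u, e in units.items() if e != 0})

# 10 b/c * 3 c/pixel = 30 b/pixel  (the c's cancel)
print(mul((10, {"b": 1, "c": -1}), (3, {"c": 1, "pixel": -1})))

# 300 p/in * 8.5 in = 2550 p  (the inches cancel)
print(mul((300, {"p": 1, "in": -1}), (8.5, {"in": 1})))
```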

On Tue, Jun 20, 2023 at 11:16:05AM -0400, James Knott via talk wrote:
Other than a small bit about lip sync, there is nothing about syncing the signal in that.
Well, https://www.youtube.com/watch?v=5acgSK0kWTE has a lot of details on how HDMI works. I don't think it says how audio and video are synced, although given they are being sent interleaved, it probably isn't very complicated.
First off, that RTP article mentions video, not just audio. While the details may differ, the principles remain the same, that is the framing is embedded in the data. Don't confuse transport with signal.
In the case of my IPTV, the exact same signal is delivered to my TV, as I would receive over the old digital system. And they both use HDMI to reach my TV.
The IPTV goes to some device that decodes it and converts it to uncompressed video frames and audio. HDMI just carries uncompressed video and audio (either LPCM or some other digital audio format) to the TV. Of course if the TV itself runs apps, the decoded video doesn't have to go over HDMI unless the TV internally is also using HDMI from the processor to the video handling (which is actually commonly how it is done).

If you have an ATSC tuner, the signal received is decoded and decompressed and again sent as uncompressed video to be displayed. HDMI has no involvement in MPEG or ATSC or IPTV or any other compressed video system. It only cares about RGB (or YUV) video data and audio and a few control signals.

For some reason HDMI (and the compatible DVI) decided to keep using CVT video signalling complete with blanking intervals, although using reduced blanking (CVT-RB and CVT-RBv2), and they put the audio and some other extra bits into that part of the signal. Maybe this was so analog displays could also work with DVI and HDMI?
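To get a feel for why the blanking slots have plenty of room for audio, here is a rough estimate for 1080p60. The 2200x1125 total raster is the standard CEA timing for that mode; the 4-bits-per-channel figure for data-island (TERC4) coding and the 8-channel 192 kHz 24-bit maximum LPCM format are my reading of HDMI descriptions, so treat the result as ballpark only.

```python
# Estimate the data capacity of the blanking intervals in a 1080p60 HDMI
# signal versus the bandwidth the heaviest PCM audio format would need.
active = 1920 * 1080                 # visible pixels per frame
total = 2200 * 1125                  # pixels per frame including blanking
blank_slots = (total - active) * 60  # blanked pixel slots per second

# Data islands use TERC4 coding: ~4 bits on each of 3 channels per slot
# (assumed figure; real capacity is lower due to packet/guard overhead).
island_capacity = blank_slots * 4 * 3            # bits/s
audio_rate = 8 * 192_000 * 24                    # 8ch, 192 kHz, 24-bit PCM

print(f"blanking capacity: ~{island_capacity / 1e6:.0f} Mbit/s")
print(f"audio needs:        {audio_rate / 1e6:.1f} Mbit/s")
```

Even ignoring overheads, the blanked slots offer far more raw capacity than the biggest audio format needs, which is consistent with audio riding in the blanking intervals.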
By comparison, consider the audio in cell phones. Way back in the dark ages of "1G" phones, the signal was analog. Then came 2G, with a few different methods (CODECs) of converting the audio to a digital signal, with a major goal being to reduce the bandwidth, to the point where three or so digital calls would use the same amount of spectrum as one analog. The difference between 2G and 3G, which used the GSM CODEC, is with 2G, the data was a continuous stream, but with 3G the exact same audio was transmitted in packets, though not over IP. Saving bandwidth was still a goal. Then, with 4g and lots of bandwidth available, the goals shifted from saving bandwidth to providing a better quality call. IP was now being used to carry the calls. Through all this, the goal remained the same, that is to carry a voice conversation. With the digital systems, the sync was carried along with the call data, even though different CODECs might have been used at different times.
Again, there is absolutely no need for blanking intervals with digital TV. I suspect Hugh's question arises because he is using an analog monitor, if I read his post correctly. Then analog framing, including blanking interval, has to be created.
BTW, it is possible to have analog video without blanking intervals. Back in the 70s, I used to maintain some video terminals where the sync was fed directly to the monitor, instead of being combined with the video.
My understanding was that a CRT needed a bit of blanking time in order to have time to change the magnetic field so the beam could start at the beginning of the next line. Whether you used composite sync or separate H and V sync didn't matter; it still needed the time. -- Len Sorensen

On 2023-06-20 15:06, Lennart Sorensen wrote:
My understanding was that on a CRT it needed a bit of blanking time in order to have time to change the magnetic field so the beam could start at the begining of the next line. Whether you used composite sync or seperate H and V sync didn't matter, it still needed the time.
Yep. Here's what Wikipedia says: "The VBI was originally needed because of the inductive inertia of the magnetic coils which deflect the electron beam vertically in a CRT <https://en.wikipedia.org/wiki/Cathode_ray_tube>; the magnetic field, and hence the position being drawn, cannot change instantly. Additionally, the speed of older circuits was limited. For horizontal deflection, there is also a pause between successive lines, to allow the beam to return from right to left, called the horizontal blanking interval <https://en.wikipedia.org/wiki/Horizontal_blanking_interval>. Modern CRT circuitry does not require such a long blanking interval, and thin panel displays <https://en.wikipedia.org/wiki/Thin_panel_display> require none, but the standards were established when the delay was needed (and to allow the continued use of older equipment). Blanking of a CRT may not be perfect due to equipment faults or brightness set very high; in this case a white retrace line shows on the screen, often alternating between fairly steep diagonals from right to left and less-steep diagonals back from left to right, starting in the lower right of the display." https://en.wikipedia.org/wiki/Vertical_blanking_interval#Vertical_blanking_i... It was also possible to display video on an oscilloscope, which used electrostatic deflection. The scope would just need a trigger pulse, which didn't have much width.

On Tue, 20 Jun 2023 at 10:40, James Knott via talk <talk@gtalug.org> wrote:
Is this documented anywhere? Sure the audio is sent over the cable, but why should there be such a thing as a blanking interval on a digital system?
https://en.wikipedia.org/wiki/Vertical_blanking_interval#Vertical_blanking_i... -- Scott

On Tue, Jun 20, 2023 at 11:24 AM Scott Allen via talk <talk@gtalug.org> wrote:
https://en.wikipedia.org/wiki/Vertical_blanking_interval#Vertical_blanking_i...
If one follows through that link, it seems we still have vblank because accessibility tools required it for timing and transmission of metadata. There's now the hacky solution of separate caption packets mixed in with the MPEG-2 video stream. There's never a clean upgrade path for anything.

| From: Scott Allen via talk <talk@gtalug.org>
|
| On Tue, 20 Jun 2023 at 10:40, James Knott via talk <talk@gtalug.org> wrote:
| > Is this documented anywhere? Sure the audio is sent over the cable, but
| > why should there be such a thing as a blanking interval on a digital
| > system?
|
| https://en.wikipedia.org/wiki/Vertical_blanking_interval#Vertical_blanking_i...

There is also a horizontal blanking interval.
<https://en.wikipedia.org/wiki/Horizontal_blanking_interval>

On the Atari ST, in the first wave of consumer computers with a frame buffer, the number of colours on the screen was limited to 16. A paint program called Spectrum 512 could change the colour look-up table during any HBI and thus show 512 different colours on the screen (just not on the same line).

More extreme: the Atari 2600 had only 128 bytes of RAM! That doesn't seem enough to hold even a single line of the display.
<https://en.wikipedia.org/wiki/Atari_2600_hardware>
participants (6)
- Alvin Starr
- D. Hugh Redelmeier
- James Knott
- Lennart Sorensen
- Scott Allen
- Stewart Russell