They are different, as you and I have both described, but when the sink device can support different streams it has a significant advantage: it can automatically sink frames from the broadcasting device, and it removes the overhead of decompressing and then recompressing with practically assured data loss. It is yet another example of how patents, especially software patents, work against the original intent of the patent process.
The video cable does a similar trick with how it supports color. This is why S-Video was superior to composite video until component came along. S-Video split the intensity (luma) and color (chroma) into two signals, and then component split the color further into a blue difference and a red difference. If you only wanted black and white, you didn't need the color signals, and the image would degrade to a monochrome representation.
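As a rough sketch of the arithmetic behind that split, here is the standard BT.601 conversion between RGB and one luma signal plus two color-difference signals, the same idea component video carries on its three wires (function names are mine, values assumed in the 0.0 to 1.0 range):

```python
# BT.601 analog YPbPr: one luma signal (Y) plus two color-difference
# signals (Pb = scaled blue minus luma, Pr = scaled red minus luma).
def rgb_to_ypbpr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: the black-and-white picture
    pb = 0.564 * (b - y)                   # blue difference
    pr = 0.713 * (r - y)                   # red difference
    return y, pb, pr

def ypbpr_to_rgb(y, pb, pr):
    r = y + pr / 0.713
    b = y + pb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # solve the luma equation for green
    return r, g, b
```

Note that a pure gray input produces zero on both difference signals, which is exactly why a monochrome display can use Y alone and simply ignore Pb and Pr.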
Composite video, with only one video signal wire, was similar to what was received over the antenna: the broadcast signal separated from the carrier, with the audio sub-bands removed. It was the video signal with the color signal still combined. The progression from Antenna -> Composite -> S-Video -> Component -> DVI-I -> DVI-D -> HDMI -> DisplayPort has been an interesting one. The changes in the digital realm have been less about image quality (a digital signal is either read correctly or it isn't) and more about bandwidth and how much data can be sent, i.e. resolution and frame rate. The first four transitions in particular had a significant impact on image quality.