The purpose of this article is to provide an understanding of what is needed to successfully provide a suitable display for your product. The focus will be on mainstream, or relatively recent, graphical video display systems, with historical references to offer continuity when appropriate.
There seems to be quite a bit of confusion regarding the topic of providing a video display for a product, such as deciding whether an embedded processor board’s video output is compatible with a chosen display.
Much of this confusion surrounds the terminology: the intermixing of related, but distinct, terms such as DVI, HDMI versus DisplayPort, or WUXGA versus 1920 x 1200.

This article is a guest post by Shawn Litingtun. Shawn is an extraordinary electronics designer. He has an insanely comprehensive understanding of many topics in electronics and has extensive knowledge of electrical certifications. He is also one of the experts inside the Hardware Academy, available to help you with your product.
We will start with a brief description of how a progressive display works. For completeness, please note that there is another display mode called interlaced mode, but this will not be discussed here.
Sub-pixels, grayscales, and resolution
The description of how a progressive display works relies on three basic terms: sub-pixels, grayscales, and resolution.
1. Sub-pixels. A graphics display consists of a two-dimensional array of individually addressable pixels, each consisting of three sub-pixels. These sub-pixels can display one of three primary colors: Red, Green or Blue.
2. Grayscales. Each one of these primary colors can be displayed at various intensities, which are referred to as grayscales. Typically, each sub-pixel has 256 grayscales, thus requiring 8 bits to be properly represented.
3. Resolution. The resolution of the display is given by the number of pixels in each row times the number of rows. Thus, a 1920 x 1200 resolution display will have 1920 pixels per row and 1200 rows, for a total of 2,304,000 pixels (i.e. 1920 pixels per row x 1200 rows).
Forming an image on the display
An image is formed on the display by sequentially displaying each pixel. This starts at the pixel located in the top left-hand corner of the topmost row and progresses from left to right. After all the pixels in a row have been sent, the pixels in the next row (moving from top to bottom on the screen) are sent, again from left to right.
Note that all pixels that have been sent remain at whatever levels they were sent at. After all rows have been completed, a frame is said to have been sent, and the screen displays a full image.
The rate at which the frames are displayed on the screen is known as the refresh rate, or frame rate. By updating the frames at fast enough rates, fluid motion suitable for games and movies can be achieved without any display artifact.
Note that even if the displayed image is static, it is still being refreshed at the given frame rate. In this case, however, each subsequent frame is identical to the previous frame.
For most people, a 60Hz frame rate is sufficient for watching movies. Gamers often prefer 72Hz, 120Hz, or even higher frame rates. Whatever the frame rate, though, it’s important to make sure that the display can accommodate the frame rate of the graphics adapter.
Sending the display data to the display panel
Since the display data must be sent to the display panel in the proper sequence, some form of synchronization is required. Figure 1 (below) shows how a single row, or line, of video is sent to the display at the display panel interface.
1. Each line starts at the rising edge of the Horizontal Sync signal, or H-Sync, and ends at its falling edge.
2. The actual display data is sampled after the rising edge of the Data Enable signal. The rising edge of the Data Enable signal follows the H-Sync rising edge after a period called the Horizontal Back Porch, or H Back Porch.
3. Data sampling ends at the falling edge of the Data Enable line, after which there is a period known as the Horizontal Front Porch, or H Front Porch, followed by the end of the H-Sync.
4. The time interval between successive active portions of the Data Enable signal is known as the Blanking Interval. This is a carryover from the days when displays were Cathode Ray Tubes (CRTs) and the deflection system needed time to retrace back to the beginning of the next line.
The frame timing is performed by the Vertical Sync, or V-Sync, in a similar fashion to how the H-Sync sends a video line. Following the V-Sync is a V Back Porch and a V Data Enable.
After a given number of H-Syncs, corresponding to the display vertical resolution, have been received, the vertical period ends, and a full frame has now been received. The V-sync frequency is thus the frame rate of the display.
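Putting the horizontal and vertical timing together: the pixel clock a panel needs is the total pixel count per line (active plus blanking) times the total line count per frame times the frame rate. The sketch below uses illustrative porch and sync widths for 1920 x 1200 at 60Hz, not values from any particular panel datasheet:

```python
# Sketch: pixel clock implied by resolution plus blanking intervals.
# The porch/sync widths below are illustrative, not from a real panel datasheet.

H_ACTIVE, V_ACTIVE = 1920, 1200   # visible pixels per line, visible lines
H_BLANK = 48 + 32 + 80            # H front porch + sync width + back porch (pixels)
V_BLANK = 3 + 6 + 26              # V front porch + sync width + back porch (lines)
FRAME_RATE = 60                   # Hz (the V-Sync frequency)

total_pixels_per_line = H_ACTIVE + H_BLANK
total_lines_per_frame = V_ACTIVE + V_BLANK
pixel_clock_hz = total_pixels_per_line * total_lines_per_frame * FRAME_RATE

print(f"Pixel clock: {pixel_clock_hz / 1e6:.2f} MHz")  # → Pixel clock: 154.13 MHz
```

Note how the blanking intervals make the pixel clock noticeably higher than the naive "visible pixels times frame rate" figure.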
Getting the video data to the display
Still assuming a 1920 x 1200 video resolution at a frame rate of 60Hz and 8 bits per color, which is not top of the line by any means, it is easy to calculate the transmission rate of this video signal:
1. Each pixel requires:
3 x 8 bits = 24 bits
2. The total number of pixels per frame is:
1920 pixels per row x 1200 rows = 2,304,000 pixels per frame
3. Bit rate of this video signal is:
2,304,000 pixels per frame x 24 bits per pixel x 60 frames per second =
3,317,760,000 bits per second, or about 3.32 Gbps
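The arithmetic in the steps above, as a quick Python check:

```python
# Bit-rate calculation for 1920 x 1200 at 60Hz, 8 bits per color.
bits_per_pixel = 3 * 8                             # three sub-pixels x 8 bits each
pixels_per_frame = 1920 * 1200                     # pixels per row x rows
bit_rate = pixels_per_frame * bits_per_pixel * 60  # x frames per second

print(bit_rate)  # 3317760000 bits per second, about 3.32 Gbps
```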
Higher resolution displays require substantially higher bit rates. Sending these signals reliably from the graphics chip to the display requires some considerations.
The simplest way of sending this signal to the display panel is to send it the way the panel expects it. However, this would require 24 signal lines (eight each for the Red, Green and Blue sub-pixels) in addition to the other timing signals.
Some panels even require 48 signal lines. These panels accept two pixels at a time to cut down the bandwidth required on each signal line, which relaxes the rise- and fall-time requirements on the pixel data.
Obviously, having a connecting cable with so many conductors between the graphics chip output and the display is not very practical. One way to reduce this number of signal lines is to send the data serially.
For example, each group of eight data lines can be sent serially over a single lane. The receiver end on the display side can then simply convert this serial stream back into eight lines to feed the display panel. Incidentally, the term generally used for such types of operation is called SerDes, for Serializer-Deserializer.
The issue here is that while it does reduce the number of signal lines, it comes at the expense of a higher data transmission rate. Instead of clocking each bit at the pixel rate, the serial clock runs at least eight times faster in order to send eight bits in the time it used to take to send just one.
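The parallel-to-serial round trip can be sketched in a few lines of toy Python; this illustrates the SerDes idea only, not any particular link protocol:

```python
# Toy serializer/deserializer: eight parallel bits become one serial stream
# and are recovered at the other end. Real SerDes hardware does this with
# shift registers clocked at (at least) 8x the pixel clock.

def serialize(byte_value):
    """Return the 8 bits of one parallel word as a serial stream, MSB first."""
    return [(byte_value >> i) & 1 for i in range(7, -1, -1)]

def deserialize(bits):
    """Reassemble 8 serial bits back into a parallel word."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

word = 0b10110010
stream = serialize(word)            # one lane instead of eight lines
assert deserialize(stream) == word  # the receiver recovers the original word
```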
This is where the various video interface standards come into play, such as DVI, HDMI, DisplayPort, and others.
Serial Video Transmission Considerations
Single-ended serial transmission
The easiest way to send data serially over a line is the single-ended mode, shown in Figure 2A (below). In this mode the receiver is simply a threshold detector.
Any signal level above the threshold is read as one logic level. If the signal level is below the threshold, it is read as the other logic level. This is how UART or SPI signals are sent, for example.
However, this can lead to trouble if the connecting cable is long because the signal wire can pick up noise (shown conceptually in Figure 2B). The noise is random, meaning it has no relationship with the signal being sent.
Now imagine that the noise is opposing the signal when it is at its high level. This might lower the signal level read by the receiver to an extent that it is lower than its threshold. The high level will then be interpreted as a low level.
Conversely, noise may be adding to the signal when it is at its low level, in which case the noise might be enough to cause the receiver to register this low level as a logic high.
One way to overcome this is to increase the amplitude of the signal level so that the effect of the noise source is swamped out. However, because digital circuits are moving towards lower operating voltages to decrease power consumption, this approach is not ideal.
A better approach is to use differential signaling. Here, both sides of the signal source are sent over a twisted pair and the receiver is a differential input threshold detector (Figure 3 below).
From the sender’s side, a logic 1 is sent as a signal where the D+ line is at a higher level than the D- line. Conversely, a logic 0 causes the D- line to be at a higher level than the D+ line. The receiver detects a logic 1 if the D+ minus D- exceeds the threshold, and a logic 0 if D- minus D+ exceeds the threshold value.
The key to this differential signaling method of data transmission is the use of twisted pairs of wires. Since the wires are twisted, and hence physically close together, any noise source is expected to affect both wires equally.
Thus, the noise will equally increase or decrease the absolute level of the signal pair. In other words, the D+ and D- line signal levels will both increase or decrease by the same amount and the difference between them stays the same.
Since the receiver only responds to the difference between the D+ and D- levels, the effect of the noise source is effectively canceled out.
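Here is a small numeric illustration of the common-mode rejection just described. The voltage levels are roughly LVDS-like and the threshold value is arbitrary:

```python
# Why differential signaling cancels common-mode noise: the same noise offset
# lands on both wires of the pair, and the receiver only looks at the
# difference, so the logic decision is unaffected.

def receive(d_plus, d_minus, threshold=0.1):
    """Differential threshold detector: returns 1, 0, or None (indeterminate)."""
    diff = d_plus - d_minus
    if diff > threshold:
        return 1
    if diff < -threshold:
        return 0
    return None

# Logic 1 sent as D+ = 1.25 V, D- = 0.9 V (illustrative, roughly LVDS-like)
noise = 0.8  # a large disturbance picked up equally by both wires
assert receive(1.25 + noise, 0.9 + noise) == 1  # still read correctly
assert receive(0.9 + noise, 1.25 + noise) == 0  # logic 0 also unaffected
```

The same 0.8 V of noise on a single-ended line with a mid-supply threshold would have flipped the received bit.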
Due to this noise cancellation, there is no need to have high voltage levels for the data transmission and low voltages can be used. This is essentially what the Low Voltage Differential Signaling, or LVDS, transmission standard is.
In this case, the voltage differential between D+ and D- is about 350mV. LVDS is still used today in some laptops and other applications.
TMDS (Transition Minimized Differential Signaling)
One of the issues with LVDS is that it simply describes the physical link aspects between sender and receiver. It does not define any encoding of the signal being sent.
Newer video interfaces use another method that is essentially an enhancement of LVDS signaling. This method is called Transition Minimized Differential Signaling, or TMDS.
In this method, an 8-bit data byte is encoded as 10 bits, a scheme known as 8b/10b encoding. (TMDS uses its own 8b/10b code, distinct from the ANSI 8b/10b code found in some other serial standards.)
10 bits can represent 1024 combinations as opposed to 256 combinations for 8 bits. The extra combinations are used for a variety of purposes.
The main purpose of TMDS is to minimize the number of data level transitions while keeping a good DC balance. Minimizing the number of transitions reduces electromagnetic interference and reduces the bandwidth required to transmit the data. Having good DC balance helps set the proper level for the threshold detector that detects the 1’s and 0’s.
Consider this byte: 10101010. There is a transition from 0 to 1 or 1 to 0 at every bit boundary. That’s a lot of bit transitions. However, the number of 1’s and 0’s is the same, so when averaged over time the signal sits exactly at its midpoint. Thus, it has good DC balance.
If this byte were encoded in 10 bits as 0011001100, there would be fewer data transitions. If this were a square wave, it would have a lower frequency than the previous one; hence less bandwidth is required.
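A few lines of Python can verify the transition counts and DC balance of the two example words above. This is just illustrative counting, not the actual TMDS encoding algorithm:

```python
# Count level transitions and DC balance for the two example words above.

def transitions(bits):
    """Number of 0->1 or 1->0 changes between adjacent bits."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

def dc_balance(bits):
    """Count of 1s minus count of 0s; zero means perfectly DC balanced."""
    return bits.count("1") - bits.count("0")

raw = "10101010"
encoded = "0011001100"
print(transitions(raw), dc_balance(raw))          # 7 transitions, perfectly balanced
print(transitions(encoded), dc_balance(encoded))  # 4 transitions, slightly more 0s
```

A real TMDS encoder tracks a running disparity across successive words so that small imbalances like this cancel out over time.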
In an actual TMDS link, the 256 possible 8-bit values are spread over 460 of the 1024 available 10-bit combinations, and 558 combinations are unused or forbidden.
The remaining six combinations are used to send control signals, including the H- and V-Syncs and, in the case of High Definition Multimedia Interface (HDMI), audio signals. This standard and others will be described in the next sections.
The following sections provide information to help you understand the various multimedia interface standards. I hope the background provided in the previous sections will clarify these standards without having to go into the specific details of every version of every standard commonly used today.
Video standards started when video signals were still analog, and monitors were CRT’s. Where appropriate, references to these early standards will be mentioned to provide some continuity, but the focus will mainly be on digital video.
A video standard defines more than just the transmission method of the video signals. It also defines the types of connectors used, the types of cable used, and the various resolutions achievable. Furthermore, video standards support additional features, such as carrying audio and providing content protection.
Given that there are many versions of each standard, the total number of combinations becomes very large indeed. The following standards will be covered here: DVI, HDMI, and DisplayPort.
DVI (Digital Visual Interface)
Digital Visual Interface, or DVI, is a standard that is still being used today, but it’s likely phasing out in favor of newer standards. There are three types of DVI connectors: DVI-A, DVI-D and DVI-I.
DVI-A provides analog video signals that are compatible with analog monitors, typically CRTs. DVI-D sends digital video using TMDS and is compatible with certain types of digital monitors such as LCDs. DVI-I contains both analog and digital video signals for use with either analog or digital monitors using a small passive adapter at the monitor end. However, DVI-D is what most people consider to be DVI.
In a DVI link, the RGB video and pixel clocks are transmitted using TMDS. There is also an additional I2C communication channel, called a Display Data Channel (or DDC), of which several backward compatible revisions exist.
The DDC allows two-way communication between the graphics adapter and the monitor, exchanging Extended Display Identification Data (or EDID). This, in turn, allows the two sides to determine their capabilities.
Thus, the monitor can tell the graphics adapter about its resolution, maximum refresh rates, color depth, and its unique identification number. Based on this information, the graphics adapter can then allow the user to choose among a set of display modes that are supported by both the display adapter and the monitor.
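As an illustration of what travels over the DDC, here is a sketch of decoding the first bytes of an EDID block. The sample bytes are fabricated; a real parser would also verify the block checksum and walk the timing descriptors:

```python
# Sketch: decode the start of an EDID block as carried over the DDC channel.
# Only the header check and manufacturer ID are shown here.

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def manufacturer_id(edid):
    """Decode the 3-letter PnP manufacturer ID from EDID bytes 8-9."""
    if edid[0:8] != EDID_HEADER:
        raise ValueError("not a valid EDID block")
    word = (edid[8] << 8) | edid[9]  # big-endian 16-bit field
    # Three 5-bit codes, where 1 = 'A', 2 = 'B', ...
    letters = [(word >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord("A") + code - 1) for code in letters)

# Fabricated example: valid header plus a manufacturer word encoding "ABC"
sample = EDID_HEADER + bytes([0x04, 0x43])
print(manufacturer_id(sample))  # ABC
```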
Single and dual link DVI-D
DVI-D has two flavors, single link and dual link. Simply, single link DVI has one set of RGB digital links. Dual link DVI has two Red, two Green and two Blue links, which effectively double the bandwidth and allow for higher resolution monitors.
Figure 4 (below) shows the pin arrangement of both single and dual links and a photograph of a dual-link plug. Table 1 shows standard resolutions and refresh rates.
Figure 4 – DVI connectors
Some DVI standard resolutions, common names and refresh rates:

**Single link**

| Common name | Resolution | Refresh rate (Hz) |
|-------------|-------------|-------------------|
| SXGA | 1280 × 1024 | 85 |
| HDTV | 1920 × 1080 | 60 |
| UXGA | 1600 × 1200 | 60 |
| WUXGA | 1920 × 1200 | 60 |
| WQXGA | 2560 × 1600 | 30 |

**Dual link**

| Common name | Resolution | Refresh rate (Hz) |
|-------------|-------------|-------------------|
| QXGA | 2048 × 1536 | 72 |
| HDTV | 1920 × 1080 | 120 |
| WUXGA | 1920 × 1200 | 120 |
| WQXGA | 2560 × 1600 | 60 |

Table 1 – Some typical DVI resolutions
HDMI (High Definition Multimedia Interface)
High Definition Multimedia Interface, or HDMI, is in some ways an enhanced DVI. DVI is strictly used to transmit video from the sender, or graphics card, to the display. HDMI does this, but with added features.
As implied by the name, HDMI is a multimedia interface. Therefore, at this point it’s worthwhile to make the distinction between a multimedia display and a video monitor.
A typical video display is a computer monitor that simply displays a video image. A multimedia display, on the other hand, is more like an HDTV: an audiovisual system with additional smart features.
In HDMI, the video stream is also transmitted as TMDS streams. Additionally, HDMI can read the display EDID and is compatible with DVI. A simple passive converter can be used to convert the video signals between DVI and HDMI. There is no sound, however, since DVI does not carry sound signals.
On top of the video, which is essentially the same as DVI, HDMI also has the following extra capabilities, some of which are only available in newer versions of the standard. These include multichannel audio, content protection, consumer electronic control, an audio return channel, and an HDMI Ethernet channel.
1. Multichannel audio
Digital audio data is sent along the same TMDS lanes that carry the video signals during the blanking periods when no video data is being sent.
2. Content protection
High-bandwidth Digital Content Protection, or HDCP, is used in some multimedia applications to ensure the receiver is authorized to display the video it receives. Basically, the DDC channel is used to exchange secret keys, which are then used at the receiver to decode the pre-encoded video sent by the source.
3. Consumer Electronic Control (CEC)
This is a separate channel that uses a one-wire transmission protocol to provide a way for the source to remotely control the receiver. CEC is available from HDMI 1.2A and up.
4. Audio Return Channel (ARC)
This is again a separate channel that allows the receiver to transmit audio either back to the source or to another piece of equipment, such as a sound bar.
For example, the source, say a Blu-ray player, connects to the sound system through HDMI with ARC. This, in turn, connects to the TV. The TV extracts the video and audio and then sends the audio back to the sound system without having to use a separate audio cable. ARC is only available with HDMI 1.4 and up.
5. HDMI Ethernet Channel (HEC)
Only available with HDMI 1.4 and up, an Ethernet channel is also included in the standard. This simplifies the wiring for systems that need wired Ethernet connections.
Common current HDMI cables
The common current HDMI cables are shown in Figure 5. They are similar except for their overall sizes. Table 2 shows the maximum resolution and number of audio channels available for major HDMI versions up to HDMI 2.0.
Figure 5 – Common HDMI cables
| HDMI version | Maximum resolution | Number of audio channels |
|--------------|--------------------|--------------------------|
| 1.0 | 1600 × 1200 at 60Hz | 8 |
| 1.1 | 1600 × 1200 at 60Hz | 8 |
| 1.2 | 1600 × 1200 at 60Hz | 8 |
| 1.3 | 2048 × 1536 at 75Hz | 8 |
| 1.4 | 4096 × 2160 at 24Hz | 8 |
| 2.0 | 4096 × 2160 at 60Hz | 32 |

Table 2 – Resolution and number of audio channels for HDMI versions up to 2.0
HDMI 2.1 is the latest version of HDMI, finalized in November 2017. It makes significant changes over previous HDMI versions in almost every aspect.
It also adds several new features, including display stream compression for some of the higher resolutions. However, its adoption is not yet widespread.
| Name | Maximum resolution | Refresh rates (Hz) |
|------|--------------------|--------------------|
| 4K | 3840 × 2160 | 48/50/60/100/120 |
| 5K | 5120 × 2160 | 48/50/60/100/120 |
| 8K | 7680 × 4320 | 48/50/60/100\*/120\* |
| 10K | 10240 × 4320 | 48\*/50\*/60\*/100\*/120\* |

Table 3 – HDMI 2.1 display resolutions

\* Requires display stream compression
DisplayPort (DP)
DisplayPort, or DP, is a competing standard to HDMI. Both DP and HDMI have certain features that the other lacks. Therefore, for some applications, DP is arguably better, while for others HDMI is preferred.
DP, for example, does not have Ethernet or ARC. However, a single DP can drive multiple monitors. In addition, most DP sources support DP Dual Mode. Dual mode allows the DP source to convert to HDMI using a simple passive adapter. An HDMI source, however, cannot be converted to DP without the use of an active converter.
The main distinguishing feature of DP is that a DP link does not actually send raw video and audio data. Instead, it sends these as data packets in a fashion not unlike Ethernet or USB. As a result, it does not have a pixel clock.
The clock required to sample the data bits is embedded in the data stream itself. This is similar to the way that USB, for example, does not have a separate clock signal, and it’s what allows DP to drive multiple monitors.
For example, the source device sends the data stream to the first display, known as a branch device. This may then send it down the line to the next branch device that, in turn, can send it to the last display, or sink device. That way, a virtual channel is established between the source and the sink at the end of this daisy chain.
At the sink device end, the data packets are then reconstituted, generating the pixel data, pixel clock, and syncs, as well as the audio signals.
So, from the point of view of the end user, the source is sending AV signals and the sink is displaying the image and playing the sound, much like any of the other standards.
Figure 6 (below) shows full size and mini size DP plugs. It should be noted that there is also a DP Alt Mode that uses USB-C as the physical connection.
Figure 6 – DP plug types
Table 4 (below) shows the possible maximum resolutions achievable with the various versions of DP. Note that the newest DP versions are very recent (DP 2.0, which introduced the highest rates listed, was finalized in June 2019). As of now they are just standards with no wide support in commercially available hardware.
| DP version | Common name | Maximum resolution |
|------------|-------------|--------------------|
| 1.0 | 4K | 3840 × 2160 at 30Hz |
| 1.1 | 4K | 3840 × 2160 at 30Hz |
| 1.2 | 4K, 5K | 3840 × 2160 at 60Hz; 5120 × 2160 at 30Hz |
| 1.3 | 4K, 8K | 3840 × 2160 at 60Hz; 7680 × 4320 at 30Hz |
| 1.4 | 4K, 8K | 3840 × 2160 at 120Hz; 7680 × 4320 at 60Hz |
| 2.1 | 10K\*, 16K\* | 10240 × 4320 at 120Hz; 15360 × 8640 at 60Hz |

Table 4 – Major DP versions and their supported resolutions

\* Single display with compression
Selecting the Best Display Standard
Selecting the best display standard is not always an option due to already-installed legacy hardware. However, this section offers some general comments on selecting the best display system when a choice is available.
In general, for embedded applications that require a graphics display, a very high resolution is not an overwhelming requirement. Also, the distance between the video source and the display can be very short if they are both in the same enclosure.
In such cases, a direct video interface using a short flex PCB is quite acceptable. Alternatively, most embedded microprocessor boards, such as the Raspberry Pi and its clones and alternatives, provide native HDMI video output. The choice here is obvious.
For most consumer applications, HDMI is preferred simply because it is more prevalent. All modern HDTVs, Blu-ray players, cable boxes, and Android boxes have HDMI.
PC’s and high-end computing are where the lines get blurred. Most monitors have HDMI inputs to be able to interface with consumer AV equipment, but they also have DP interfaces for graphics cards.
High-end graphics cards seem to be heading toward multiple DisplayPort interfaces, with a few HDMI ports to connect to consumer AV equipment.
Much confusion exists surrounding the topic of providing a video display for a product. Specifically, much of this confusion stems from terminology that is often, but erroneously, used interchangeably.
Throughout this article I delineated these different terms and provided information on the relatively recent, or mainstream, graphic video display systems.
It is my hope that this information has provided you with a clear understanding of what’s needed to successfully provide the best video display for your product.