High-Speed Video Bus Battle in Portable Designs
The parallel bus ruled the world in the past. Most of today's video architectures still use low serialization density and keep the pixel clock in parallel with the data. That's not going to work any more.
During the 1980s, I vividly remember my friend sketching his first computer graphic, an image of a Marlboro cigarette box, on a Commodore 64 screen. Using his DOS operating system, he programmed a software routine that wrote the color value and address of every pixel and pixel field to the CRT screen. It took hours to finish the red, black and white image.
Amazing how far we've come! Today's images are designed by graphical artists or laymen with no understanding of how to push a pixel to its destination. Display devices are known not only for great electronics, but also for their appealing aesthetic design and portability. Digital display technology has brought colorful images everywhere, currently pushing a mind-boggling 33 billion bits/sec through the video pipe in the living room. I for one am glad that the days of cigarette smoke and DOS images are over!
Video in Mobile Products – A Brief History of How We Got Here
The new world of personal computing became possible through advances in digital processing, which pushed the need for fat data transfer pipelines. When CRT screens dominated display technology, video data was mainly encoded into analog signals and transported fairly well in an impedance-controlled environment. Analog displays, however, are no friend of portable electronics. The true leap in bringing video into portable designs came with the availability of liquid crystal displays, and the analog video interface turned digital. For small screen resolutions, the CPU interface is the most common solution. It is simply a parallel data bus from the video source to the display and can be driven like a memory bus. A local frame buffer inside the display allows the use of fairly slow microprocessors.
The next generation of display technology brought color to the display, which demanded an even faster data pipe. Combined with shrinking mobile phone designs, the display became an adaptable and attractive design component. Fewer and faster wires were needed to connect the processor to a swivel display. At this point several companies introduced data serialization concepts to overcome the bottleneck. Among them are National Semiconductor’s MPL technology and Fairchild’s μSerdes technology. The basic concept is a discrete transmitter (serializer) located near the graphic source. A discrete receiver (de-serializer) is located near the display panel and often mounted directly on the flexible printed circuit (FPC) cable. The FPC connects the main computing board with the display panel. Target resolution for such a system is up to QVGA with color resolution not exceeding 16-bit/pixel.
With advanced display technology, higher display resolutions with more vivid colors became possible. Display resolutions of two to six times that of QVGA, with up to 24-bit/pixel color resolution, again required an increase in data throughput. The local display frame buffer at this point became so large and costly that the CPU interface was replaced with an RGB video interface already known from notebook PCs. However, the standby and in-operation times of a mobile phone need to be much longer than those of a notebook. This demands more power-efficient solutions than notebook technology provides.
Figure 1: Example of a smartphone using a discrete serializer (TX)
and de-serializer (RX) to reduce the amount of cabling across the hinge
from 28 signals to only two differential data pairs.
To eliminate this bottleneck, Texas Instruments introduced FlatLink3G technology into its OMAP application processor platform. Stand-alone transmitter and receiver ICs were also released. The technology was developed with the support of various display driver and panel design houses. Other companies approached this problem in similar ways, such as Qualcomm with its mobile display digital interface (MDDI) technology, which the Video Electronics Standards Association (VESA) later adopted. Maxim pursued a stand-alone bridge solution that reduces the cabling to a single wire by embedding the clock into the data. Existing serializer solutions with a CPU interface also started to offer the RGB video interface.
Ultimately, mobile equipment designers want to see a path that provides for integrating the transmitter inside the graphic engine and the receiver inside the display (Figure 1).
Only a few solutions, such as MDDI and FlatLink3G, truly enable this integration. Several competing concepts use complicated analog design techniques (such as MPL). While they can reduce power consumption, they are very difficult to integrate into standard CMOS transmitter technology or high-voltage display driver technology.
With all these available technologies a new problem came into existence: how can a system designer select components from different manufacturers and interconnect them with each other? A solution that would bring together all these technologies was needed. To resolve this issue, the Mobile Industry Processor Interface (MIPI) alliance, which includes most leading companies in the mobile industry, developed the display serial interface (DSI). This technology interconnects the graphic engine to the display within mobile products and integrates the advantages of both the CPU and RGB video interfaces. DSI is very powerful through its use of data packetization, and it facilitates integrating the transmitter inside the application processor and the DSI receiver into the display driver. However, discrete bridge solutions with DSI are suboptimal: a packet engine is costly and consumes additional power. Proprietary alternatives such as FlatLink3G have a competitive advantage here, and additionally require no software.
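To illustrate the packetization idea, here is a minimal Python sketch of a DSI-style long packet. The field layout (data ID, 16-bit word count, ECC byte, payload, checksum) follows the MIPI DSI long-packet format, but the ECC and CRC calculations are omitted and stubbed with placeholder zeros, and the data ID value is only an example.

```python
def dsi_long_packet(data_id, payload):
    """Sketch of a DSI-style long packet: 4-byte header (data ID,
    16-bit word count LSB-first, ECC), payload, 2-byte checksum.
    The ECC and CRC fields are stubbed with zeros here."""
    wc = len(payload)
    header = bytes([data_id, wc & 0xFF, (wc >> 8) & 0xFF, 0x00])  # 0x00 = ECC stub
    footer = bytes([0x00, 0x00])                                  # CRC stub
    return header + bytes(payload) + footer

# A tiny packed-pixel payload (0x3E used only as an example data ID)
pkt = dsi_long_packet(0x3E, [0xFF, 0x80, 0x00])
```

Because every transfer is a self-describing packet, the same serial lanes can carry pixel data, commands and register accesses, which is what lets one DSI link replace both the CPU and RGB interfaces.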
IT Products and Video
Figure 2: Pixel data flow between a camera sensor, an application
processor and the display
Most camera and display interfaces transfer pixel data in the RGB format. Figure 2 shows the pixel data flow from a camera sensor to an application processor, and from the processor to the display. The camera sensor outputs 10 bits of raw data per pixel, where each pixel carries the information of one color component only. The video engine digitally recovers the true 30-bit RGB color value of each pixel using the color information of adjacent pixels. In contrast, the display output path transfers all three color components for each pixel in parallel. The 24-bit output value for each pixel represents 8-bit data for the R, G and B color components.
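The packing and color-recovery steps above can be sketched in a few lines of Python. These helpers are illustrative only; a real video engine interpolates across many neighboring pixels rather than a single 2x2 Bayer tile, and the function names are ours.

```python
def pack_rgb24(r, g, b):
    """Pack three 8-bit color components into one 24-bit pixel value."""
    return (r << 16) | (g << 8) | b

def demosaic_rggb_tile(r, g1, g2, b):
    """Toy color recovery: combine one 2x2 RGGB Bayer tile of 10-bit
    raw samples into a single 30-bit RGB value (10 bits per component).
    Real video engines interpolate across many neighboring pixels."""
    g = (g1 + g2) // 2      # average the two green samples
    return (r << 20) | (g << 10) | b

pack_rgb24(255, 128, 0)                  # -> 0xFF8000
demosaic_rggb_tile(1023, 512, 512, 0)    # full red, mid green, no blue
```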
Processor and ASIC vendors are constantly challenged to control the device pin-count. The pin reduction due to serial video interconnects is very attractive. Intel first replaced the GPU output parallel bus with DVO outputs, reducing the bus width by nearly 50 percent. Later Intel offered SDVO, a true serial interface with four differential lines only.
One significant bottleneck for the graphics industry is the display panel input. LVDS serializers with a 7:1 serialization ratio are found today in nearly every large graphic panel (refer to TI's FlatLink or National Semiconductor's FPD-Link). Notebook display panels come mainly with a color resolution of 18 bits per pixel. The data and three additional synchronization signals are transferred to the panel using three differential data lines and one clock line. Monitor and TV panels require 24 bits, 30 bits, and even up to 48 bits of color resolution per pixel. The same 7:1 LVDS serialization technology is often used; the number of LVDS channels increases from four differential pairs to five, six, or seven accordingly.
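The lane-count arithmetic behind the 7:1 scheme is simple enough to sketch. The assumption in this fragment (function names are ours) is that each data pair carries seven bits per pixel clock and that three sync signals ride alongside the color bits:

```python
import math

def lvds_data_pairs(color_bits, sync_bits=3):
    """Differential data pairs needed by a 7:1 LVDS link (clock pair
    excluded): each pair carries 7 bits per pixel clock period."""
    return math.ceil((color_bits + sync_bits) / 7)

def lvds_pair_rate_mbps(pixel_clock_mhz):
    """Serial bit rate per data pair: 7 bits per pixel clock."""
    return pixel_clock_mhz * 7

lvds_data_pairs(18)         # 18 + 3 sync = 21 bits -> 3 pairs (+ 1 clock pair)
lvds_data_pairs(24)         # 27 bits -> 4 pairs
lvds_data_pairs(30)         # 33 bits -> 5 pairs
lvds_pair_rate_mbps(135)    # 945 Mbps per pair at a 135 MHz pixel clock
```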
Display panels come in different color resolutions (16-bit vs. 48-bit) as well as different screen resolutions (QVGA vs. FHD). Increased panel resolution translates into a faster pixel clock rate and demands more data throughput. LVDS serializers reach their maximum transferable data rate at around a 135 MHz pixel clock speed. To allow faster clock rates, the pixel transfer can be split into even and odd pixel data transferred through two parallel LVDS links. The largest TVs today use up to 32 differential signal pairs, making pixel clock frequencies of up to 540 MHz possible. Such a large number of LVDS signals makes EMI control very challenging. While clearly limited as a technology, the 7:1 LVDS serialization architecture is still extremely popular, and it is available from multiple sources.
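The even/odd split for a dual-link connection can be sketched as follows (an illustrative Python fragment, not any vendor's implementation):

```python
def split_dual_link(scanline):
    """Split one scanline into even- and odd-pixel streams so each of
    two parallel LVDS links runs at half the panel's pixel rate."""
    return scanline[0::2], scanline[1::2]

even, odd = split_dual_link([10, 11, 12, 13, 14, 15])
# even carries pixels 0, 2, 4...; odd carries pixels 1, 3, 5...
```

Each link then serializes its half-rate stream with the same 7:1 scheme, and the receiver interleaves the two streams back into pixel order.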
While 7:1 LVDS SERDES is used as an internal interface, the digital visual interface (DVI) is its external box-to-box counterpart. Prior to serialization the data is encoded. The encoding scheme is transition-minimized differential signaling (TMDS), a technology developed by Silicon Image. TMDS provides AC-balanced signaling and reduces the EMI of the data lines at the cost of increased clock rates. A third, similar technology is the High-Definition Multimedia Interface (HDMI). HDMI expanded on the DVI concept by adding audio and data encryption to the TMDS signals. LVDS serialization, DVI and HDMI share one major design drawback: the pixel clock signal is transmitted in parallel with the data. The receiver uses this clock signal for data recovery (DLL). This makes the setup and hold time budget of the link very critical and limits the maximum data rate, even for a receiver with built-in deskew.
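The clock-rate cost of TMDS is easy to quantify: each 8-bit color channel is encoded into a 10-bit symbol, so every data pair runs at ten times the pixel clock. A quick sketch (the 165 MHz figure is the well-known single-link DVI pixel clock limit; the function name is ours):

```python
def tmds_pair_rate_gbps(pixel_clock_mhz):
    """TMDS encodes each 8-bit channel into a 10-bit symbol, so each
    data pair runs at 10x the pixel clock."""
    return pixel_clock_mhz * 10 / 1000

tmds_pair_rate_gbps(165)    # 1.65 Gbps per pair at the single-link DVI limit
```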
Serializer technology with the clock signal embedded into the data allows for the highest data transfer rate. THine’s V-by-One is a technology example here. Proprietary solutions, however, limit the adoption of the technology. DisplayPort (DP) has emerged as the preferred display interconnect for the future in the PC industry. DP is an open technology combining historic lessons learned. The technology is highly scalable and utilizes 8B10B coding with data scrambling, SSC and inter-lane deskewing in addition to embedded clocking. DP offers a low-power high-throughput video interface with low EMI. DP adoption started last year with direct drive monitors and is currently starting to replace the LVDS display connection within the notebook.
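DP's throughput headroom can be estimated with a little arithmetic. The sketch below assumes the 2.7 Gbps lane rate of early DisplayPort and the 80 percent payload efficiency inherent in 8B10B coding; the function names are illustrative, not from any specification.

```python
def dp_payload_gbps(lanes, lane_rate_gbps=2.7):
    """Usable DisplayPort bandwidth: 8B10B coding delivers 8 payload
    bits for every 10 line bits (80 percent efficiency)."""
    return lanes * lane_rate_gbps * 8 / 10

def pixel_bandwidth_gbps(pixel_clock_mhz, bits_per_pixel):
    """Raw video bandwidth a panel demands from the link."""
    return pixel_clock_mhz * bits_per_pixel / 1000

dp_payload_gbps(4)               # 4 lanes x 2.7 Gbps x 0.8 = 8.64 Gbps
pixel_bandwidth_gbps(135, 24)    # 3.24 Gbps for a typical notebook panel
```

By this estimate, even a two-lane link comfortably covers a 24-bit notebook panel, which helps explain why DP began displacing the multi-pair LVDS connection.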
In 2007 the consumer industry was caught by surprise by the success of the iPhone and by skyrocketing UltraMobilePC sales records. Mobile processors are being used to back up PC engines because of their low power. Display panel makers are currently developing solutions to reduce power through dynamic backlighting and by advancing OLED displays. Mobile processors driving large, colorful notebook panels are becoming reality. This leaves the mobile processor designer with the difficult decision of choosing the right video interface(s). Demand for driving HDMI from a mobile phone is growing. The DSI, HDMI, LVDS SERDES and DP worlds are starting to overlap.
We now also see demand for video transport through optical and wireless links. The digital picture frame calls for a wireless connection, and so does the super-flat LCD TV on the wall. Pushing compressed video through existing infrastructure using MPEG decoding is challenging, especially when film and video content is displayed on large TV screens. The parallel bus ruled the world in the past, and most of today's video architectures still use low serialization density and keep the pixel clock in parallel with the data. We are now starting to see a transition to fully optimized serial connections with the clock embedded into the data. The amount of cabling is reduced further by using adaptive receiver equalization and transmit pre-emphasis techniques.
We should not ignore the trend toward full-HD screens in the TV industry. People will want to share content from their personal devices on large screens with their friends. We are ill advised to believe that 18-bit color and QVGA resolution are "good enough" for portable, low-power products. We miss a trend if we ignore the recent growth in 3D cinemas or the commercial availability of 3D DLP TVs. DreamWorks, for example, plans to release all new films in 3D starting in 2009. 3D imaging requires twice the data throughput and far more advanced signal processing. Holographic techniques give us the chance to build image-projection eyeglasses that are lightweight and cool looking. Pocket image projection is becoming real right now! Early demos show that up to 30-inch projection can be achieved on notebook battery power. Screen sizes this large demand more than VGA resolution.
Figure 3 gives a side-by-side comparison of popular technologies found in a wide variety of portable products. The list is not exhaustive; other available technologies may not be mentioned here. The author would like to apologize for any unintended error or misrepresentation of data in the technology comparison.
Figure 3: A side-by-side comparison of popular technologies found in a wide variety of portable products
Many display interface technologies coexist today, and picking the right one depends on specific product concerns. Most often the graphic engine output or display panel input will dictate the choice. If a bridge solution becomes necessary, look for a simple technology optimized for your application; proprietary solutions for mobile processors are a great choice here due to their low power, cost and design complexity. For product-external interfaces, consider HDMI for its wide installed base and put DisplayPort on your "watch-for" list. DP technology is superior to HDMI but lacks the installed base today. Inside a box, DP is the best choice when throughput, low pin count and EMI really matter.
About the Author
Falk Alicke is a Senior System Engineer with the Interface products group at Texas Instruments (www.ti.com/interface) where he is responsible for mobile camera and display SERDES, DisplayPort LCD timing controller and video link product definition. Alicke received his MSEE in telecommunications at the Technical University of Applied Sciences (HTWK) in Leipzig, Germany.
Texas Instruments Inc.
This article originally appeared in the June, 2008 issue of Portable Design. Reprinted with permission.