Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual images. Video is the fastest-growing medium for content consumption, marketing, and even internal communications. Cisco forecast that by 2022 online video would make up more than 82% of all consumer internet traffic. 81% of businesses use video as a marketing tool, up from 63% the previous year.
A Facebook executive predicted that the platform would be all video and no text by 2021. 72% of customers would rather learn about a product or service by way of video (HubSpot). 90% of information transmitted to the brain is visual, and visuals are reportedly processed 60,000 times faster than text.
A video signal is essentially a sequence of time-varying images. A still image is a spatial distribution of intensities that remains constant with time, whereas a time-varying image has a spatial intensity distribution that varies with time.
Most video systems today are digital. High-definition video displays use digital technologies such as DLP, LCD, LCoS (including the SXRD and D-ILA variants), and plasma, which dominate the television landscape. Instead of "drawing" lines of picture information on the screen, these technologies form images with an array of pixels, and each frame is displayed in its entirety all at once: all pixels are activated simultaneously to form the complete image, rather than building the image line by line as CRTs do with scanning.
Advances in computer technology have allowed even inexpensive personal computers and smartphones to capture, store, edit and transmit digital video.
The demand for digital video is increasing in areas such as video teleconferencing, multimedia
authoring systems, education, and video-on-demand systems. The development of high-resolution video cameras with improved dynamic range and color gamuts, along with the introduction of high-dynamic-range digital intermediate data formats with improved color depth, has caused digital video technology to converge with film technology.
Video processing technology
In digital video, the picture information is digitized both spatially and temporally, and the resultant pixel intensities are quantized. The video signal is treated as a series of images called frames; the illusion of continuous motion is obtained by displaying the frames in rapid succession, at a rate termed the frame rate.
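As a minimal illustration of the frame-rate idea (the function name is ours, not from any standard API), the frame rate directly determines how long each frame stays on screen:

```python
# Illustrative sketch: the display interval between frames at common frame rates.
# Rates much below ~24 fps leave gaps long enough for motion to look jerky.
def frame_interval_ms(fps: float) -> float:
    """Time each frame stays on screen, in milliseconds."""
    return 1000.0 / fps

for fps in (24, 25, 30, 60):
    print(f"{fps} fps -> {frame_interval_ms(fps):.2f} ms per frame")
```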
Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities and other qualities.
The sensitivity of Human Visual System (HVS) varies according to the spatial frequency
of an image. In the digital representation of the image, the value of each pixel needs to be
quantized using some finite precision. In practice, 8 bits are used per luminance sample.
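The 8-bit quantization described above can be sketched as mapping a normalized luminance value onto a finite set of integer levels (function name and normalization range are our own assumptions for illustration):

```python
def quantize_luma(y: float, bits: int = 8) -> int:
    """Quantize a luminance sample in [0.0, 1.0] to an unsigned integer
    with the given precision; 8 bits gives 256 levels (0..255)."""
    levels = (1 << bits) - 1          # 255 for 8-bit precision
    y = max(0.0, min(1.0, y))         # clamp to the valid range
    return round(y * levels)

print(quantize_luma(0.0), quantize_luma(0.25), quantize_luma(1.0))
```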
A video consists of a sequence of images, displayed in rapid succession, to give an illusion of continuous motion. If the time gap between successive frames is too large, the viewer will observe jerky motion. The sensitivity of the HVS drops off significantly at high temporal frequencies; in practice, most video formats use temporal sampling rates of 24 frames per second or above.
Digital video consists of video frames that are displayed at a prescribed frame rate; NTSC video, for example, uses a rate of approximately 30 frames/sec (29.97 fps). The frame format specifies the size of individual frames in pixels: the Common Intermediate Format (CIF) has 352 x 288 pixels, and the Quarter CIF (QCIF) format has 176 x 144 pixels. Each pixel is represented by three components: the luminance component Y and the two chrominance components Cb and Cr.
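A quick way to see what these frame formats imply in storage terms is to compute the raw size of one Y/Cb/Cr frame. The sketch below assumes 4:2:0 chroma subsampling (the most common choice for CIF/QCIF material, though the text above does not specify it):

```python
def frame_bytes_420(width: int, height: int, bits: int = 8) -> int:
    """Raw size of one frame in bytes, assuming 4:2:0 subsampling:
    a full-resolution Y plane plus two quarter-resolution Cb/Cr planes."""
    luma = width * height
    chroma = 2 * (width // 2) * (height // 2)
    return (luma + chroma) * bits // 8

print(frame_bytes_420(352, 288))  # CIF
print(frame_bytes_420(176, 144))  # QCIF
```

At 30 frames/sec, even the small QCIF format produces over a megabyte of raw data per second, which is why compression is essential.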
A video processor may perform all or some combination of the following functions: upconversion, deinterlacing, frame rate conversion, noise reduction, artifact removal, lip sync (A/V synchronization) and edge enhancement.
Most video sources, including DVD, standard-definition TV, and 1080i high-definition TV, transmit interlaced images. Instead of transmitting each video frame in its entirety (what is called progressive scan), these sources transmit each frame as two fields, each containing half of the image's lines. This concept also applies to recording: video cameras and film-transfer devices likewise record only half of the image in each frame at a time.
Thus, translating the interlaced video signal from DVD and 1080i sources into progressive format is required by all digital displays. This is the job of a video processor, and the process itself is called de-interlacing. Video processors are found in all digital displays as well as in many DVD players and other source devices.
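One of the simplest de-interlacing strategies is "bob" de-interlacing, which stretches a single field back to full frame height by repeating its lines. The sketch below is our own illustration; real video processors use far more sophisticated motion-adaptive methods:

```python
def bob_deinterlace(field):
    """Stretch one field (half of a frame's lines) to a full frame by
    duplicating each field line. Cheap, but halves vertical detail."""
    frame = []
    for row in field:
        frame.append(list(row))
        frame.append(list(row))  # repeat the line to fill the gap
    return frame

# A 2-line field becomes a 4-line frame.
print(bob_deinterlace([[1, 2], [3, 4]]))
```

Weave de-interlacing (interleaving the two fields of one frame) preserves full resolution for static scenes but produces combing artifacts on motion, which is why good processors switch between strategies per pixel.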
Random noise is an inherent problem with all recorded images; the result is often called picture grain. Not only does noise get introduced during post-production editing or the final stage of video compression, but it is also present at the source in the form of film grain or imaging-sensor noise. Noise-reduction algorithms can minimize the grain in a picture.
The simplest approach to noise reduction is to use a spatial filter that removes high-frequency data. In this approach, only a single frame is evaluated at any given time, and parts of the image that are one or two pixels in size are nearly eliminated.
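The spatial approach described above can be sketched as a 3x3 box (mean) filter over a grayscale image, here represented as a plain list of lists (a simplified stand-in for a real frame buffer):

```python
def spatial_mean_filter(img):
    """3x3 mean filter: each output pixel is the average of its
    neighbourhood. Isolated one- or two-pixel specks are smoothed
    away, at the cost of blurring fine detail."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

# A single bright noise pixel is almost eliminated.
print(spatial_mean_filter([[0, 0, 0], [0, 9, 0], [0, 0, 0]]))
```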
A temporal filter takes advantage of the fact that noise is a random element of the image that changes over time. Instead of simply evaluating individual frames, a temporal noise filter evaluates several frames at once. By identifying the differences between two frames and then removing that data from the final image, visible noise can be reduced very effectively.
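In its most naive form, a temporal filter simply averages co-located pixels across several frames, so that random noise (which changes frame to frame) cancels while stationary content is preserved. This sketch ignores motion; practical temporal filters are motion-compensated to avoid ghosting on moving objects:

```python
def temporal_average(frames):
    """Average co-located pixels across a list of same-sized frames.
    Random noise tends toward zero mean, so averaging suppresses it."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) // n for x in range(w)]
            for y in range(h)]

# Three noisy readings of the same pixel converge on the true value.
print(temporal_average([[[10]], [[12]], [[14]]]))
```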
The process of compressing video files with minimal quality loss so that they can be delivered using less data is known as transcoding. Online video transcoding converts a video file from one format into a compressed file so that users can stream content at the best quality without buffering. Using this technique, companies can change the format of any video or reformat existing video files; through transcoding, enterprises can eliminate format and bitrate issues.
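The bitrate chosen during transcoding directly determines delivery cost. A back-of-the-envelope calculation (the function below is our own illustration, ignoring container overhead and audio) shows why a bitrate ladder matters for streaming:

```python
def streaming_size_mb(bitrate_kbps: float, duration_s: float) -> float:
    """Approximate file size produced by transcoding to a target bitrate.
    kilobits -> bytes: divide by 8; bytes -> megabytes: divide by 1000."""
    return bitrate_kbps * duration_s / 8 / 1000

# A 10-minute video at a few common rungs of a bitrate ladder.
for kbps in (800, 2500, 6000):
    print(f"{kbps} kbps -> {streaming_size_mb(kbps, 600):.1f} MB")
```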
Video processing technology has revolutionized the world of multimedia with products
such as Digital Versatile Disk (DVD), the Digital Satellite System (DSS), high definition television
(HDTV), and digital still and video cameras. The different areas of video processing include (i) video compression, (ii) video indexing, (iii) video segmentation, and (iv) video tracking.
Video Processor Chips
In order to convert all incoming video signals to the native resolution of a particular fixed-pixel display, manufacturers must incorporate a video-processing chip inside the display. In addition to scaling the image to fit the native resolution, this video processor is normally designed to enhance the image and remove artifacts caused by the conversion and transmission of video.
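The core of the scaling step is image resampling. The sketch below shows the simplest possible approach, nearest-neighbour scaling on a list-of-lists "frame"; real video processors use higher-quality polyphase filtering, but the mapping from output pixels back to source pixels is the same idea:

```python
def scale_nearest(img, out_w, out_h):
    """Nearest-neighbour scaling: each output pixel samples the source
    pixel closest to its back-projected position. Fast but blocky."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

# Upscale a 2x2 image to the display's "native" 4x4 grid.
print(scale_nearest([[1, 2], [3, 4]], 4, 4))
```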
Video processor ICs are semiconductor devices used to process video images for a wide range of applications. These integrated circuits (ICs) are designed to display analog and/or digital signals while eliminating multi-path interference and adjacent-channel noise. Video processor ICs may also provide high-definition (HD) decompression, pixel-based video analysis, adaptive pixel interpolation, and advanced field-merging functions to eliminate problems caused by interlaced coding.
Some video processing chips can decode two or more simultaneous standard-definition (SD) signals. Others comply with digital television standards from organizations such as the Advanced Television Systems Committee (ATSC) and the European Digital Video Broadcasting (DVB) project. Support for cathode ray tubes (CRTs) and flat-panel devices may also be available with some video processor ICs.
Video processor ICs are available in a variety of integrated circuit (IC) package types. Dual in-line packages (DIP) can be made of ceramic (CDIP) or plastic (PDIP). Quad flat packages (QFPs) contain a large number of fine, flexible, gull-wing-shaped leads. SC-70, one of the smallest available IC packages, is well suited for applications where space is extremely limited. Small outline (SO) packages are available with 8, 14, or 20 pins. Transistor outline (TO) packages are also common: TO-92 is a single in-line package used for low-power devices, TO-220 is suitable for high-power, medium-current, and fast-switching products, and TO-263 is the surface-mount version of the TO-220 package.
Other IC packages for video processor ICs include the shrink small outline package (SSOP), small outline integrated circuit (SOIC), small outline package (SOP), small outline J-lead (SOJ), discrete package (DPAK), and power package (PPAK). Packing methods for video processor ICs consist of tape reel, rail, bulk pack, and tube technologies. The tape reel method packs components in a tape system by reeling specified lengths or quantities for shipping, handling, and configuration in industry-standard automated board-assembly equipment. Rail, another standard packing method, is typically used only in production environments. Bulk pack devices are distributed as individual parts, while tray components are shipped in trays. The tube or stick magazine method is used to feed video processor ICs into automatic placement machines for through-hole or surface mounting.
Unfortunately, video-processing technology has not kept up with the picture quality of today’s larger and larger HD displays, which magnify the image defects that are caused by poor video processing.
The decoupling of software from hardware is a must-have characteristic of next-generation video processing solutions, giving video service providers the agility they need to rapidly adapt to changing market dynamics. This strategic approach closely aligns with the IT trend toward software-defined networking (SDN) and network functions virtualization (NFV). No longer tied to proprietary fixed-function (ASIC-based) hardware, operators can take advantage of Moore's law to realize dramatic CAPEX/OPEX savings as the performance/price ratio of COTS hardware improves over time.
Benefits of software-defined solutions:
- CAPEX/OPEX is reduced and business agility increases
- New chipset capabilities can be quickly leveraged via the software stack
- New codecs, standards, and protocols can be implemented without costly rip-and-replace
- Complements an SDN/NFV strategy
Virtualized architecture
The ability to virtualize resources goes hand in hand with the decoupling of hardware and software. Next-generation video processing solutions have a virtualized architecture that enables VSPs to efficiently scale and flex their operations. With this strategic approach providers can cope with daily fluctuations in demand, and have the compute flexibility to rapidly roll out new services as user consumption habits and market dynamics change.
Upgrading to a cloud workflow for video processing, editing, encoding, storage, and delivery can help streaming service providers reduce management and maintenance costs, boost operational efficiency, enable scalability, and reinforce security.
COVID-19 caused an exponential rise in streaming viewing as shelter-in-place measures kept many at home bingeing their favorite content. Many content and service streaming providers had to learn the hard way, via poor quality of experience (QoE), that a traditional video infrastructure based on hardware appliances could not offer the scalability and flexibility of cloud technology.
To handle the massive shifts in streaming consumption due to COVID-19, providers must have reliable yet flexible solutions that scale. Cloud-native and cloud-neutral media processing and delivery services are incredibly flexible and scalable.
Video Processing Chip Market
The growing demand for high-quality video, the increasing requirement for transcoding to reach more end users, and the need for multi-device-compatible video are among the key factors fueling the growth of the video processing platform market. In addition, several mid-sized players in this market are providing SMEs with more advanced capabilities, which increases competition in the market.
OTT platforms like Netflix and Amazon Prime are gaining more traction among the population across the globe, which is creating demand for video processing platforms in the market. In addition, the increasing demand for high-quality video content by the audience is motivating these platforms to introduce more video content on their platform and thus, contributing to the growth of the video processing platform market during the forecast period.
Key companies include Akamai Technologies, Inc., NVIDIA Corporation, Qumu Corporation, SeaChange International, Inc., Ateme S.A., MediaKind, JW Player, Inc., Kaltura, Inc., MediaMelon, Inc., and Imagine Communications, Inc. (Harris Broadcast).
Hisense Breakthrough in 8K AI Image Quality Chip
Hisense officially released its first self-developed 8K AI perceptual processor (Hi-View HV8107), further sharpening its competitive edge in the global race for display image quality. The Hi-View HV8107 chip not only supports over 33 million pixels, but its local-dimming control of 26,880 zones and powerful AI sensing capability can effectively improve the definition, contrast, and three-dimensional quality of TV pictures, allowing users to enjoy an immersive visual experience. It is particularly impressive in capturing the trajectories of high-speed moving objects, improving skin tones, and reducing noise.