Saturday, September 13, 2008

Fundamentals of digital video coding

There are two primary color spaces used to represent digital video signals: RGB and YCbCr. YCbCr represents color as brightness plus two color-difference signals: Y is the brightness (luma), Cb is blue minus luma (B-Y), and Cr is red minus luma (R-Y). YCC is a common shorthand for YCbCr. sRGB is an RGB color space created cooperatively by HP and Microsoft.
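As a sketch of the RGB-to-YCbCr relationship above, here is the full-range BT.601 conversion used by JPEG (the exact matrix coefficients vary between standards, so these values are one common choice, not the only one):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr (JPEG convention), 8-bit inputs.
    Y is a weighted sum of R, G, B; Cb and Cr are scaled B-Y and R-Y
    differences offset by 128 so they fit in an unsigned byte."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    clamp = lambda v: min(255, max(0, round(v)))
    return clamp(y), clamp(cb), clamp(cr)
```

Note that a pure gray input (R = G = B) yields Cb = Cr = 128, i.e. zero color difference, which is why the two chroma channels carry so little energy for natural images.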

"Gamut" refers to the set of colors that a color system can represent.

Currently, most video signals generated by a TV camera are interlaced. In an interlaced signal, each frame consists of two fields, the top field and the bottom field, which are captured 1/60 of a second apart (in 60 Hz systems). When an interlaced frame is displayed, the top field is scanned first and the bottom field next. The two fields are composed of the alternating lines of the frame. Progressive video does not consist of fields, only frames.
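The field structure described above can be sketched in a few lines. Assuming a frame is stored as a list of scan lines and the top field owns the even-indexed lines (the usual convention, though some systems differ):

```python
def split_fields(frame):
    """Split an interlaced frame (a list of scan lines) into its two fields.
    Top field = even-indexed lines (0, 2, 4, ...),
    bottom field = odd-indexed lines (1, 3, 5, ...)."""
    top = frame[0::2]
    bottom = frame[1::2]
    return top, bottom
```

Weaving the two fields back together line by line reconstructs the original frame, which is exactly what a deinterlacer must do (with motion compensation, since the fields are 1/60 s apart).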

The Common Intermediate Format (CIF) is a noninterlaced format. Its luminance resolution is 352x288 pixels per frame at 30 frames/second, and its chrominance has half the luminance resolution in both the vertical and horizontal dimensions (i.e., 4:2:0 subsampling).
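A minimal sketch of the 4:2:0 subsampling step: each 2x2 block of a chroma plane is averaged down to one sample, halving the resolution in both dimensions (averaging is one common filter choice; real encoders may use longer filters):

```python
def subsample_420(chroma):
    """Average each 2x2 block of a chroma plane, producing a plane at
    half resolution both horizontally and vertically (4:2:0).
    Assumes even width and height."""
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            s = (chroma[y][x] + chroma[y][x + 1]
                 + chroma[y + 1][x] + chroma[y + 1][x + 1])
            row.append(s // 4)
        out.append(row)
    return out
```

For CIF this turns each 352x288 chroma plane into 176x144, so the two chroma planes together cost only half as many samples as the luma plane.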

JPEG, which became an international standard in 1992, is a DCT-based coding algorithm.
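To make "DCT-based" concrete, here is a naive (unoptimized) 2-D DCT-II of an NxN block, the transform JPEG applies to 8x8 blocks before quantization. Production codecs use fast factorizations, so this is only an illustration of the math:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an NxN block (JPEG uses N = 8).
    A flat block concentrates all its energy in the DC coefficient
    out[0][0], which is what makes the DCT effective for coding."""
    n = len(block)
    def c(k):  # orthonormalization factor
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out
```

Running it on a constant 8x8 block of ones puts all the energy in the DC term (8.0) and leaves every AC coefficient at essentially zero.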

JPEG2000 uses the wavelet transform as its core technique. The wavelet transform provides not only excellent coding efficiency but also good spatial and quality scalability.
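The spatial scalability comes from the wavelet's band structure. As a sketch, one level of the (unnormalized) Haar transform on a 1-D signal splits it into a low band of pairwise averages and a high band of pairwise differences; JPEG2000 itself uses longer biorthogonal filters, but the idea is the same:

```python
def haar_step(signal):
    """One level of a Haar wavelet split: low band = pairwise averages,
    high band = pairwise differences. Decoding only the low band yields
    a half-resolution version of the signal, which is the basis of the
    spatial scalability mentioned above. Assumes even length."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high
```

Applying the split recursively to the low band builds the multi-resolution pyramid that lets a decoder stop early at any desired resolution.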

MPEG-1, "coding of moving pictures and associated audio", was completed first, in 1991. Its target application is digital storage media (CD-ROM) at bit rates up to 1.5 Mbit/s.

MPEG-2, "generic coding of moving pictures and associated audio", targets digital TV and HDTV applications at bit rates between 2 and 30 Mbit/s.

Work on MPEG-4 part 2 started in 1993; the standard was approved in 1999.

MPEG-4 part 10, Advanced Video Coding (H.264), offers higher coding efficiency, roughly twice that of MPEG-2.
