Data compression

Together with increased performance in computer systems and digital networks, improved data compression techniques are prerequisites of modern media technology. Data compression is the science of removing redundant information from a message to minimise the number of bits that must be stored on disk or transferred through a network. In a world with unlimited and free bandwidth, data compression would make no sense. In a world with stringent physical and economic constraints - which happens to be the case where we live - it makes a great deal of sense.
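As a small illustration of the idea, a highly redundant message compresses to a fraction of its original size. The sketch below uses Python's standard zlib module (a lossless, DEFLATE-based compressor) on a deliberately repetitive message; the message itself is just an illustrative assumption:

    import zlib

    # A highly redundant message: the same phrase repeated many times.
    message = b"the quick brown fox jumps over the lazy dog. " * 100

    compressed = zlib.compress(message)

    print(len(message))     # 4500 bytes uncompressed
    print(len(compressed))  # far fewer bytes: the repetition is redundant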

One early example of a data compression scheme is the digital communication system named after its developer, Samuel Morse. By assigning shorter representations (or keys) to the most frequently used characters (e.g. 'e'), telegraphers could send messages faster than if all characters were represented by fixed-length keys (as is the case in ASCII). While the basic idea has continued to be the foundation for many compression schemes, the search for optimal key assignment algorithms has improved compression ratios. A text file coded in ASCII will typically shrink to half its size when compressed with modern algorithms.
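The classic optimal key assignment algorithm is Huffman coding, which derives the keys from the character frequencies of the message itself: the two rarest symbols are repeatedly merged into a subtree, so rare characters end up with long codes and frequent characters with short ones. Below is a minimal sketch in plain Python (the function name and test string are illustrative, not from any particular library):

    import heapq
    from collections import Counter

    def huffman_code(text):
        """Build a prefix code giving frequent characters shorter keys."""
        counts = Counter(text)
        # Each heap entry: (frequency, tie-breaker, {char: code-so-far}).
        heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(counts.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            # Merge the two rarest subtrees, prepending one bit to each code.
            merged = {ch: "0" + code for ch, code in c1.items()}
            merged.update({ch: "1" + code for ch, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    code = huffman_code("this is an example of a huffman tree")
    print(code[" "])  # frequent character -> short key
    print(code["x"])  # rare character     -> long key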

One vital feature of compression schemes for text is that they are lossless (non-lossy), i.e. no information is lost in the compression-decompression process. When compressing images, losslessness is less important, and certain characteristics of the human visual system can be exploited to achieve higher compression ratios. For example, our eyes are less sensitive to the colour blue than they are to green. We can take advantage of this when compressing a colour image by throwing away some of the blue information that our eyes probably would not detect anyway.
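A minimal sketch of this idea, assuming pixels are plain 8-bit (R, G, B) triples: the blue channel is quantised more coarsely than the others, so it can be stored in fewer bits. The pixel format and the 4-bit blue budget are illustrative assumptions, not a description of any real image codec:

    def quantize_pixel(r, g, b):
        """Keep full 8-bit precision for red and green, but only 4 bits
        (16 levels) for blue, to which the eye is least sensitive."""
        b_quantized = (b >> 4) << 4  # drop the 4 least significant blue bits
        return r, g, b_quantized

    # The reconstructed blue value differs slightly from the original, but
    # the error is unlikely to be visible: the scheme is lossy by design.
    print(quantize_pixel(200, 150, 103))  # -> (200, 150, 96)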

Recently, some interesting lossy compression schemes building on these ideas have been introduced.
