What is the difference between compression rate and compression ratio?
Historically, there are two main types of applications of data compression: transmission and storage. An example of the former is speech compression for real-time transmission over digital cellular networks; an example of the latter is file compression (e.g. DriveSpace). The term “compression rate” comes from the transmission camp, while “compression ratio” comes from the storage camp.

Compression rate is the rate of the compressed data (which we imagine being transmitted in real time). It is typically given in units of bits/sample, bits/character, bits/pixel, or bits/second.

Compression ratio is the ratio of the size or rate of the original data to the size or rate of the compressed data. For example, if a gray-scale image is originally represented by 8 bits/pixel (bpp) and is compressed to 2 bpp, we say that the compression ratio is 4-to-1. The same result is sometimes described as 75% compression, meaning the data has been reduced in size by 75%.

In short, compression rate is an absolute term, while compression ratio is a relative term.
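As a rough sketch of the arithmetic (the function names and the 512×512 image size are purely illustrative, not from the original answer), the following Python computes both quantities for the 8 bpp → 2 bpp example:

```python
def compression_rate_bpp(compressed_bits, num_pixels):
    """Compression rate: absolute size of the compressed data per sample,
    here expressed in bits per pixel (bpp)."""
    return compressed_bits / num_pixels

def compression_ratio(original_bits, compressed_bits):
    """Compression ratio: relative measure, original size over compressed size."""
    return original_bits / compressed_bits

# A 512x512 gray-scale image, originally 8 bpp, compressed to 2 bpp.
pixels = 512 * 512
original_bits = 8 * pixels     # bits in the original image
compressed_bits = 2 * pixels   # bits after compression

print(compression_rate_bpp(compressed_bits, pixels))    # 2.0  -> 2 bits/pixel (absolute)
print(compression_ratio(original_bits, compressed_bits))  # 4.0  -> 4-to-1 (relative)
print(1 - compressed_bits / original_bits)               # 0.75 -> the "75%" size reduction
```

Note that the "75%" figure is the fractional size reduction, not the ratio itself, which is why the two ways of quoting the same result can look so different.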