Cache terminology: what does it mean?
Why cache improves performance
------------------------------

Today's microprocessors ("uPs") need memory faster than can be built from economical DRAMs, so we place a fast SRAM buffer between the DRAM and the uP. The most popular arrangement is the "direct-mapped cache," which is the only one I'll describe here.

Generic motherboard cache architecture
--------------------------------------

A direct-mapped cache has three big features:

1. a "data store" made with fast SRAMs,
2. a "tag store" made with even faster SRAMs, and
3. a comparator.

The data store is the chunk of RAM you see in the motherboard price lists. It holds "blocks" or "lines" of data recently used by the CPU; lines are almost always 16 bytes. The address feeding the cache is simply the least significant part of the address feeding main memory, so each main-memory location can be cached in only one location in the data store.

There are two "policies" for managing the data store. Under the "write-back" (o
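As a rough sketch of the direct-mapped addressing described above (in Python, using the 16-byte line size from the text and an illustrative 256 KB data store size, since the text doesn't fix one):

```python
# Direct-mapped cache addressing sketch. LINE_SIZE comes from the text;
# CACHE_SIZE is an assumed, illustrative value.
LINE_SIZE = 16            # bytes per cache line
CACHE_SIZE = 256 * 1024   # bytes in the data store (assumed)
NUM_LINES = CACHE_SIZE // LINE_SIZE

def split_address(addr):
    """Split a main-memory address into (tag, index, offset).

    offset: which byte within the 16-byte line
    index:  which line of the data store -- the least significant
            part of the address, so each memory location maps to
            exactly one cache location
    tag:    the remaining high bits, held in the tag store and
            checked by the comparator on each access
    """
    offset = addr % LINE_SIZE
    index = (addr // LINE_SIZE) % NUM_LINES
    tag = addr // CACHE_SIZE
    return tag, index, offset

# Two addresses exactly CACHE_SIZE apart collide: same index, different tag,
# which is why each memory location has only one possible home in the cache.
a, b = 0x12345, 0x12345 + CACHE_SIZE
assert split_address(a)[1] == split_address(b)[1]   # same index
assert split_address(a)[0] != split_address(b)[0]   # different tag
```

On a lookup, the hardware uses the index to read one line and its stored tag in parallel, and the comparator declares a hit only if the stored tag matches the tag bits of the requested address.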