I noticed that CFS divides files into small blocks. Doesn't this mean that as my file grows in size, the probability that all of its blocks are present tends toward zero?
The main claim in the Chord system is that “every block survives with high probability.” At first glance, this seems questionable: we prove that any particular block is lost with probability 1/N^2 (where N is the number of nodes in the system), but if there are many more than N^2 blocks, it seems the odds of losing at least one block must be quite large. To resolve this problem, we need to look at how Chord stores blocks. Chord picks a single “primary node” for a block and stores the block on the group of 2 log N contiguous nodes starting at that primary and continuing around the ring. The key observation is that no matter how many blocks there are, there are only N distinct primary nodes and, therefore, only N distinct groups of 2 log N contiguous nodes on which a block can be stored. As long as one node in each of these contiguous groups stays alive, there will be at least one live copy of every block. Under a node failure probability of 1/2, the probability that all the nodes in a sequence of 2 log N of them fail is (1/2)^(2 log N) = 1/N^2. Taking a union bound over the N distinct groups, the probability that any group (and hence any block) is entirely lost is at most N * (1/N^2) = 1/N, which tends to zero as the system grows, no matter how many blocks it stores.
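To see this concretely, here is a minimal Monte Carlo sketch (mine, not from the paper; the node counts, the 1/2 failure probability, and the trial count are illustrative assumptions). Since every replica group is a window of 2 log N consecutive ring positions, some block is lost exactly when the ring contains a run of at least 2 log N consecutive dead nodes, and that is all the simulation checks for:

    import math
    import random

    def some_block_lost(n_nodes: int) -> bool:
        """One experiment: does any group of 2*log2(N) consecutive
        nodes fail entirely, when each node dies independently w.p. 1/2?"""
        k = 2 * int(math.log2(n_nodes))                 # replicas per block
        alive = [random.random() < 0.5 for _ in range(n_nodes)]
        # A block is lost iff the ring has a run of >= k consecutive dead
        # nodes; appending the first k entries handles wrap-around.
        run = 0
        for a in alive + alive[:k]:
            run = 0 if a else run + 1
            if run >= k:
                return True
        return False

    def estimate_loss(n_nodes: int, trials: int = 5000) -> float:
        return sum(some_block_lost(n_nodes) for _ in range(trials)) / trials

    for n in (64, 256, 1024):
        print(f"N={n:5d}  observed={estimate_loss(n):.4f}  bound 1/N={1/n:.4f}")

The observed loss frequency should sit at or below the 1/N union bound for each N, and notice that the number of blocks never appears in the computation: only the N replica groups matter.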