
I noticed CFS divides files into small blocks. Doesn't this mean that as my file grows in size, the probability that all of the blocks are present tends towards zero?


The main claim in the Chord system is that "every block survives with high probability." At first glance, this seems questionable: we can only show that any particular block is lost with probability 1/N^2 (where N is the number of nodes in the system), so if there are many more than N^2 blocks, it seems the odds of losing at least one block must be quite large. To resolve this, we need to look at how Chord stores blocks. Chord picks a single "primary node" for a block and stores the block at that node and the 2 log N nodes immediately following it on the ring. The key observation is that, no matter how many blocks there are, there are only N distinct "primary nodes" and therefore only N distinct sets of 2 log N contiguous nodes on which a block can be stored. As long as one node in each of these contiguous groups stays alive, there will be at least one live copy of every block. Under a node failure probability of 1/2, the probability that all the nodes in a sequence of 2 log N of them fail is (1/2)^(2 log N) = 1/N^2. Taking a union bound over the N possible groups, the probability that any block at all is lost is at most N * (1/N^2) = 1/N, which goes to zero as the system grows, no matter how many blocks your file is divided into.
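To make the arithmetic concrete, here is a small back-of-the-envelope sketch in Python (not from the original answer). It evaluates the union bound above, assuming nodes fail independently with probability 1/2 and each block is replicated on a group of 2 log2 N consecutive nodes; the function name and the example values of N are purely illustrative.

```python
import math

def prob_any_block_lost(n_nodes: int, p_fail: float = 0.5) -> float:
    """Union bound on the probability that any block is lost.

    Assumes each block lives on a group of 2*log2(N) consecutive nodes,
    nodes fail independently with probability p_fail, and there are at
    most N distinct replica groups (one per possible primary node).
    """
    group_size = 2 * math.log2(n_nodes)
    p_group_dies = p_fail ** group_size      # every replica in one group fails
    return min(1.0, n_nodes * p_group_dies)  # union bound over the N groups

for n in (16, 256, 4096, 65536):
    print(f"N = {n:6d}: P(lose any block) <= {prob_any_block_lost(n):.2e}")
```

Running this shows the bound shrinking as 1/N: roughly 6.3e-02 at N = 16 down to 1.5e-05 at N = 65536, which is exactly why the per-block guarantee does not degrade as you add more blocks.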

