Why does Mnet publish twice the data when I publish a file, part II?
Mnet uses something called erasure codes, which allow us to break an encoded file into several parts and then recreate the file from a subset of those parts. (For example, we might break an encoded file into 24 parts where you need only 12 of those parts to recreate the original file.) For a more complete overview, see doc/filesystem_overview. Basically, this is some mathematical magic, but it doubles the file size, so you have to upload twice as much data to get the added robustness.
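The idea can be sketched with a toy Reed-Solomon-style code over a small prime field. This is only an illustration of the "any k of n parts" property, not Mnet's actual codec: real implementations work over GF(2^8) for efficiency, and the k=3, n=6 parameters here are just a miniature of the 12-of-24 example above.

```python
# Toy erasure code: treat k data bytes as coefficients of a degree-(k-1)
# polynomial over GF(257) and hand out its value at n distinct points.
# Any k points determine the polynomial, so any k shares recover the data.
P = 257  # a prime just above 255, so every byte value fits in the field

def encode(data, k, n):
    """Split `data` (exactly k bytes, for simplicity) into n shares."""
    assert len(data) == k
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(shares, k):
    """Recover the k data bytes from any k (x, y) shares by Lagrange
    interpolation, accumulating the coefficients of the polynomial."""
    shares = shares[:k]
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(shares):
        # Build the coefficient list of the Lagrange basis polynomial
        # for point xj: the product of (x - xm) over the other points.
        basis = [1]
        denom = 1
        for m, (xm, _) in enumerate(shares):
            if m == j:
                continue
            new = [0] * (len(basis) + 1)
            for d, b in enumerate(basis):
                new[d] = (new[d] - b * xm) % P       # multiply by -xm
                new[d + 1] = (new[d + 1] + b) % P    # multiply by x
            basis = new
            denom = denom * (xj - xm) % P
        scale = yj * pow(denom, P - 2, P) % P  # yj / denom mod P
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return bytes(coeffs)

data = b"Hi!"                      # 3 bytes of "file" data
shares = encode(data, k=3, n=6)    # 6 shares: twice the original size
# Lose any three shares; the remaining three still rebuild the file.
recovered = decode([shares[1], shares[4], shares[5]], k=3)
assert recovered == data
```

Note the trade-off the answer describes: the six shares together hold twice as many bytes as the original data, but any three of them suffice, so half the shares can be lost without losing the file.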