How fast is rdiff-backup? Can it be run on large data sets?
rdiff-backup can be limited by CPU, disk I/O, or available bandwidth, and the length of a session depends on the total amount of data, how much of it changed, and how many files are present. That said, in the typical case the number and size of changed files are small compared to those of the unchanged files, so rdiff-backup is usually CPU or bandwidth bound and takes time roughly proportional to the total number of files. Initial mirrorings are usually bandwidth or disk bound and take much longer than subsequent incremental updates. To give one arbitrary data point: when I back up my personal hard disk locally (about 36 GB, 530,000 files, maybe 500 MB of turnover, Athlon 2000, 7,200 RPM IDE disks, version 0.12.2), rdiff-backup takes about 15 minutes and is usually CPU bound.
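If you want to collect a comparable data point for your own data set, a small wrapper along the following lines can time a session and report the wall-clock cost. This is just an illustrative sketch: it assumes rdiff-backup is on your PATH, and it passes --print-statistics so that rdiff-backup itself summarizes file counts and changed bytes (drop that switch if your version does not support it).

    #!/usr/bin/env python3
    # Illustrative timing wrapper for a single rdiff-backup session.
    # Runs "rdiff-backup --print-statistics SOURCE DESTINATION" and reports
    # the wall-clock time, so you can benchmark your own hardware and data.
    import subprocess
    import sys
    import time

    def timed_backup(source, destination):
        start = time.perf_counter()
        # --print-statistics asks rdiff-backup to print its own session
        # summary (source files, changed files, bytes transferred, ...).
        result = subprocess.run(
            ["rdiff-backup", "--print-statistics", source, destination],
            check=False,
        )
        elapsed = time.perf_counter() - start
        print(f"exit code {result.returncode}, wall-clock {elapsed / 60:.1f} min")
        return result.returncode

    if __name__ == "__main__":
        if len(sys.argv) != 3:
            sys.exit("usage: timed_backup.py SOURCE DESTINATION")
        sys.exit(timed_backup(sys.argv[1], sys.argv[2]))

Comparing the reported elapsed time for an initial mirror against later incremental runs on the same data set will usually show the pattern described above: the first run is dominated by copying everything, later runs by scanning the file list and transferring the (much smaller) set of changes.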