For those who end up here via Google, even if this discussion is a bit old:

Basically, the block size (bs) parameter sets the amount of memory dd uses to read in a lump from one disk before trying to write that lump to the other. dd will happily copy using whatever BS you want, and will copy a partial block at the end.

If you have lots of RAM, then making the BS large (but entirely contained in RAM) means the I/O subsystem is utilised as much as possible by doing massively large reads and writes, exploiting the RAM. Making the BS small means the I/O overhead goes up as a proportion of total activity. Of course, there is a law of diminishing returns here. My rough approximation is that a block size in the range of about 128K to 32M will give performance where the overheads are small compared to the plain I/O, and going larger won't make much difference. The reason that range is so wide is that it depends on your OS, hardware, and so on.

If it were me, I'd do a few experiments timing a copy/clone using a BS of 128K and again using (say) 16M, as in the sketch below. If one is appreciably faster, use it. If not, then use the smaller BS of the two.
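Something like this is the kind of experiment I mean — a minimal sketch assuming GNU dd on Linux, where /dev/sdX and /dev/sdY are placeholder device names you must substitute yourself:

```sh
# Timing sketch: same 2 GiB copied at two block sizes.
# /dev/sdX (source) and /dev/sdY (destination) are placeholders --
# triple-check them, dd will cheerfully overwrite the wrong disk.
# iflag/oflag=direct bypass the page cache so the runs are comparable.

# 16384 x 128K = 2 GiB at the small block size...
time dd if=/dev/sdX of=/dev/sdY bs=128K count=16384 iflag=direct oflag=direct

# ...and 128 x 16M = the same 2 GiB at the large one.
time dd if=/dev/sdX of=/dev/sdY bs=16M count=128 iflag=direct oflag=direct
```

The count values are chosen so both runs move the same amount of data; scale them up if 2 GiB is too small to give stable timings on your hardware.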
Keep in mind that dd is dumb for a reason: the simpler it is, the fewer ways it can screw up. Complex partitioning schemes (consider a dual-boot hard drive that additionally uses LVM for its Linux system) will start pulling bugs out of the woodwork in programs like Clonezilla. A corrupt filesystem cloned sector-by-sector is no worse than the original. Badly-unmounted filesystems can blow ntfsclone sky-high.
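For completeness, the dumb sector-by-sector clone itself is one line — again a sketch with placeholder device names, assuming GNU dd:

```sh
# Whole-disk, sector-by-sector clone: dd doesn't care what filesystem
# (or corruption) the sectors hold. Make sure neither disk is mounted.
# conv=noerror,sync presses on past read errors, zero-padding bad blocks;
# status=progress (GNU coreutils) prints a running byte count.
dd if=/dev/sdX of=/dev/sdY bs=16M conv=noerror,sync status=progress
```

One caveat: with conv=noerror,sync a failed read zero-pads the whole block, so on a disk with bad sectors a smaller bs throws away less data around each error.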