Your method to copy 3TB between servers?

For a transfer protocol, I’d recommend rsync. You can transfer the data in chunks and safely resume if you need to stop the data flow or it gets interrupted.
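Something along these lines works for a resumable copy; the paths and hostname are placeholders and the flags are just one reasonable combination:

```bash
# -a preserves permissions/ownership/timestamps; -P = --partial --progress,
# so an interrupted run keeps partial files and picks up on the next invocation.
rsync -aP /data/ user@newserver:/data/
```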

You can also set up rsync to run in multiple parallel streams. Take a look at this article: https://stackoverflow.com/questions/24058544/speed-up-rsync-with-simultaneous-concurrent-file-transfers
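The linked approach roughly boils down to running one rsync per top-level directory with a bounded number of workers; /data and the destination are placeholders, and it assumes sane directory names:

```bash
# Launch up to 4 rsync processes, one per top-level entry under /data.
cd /data && ls -1 | xargs -P4 -I{} \
  rsync -aP {} user@newserver:/data/
```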

“Any compression you can get”.

Even if you can only get 1–2% compression, that’s a few minutes less (assuming a transfer time of X hours), and if it doesn’t have any negative impact on throughput (assuming your CPU is fast enough to keep up with the ~1 GB/s of 10G Ethernet), then why not?
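Back-of-the-envelope for 3 TB: at ~1 GB/s the raw transfer is roughly 50 minutes, so 2% is only a minute or so; at gigabit (~110 MB/s usable) it’s more like 7–8 hours, so 2% is closer to 10 minutes.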

If you’re on gigabit, it’s even more of a no-brainer to just turn it on: if it helps, it helps, and if it doesn’t, it won’t create a bottleneck…

If you’re on 10G, I’d test to see if the CPU can keep up.
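A crude way to test that, assuming pv is installed and using zstd at its lightest setting as an example compressor (the sample path is a placeholder):

```bash
# pv reports throughput on the *uncompressed* side of the pipe; if that
# number is comfortably above ~1 GB/s, compression won't hold back a 10G link.
tar cf - /data/sample | pv | zstd -1 -T0 > /dev/null
```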

ZFS send/receive would be the best option if you are using ZFS.
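A minimal sketch, assuming a dataset called tank/data and that the destination pool already exists (names are placeholders):

```bash
# Snapshot the dataset, then stream it to the new box over SSH.
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | ssh newserver zfs receive tank/data
```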

TBH, it doesn’t sound like the transfer is that urgent? I’d just rsync it and probably even use --checksum just to be sure. I’d rather know the data was transferred intact than cut hours or even days off the transfer time. It’s not like you migrate servers that often. You can even keep using the server and then just run rsync again once the first attempt completes.
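Roughly what that looks like, with placeholder paths: a normal first pass, then a second pass with --checksum once you’ve stopped writing to the source:

```bash
# First pass while the old server is still in use.
rsync -aP /data/ user@newserver:/data/
# Final pass: --checksum re-verifies file contents instead of trusting size/mtime.
rsync -aP --checksum /data/ user@newserver:/data/
```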

When speed is paramount, I have used netcat to transfer files, but that’s a gamble…

I have mixed feelings about using compression. Definitely only something lightweight like LZW if you do; anything heavier will almost certainly bottleneck the CPU on 10Gb (in my experience at least). And I wouldn’t expect much compression anyway, given the data is already compressed.


rclone to Google Drive is usually the way to go for inexpensive mass backup. You’ll get an order of magnitude more bandwidth than that.
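For example, assuming a remote named gdrive has already been set up with `rclone config` (remote name and paths are placeholders):

```bash
# --transfers controls how many files are uploaded in parallel.
rclone copy /data gdrive:backups/data --transfers 8 --progress
```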


This!


Ok … anyway, OP is not really interested in it. :wink:

Totally off-topic…
Personally, I usually avoid compression when creating archives unless I have a very specific need for it. The CPUs where I need to do this are very weak old chips, from Atom to first-gen i3. With the data I have, compression usually doesn’t give any significant benefit and often just adds unnecessary time. In my case the bottleneck is usually the CPU, not the uplink, so I generally don’t bother with it. :wink:

If time is not an issue I will just rsync the data. I prefer not to write over the source data by compressing it etc., just read it off.

Time-wise, over SMB it can be slow if it’s lots of small files, like an OS install or even a massive MP3 library.

As long as the copy tool can pick up where it left off and do some CRC / size / date checks, I am generally happy. This is only from personal use cases; it’s not my job to do backups / data copies.

Rsync is a horrible idea: the algorithm it uses to calculate parity is limited to a small size, so it takes forever to calculate the diff. Using rsync for a one-time transfer of data not currently in use is overkill. Also, it is not a transfer protocol, and if you then run it over SSH it will be even slower. :slight_smile:

Compression is the same story on a local LAN; it only generates more heat. :slight_smile:

Really you just need to transfer either a tar stream or a block device, depending on what you are moving (files or a full drive). Opening a port and using nc will probably be the fastest option.
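Something like this, assuming the BSD/OpenBSD flavour of nc (flag syntax varies between netcat variants) and placeholder paths/port:

```bash
# On the receiving (new) server: listen and unpack the incoming tar stream.
nc -l 9000 | tar xf - -C /data

# On the sending (old) server: tar up the directory and push it to the listener.
tar cf - -C /data . | nc newserver 9000
```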

Will that cable supply enough power for a 3.5" drive?
I have used a USB 2 one for SSDs and 2.5" drives, but haven’t tried the USB 3 version…

[edit, opened the picture and saw the second SKU for 3.5" drives]

Dude, this thing is AmazeBALLS!!! I have had two for 10+ years and used them on every SATA3 drive I have.
