SSD to SSD copy performance

I am kind of at a loss here and need some suggestions. At work we have a spare Win 10 workstation we also use as our backup “server”: i7-4770, 16GB RAM (I guess it's really just being used as a NAS). My problem is I have noticed that when copying our work files from SSD to SSD, or across the network (SSD-1Gb-SSD), we experience extremely slow copy speeds moving folders around 2GB to 5GB in size.
It will start and peak at 70MB/s to 100MB/s for a few seconds, then abruptly drop to 200KB/s or lower. This is on a fresh copy, no overwriting, and I trimmed both drives before testing. I get almost the same results across the network (SSD-1Gb-SSD), see below.

The HDD results are slower again, of course.

The only thing I can think of is that there are a lot of small files under 2MB, some as small as 2KB. I know that would affect an HDD, but I didn't think it would be that bad on an SSD? If so, is there a way to set up the drives for better copy performance with smaller files? Keep in mind most backups are across gigabit LAN. Thanks in advance for your advice.

Small writes still affect performance on SSDs because the controller has to work much harder (many more operations per GB) than when writing large files.

It's not the same limitation as head seek times in hard drives, but it's sort of a firmware version of that: each new file means a new cycle for the controller, which means more overhead.
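The per-file overhead described above is easy to demonstrate: writing the same total number of bytes as thousands of small files is much slower than writing one large file, because every file adds its own open/close and metadata work on top of the data itself. A minimal sketch (the file count and sizes here are made up for illustration, not taken from the thread):

```python
import os
import time
import tempfile

def write_many_small(dirpath, count=2000, size=2048):
    """Write `count` files of `size` bytes each -- one create/open/write/close per file."""
    payload = os.urandom(size)
    for i in range(count):
        with open(os.path.join(dirpath, f"small_{i}.bin"), "wb") as f:
            f.write(payload)
    return count * size

def write_one_large(dirpath, total):
    """Write the same total number of bytes as a single sequential file."""
    with open(os.path.join(dirpath, "large.bin"), "wb") as f:
        f.write(os.urandom(total))
    return total

with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter()
    small_bytes = write_many_small(d)
    t_small = time.perf_counter() - t0

    t0 = time.perf_counter()
    large_bytes = write_one_large(d, small_bytes)
    t_large = time.perf_counter() - t0

print(f"{small_bytes} bytes as 2000 files: {t_small:.3f}s")
print(f"{large_bytes} bytes as 1 file:    {t_large:.3f}s")
```

On most systems the many-small-files case is noticeably slower even though the byte count is identical; the gap is the per-file overhead, and over a network share it grows further because each file adds protocol round-trips.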

Reads are similar.


Thank you for that article, it was a good read. I checked, and NCQ is enabled on both drives.
I didn't think performance would drop so low for 1000 small files, for such an extended time, especially on an 850 EVO.

Is there any way to optimize the drive for this? Or a file system or arrangement that would be better suited? Or is this even the problem? Both SSDs' activity sits around 1% at the slower speeds. If it was running out of IO, wouldn't it be at 100%?

This is likely due to caching on your network interface.

You can improve its performance by enabling offload or increasing the cache size/buffers of your network interface.
Also disable "Digitally sign communications (always)" in group policy; this will free up some bottlenecks.

Bigger jumbo packets.
Maximize the number of RSS queues (may help sometimes).

Thanks Cyklon, I have double-checked the NIC settings; they are all enabled and caches maxed out.

I don't think it is a network bottleneck, as it still does this when copying between the two SSDs within the server.

Did you try disabling that group policy?
You need to edit registry settings to set the caches you want on the network interface.

Another thing that may speed things up is creating partitions with a bigger cluster (allocation unit) size than the typical default.
If it's not the network, the burst you see at the beginning is the cached part (basically data is copied into memory, and later queued into your drive controller).
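The cluster size is chosen when the partition is formatted; NTFS defaults to 4KB, and the format command lets you pick a larger allocation unit. A sketch for a Windows command prompt (the drive letter D: is a placeholder, and formatting erases the volume):

```shell
:: WARNING: formatting erases everything on the volume.
:: /FS:NTFS - use the NTFS file system
:: /A:64K   - 64 KB allocation unit (cluster) size instead of the 4 KB default
:: /Q       - quick format
format D: /FS:NTFS /A:64K /Q
```

One caveat worth knowing: with lots of sub-2KB files, a bigger cluster mostly wastes space, since each tiny file still occupies at least one whole cluster; larger clusters mainly help large sequential files.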

The speeds you've shown are terrible (if within the same server); normal HDDs perform better at times.

e.g. a normal 4TB 7200RPM HDD copying to an 850 EVO SSD

Thanks @anon5205053.
"Digitally sign communications (always)" in group policy is disabled.
Your scenario is different; I'm copying a lot of tiny files. I get great results when copying one large file.


I think it's like @tkoham said, small writes still affect performance on SSDs; I just didn't think it would be that bad.
I will try a bigger cluster size and come back with results.

OK, I think I get your problem now.

Try the following:

You can find plenty of NTFS settings you could adjust for your needs here:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem

Turning off search indexing (not sure if that will help, but it may when copying small files).
Enable the write cache in the hardware options for your SSDs/drives.
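Two NTFS settings under that registry key that are commonly suggested for small-file workloads are the last-access timestamp update and legacy 8.3 short-name generation; Windows' built-in fsutil tool sets the same values without hand-editing the registry. A sketch (run from an elevated prompt; treat it as something to measure before and after, since the gain varies by workload):

```shell
:: Stop NTFS updating the "last access" timestamp on every file read
fsutil behavior set disablelastaccess 1

:: Stop generating legacy 8.3 short filenames for newly created files
fsutil behavior set disable8dot3 1
```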

Here are some more registry tweaks that may help.

Adding threads per queue may help with many small files (but there are more tweaks in that link, so try them out).


Obviously, restart after you make changes in the registry.

The best thing to do would be to pack the small files into a single file. It doesn't need anything super advanced; even a standard zip file would be overkill. This turns the thousands of small files into a single file that is much easier to deal with.
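The "no real compression needed" point can be done with Python's standard zipfile module: ZIP_STORED just concatenates the files into one archive without compressing them, so the copy becomes one large sequential transfer. A minimal sketch (the paths in the usage comment are made up):

```python
import os
import zipfile

def bundle_folder(src_dir, archive_path):
    """Pack every file under src_dir into a single uncompressed .zip.

    ZIP_STORED skips compression entirely -- the goal is only to turn
    thousands of small files into one big file for the copy.
    """
    with zipfile.ZipFile(archive_path, "w", compression=zipfile.ZIP_STORED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # store paths relative to src_dir so the layout survives extraction
                zf.write(full, arcname=os.path.relpath(full, src_dir))
    return archive_path

# usage sketch (hypothetical paths):
# bundle_folder(r"D:\work\project_files", r"D:\backup\project_files.zip")
```

Writing the archive to a different drive than the source avoids reading and writing the same disk at once; extraction on the far end is `zipfile.ZipFile(path).extractall(dest)`.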


I did think of this, but the downside is we then have to unzip the files whenever our program needs to access them. I guess it's all about compromise.

I did some time testing on this. Results for a 2.9GB folder of approx. 10000 randomly sized files (3000 of them smaller than 2KB) over a 1Gb network:

local machine (M.2 SSD) > backup server (RAID1 SSDs), copy and paste = 3m26s
folder compressed first (rar, fastest), then local machine (M.2 SSD) > backup server (RAID1 SSDs), copy and paste = 54s
copy and paste directly into a compressed rar folder on the backup server = 50s
unzipping from the server to the local machine = 1m41s

I assumed the speed gain was because it was caching files into RAM and then moving the archive over the network, but the interesting thing is WinRAR was only using 113MB of RAM, with network utilization at 50% max.

In an effort to make this seamless, the question is: would enabling drive compression have similar results?
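For what it's worth, NTFS compression can be tried per-folder without reformatting, using the built-in compact tool, which makes it easy to re-run the same timing test. A sketch (the path is a placeholder):

```shell
:: Compress an existing folder and everything under it with NTFS compression
compact /C /S:"D:\Backups\WorkFiles" /I /Q

:: Check the resulting compression ratio afterwards
compact /S:"D:\Backups\WorkFiles"
```

One caution: unlike the rar test, NTFS compression does not reduce the file count; every file is still a separate copy operation, so the per-file overhead identified earlier in the thread would remain.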