So back in the day when SATA still sucked (e.g. SATA 150) I used to be a SCSI guy; my own workstations usually comprised four or more 36GB or 72GB Ultra320 wide 68-pin SCSI drives plugged into a good ol' Adaptec SCSI card... Linux software RAID... all was great. In later years I've, like everyone else, been running SATA, albeit enterprise-grade drives, and usually run them on a SAS controller card with those mini-SAS to SATA cables.
For my current AMD Threadripper Pro 5975WX build, I'll be running NVMe RAIDs for the OS etc. The Lian Li O11 XL case has 4 hot-swap bays for 3.5" drives. My thoughts were along the lines of using 4 x 18TB enterprise SATAs in RAID6.
Now I'm thinking, why not stick in a SAS card and run 18TB SAS drives...?
Traditionally SAS, just like SCSI vs ATA before it, had far higher MTBF stats etc. vs SATA.
A quick look at the Seagate Exos 18 datasheets and I see no compelling reason why SAS?
Specs are the same? Talk me into SAS or out of SAS... and go...
Let's just say for this exercise it's mission-critical data...
The very light research I've done so far shows:
Although SAS is full duplex and SATA is half duplex, both drives show (at least with the Seagate X18) the same throughput...
Same MTBF
So again, in 2022 with current tech, why SAS over SATA?
SAS drives are built for servers, so you can put a lot of them in one case and they won't pick up rotational-vibration harmonics off each other and fail.
I use SAS because the 2nd-hand ones from eBay are very cheap: £3 per TB.
Obviously if you're buying the biggest drives you will need to buy new drives, which are more expensive than SATA.
SAS signal voltages are higher, more than double SATA's, greatly increasing its ability to deal with marginal-quality cables and EMI.
SAS drives also enjoy an expanded command set which has real world benefits if data read issues are encountered.
This is less of a hard-and-fast rule nowadays, but SAS drives used to have components binned to a higher quality than what was put in SATA drives (platters, heads, voice-coil amps).
I have a Lian Li D600 server case that holds 30 drives. It's 10 x 1TB, 10 x 2TB and 10 x 6TB... This is my storage server build from around 6 years ago.
The build featuring the Lian Li O11 XL case is going to be my flagship workstation, with full custom-loop water cooling and 3 x 360mm rads... CPU as mentioned, the 5975WX...
There is a RAID card on which I will run 4 x 2TB NVMe in RAID0, and on the motherboard the 2 x NVMe slots will be another RAID0 with 2 x 2TB.
The storage, as in the 4 x 3.5" drives, will mostly be to take images/snapshots of the 8TB etc... The price difference between 18TB and 20TB makes 20TB poor value for money.
So how do you build a large data volume with safety built in? Def not RAID5 for a large volume... so the logical, if expensive, conclusion is 4 x 18TB in RAID6: 36TB usable with two-drive failure tolerance...
Maaaybe with a case mod I can somehow squeeze a 5th drive in (won't be hot-swap), which would bring usable space to 54TB... Rebuild time "should" be under 72 hrs, I'm guessing...
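As a rough sanity check on those numbers, here's a back-of-the-envelope sketch; the ~150 MB/s sustained rebuild rate is purely my assumption for illustration, not a figure from any datasheet:

```python
# Back-of-the-envelope RAID6 sizing and rebuild-time estimate.
# Assumption (mine, illustrative only): a rebuild sustains ~150 MB/s
# per drive on average, below the Exos outer-track peak.

def raid6_usable_tb(drives: int, size_tb: float) -> float:
    """RAID6 stores (n - 2) drives' worth of data; 2 drives go to parity."""
    return (drives - 2) * size_tb

def rebuild_hours(size_tb: float, mb_per_s: float) -> float:
    """Hours for one full pass over a drive at a sustained rate."""
    return size_tb * 1e6 / mb_per_s / 3600  # TB -> MB, then seconds -> hours

print(raid6_usable_tb(4, 18))            # 36.0 TB usable with 4 drives
print(raid6_usable_tb(5, 18))            # 54.0 TB usable with a 5th drive
print(round(rebuild_hours(18, 150), 1))  # ~33.3 hrs, so "under 72" looks safe
```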
What do you use in your old storage server? SAS or SATA?
My newish Threadripper Pro 3975WX machine (in a Fractal Meshify 2 XL) has a Microchip/Adaptec HBA 1200-32i SAS/SATA/NVMe adapter card, but I mostly use SATA for spinning rust (I have one SAS hard disk).
Performance difference in Ceph for Exos 18TB SAS and SATA drives.
The old storage box is all SAS, but I use a SAS expander and SAS card with several SAS-HD to 4x SATA cables.
So that benchmark data is very useful. I'm not familiar with your specific system, but the takeaway there is that although Seagate says the performance "should" be the same, in your case the SATA is ±194 MB/s and the SAS is 246 MB/s. On its own not much, BUT consider this performance difference during a RAID rebuild. It's substantial.
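Putting those two rates through the same rough single-pass rebuild math as above (again assuming, optimistically, that a rebuild would sustain the benchmarked throughput):

```python
# Same rough single-pass estimate, fed with the Ceph benchmark numbers.

def rebuild_hours(size_tb: float, mb_per_s: float) -> float:
    return size_tb * 1e6 / mb_per_s / 3600

sata_mb_s, sas_mb_s = 194.0, 246.0             # from the Ceph benchmark above
print(round(rebuild_hours(18, sata_mb_s), 1))  # ~25.8 hrs
print(round(rebuild_hours(18, sas_mb_s), 1))   # ~20.3 hrs, roughly 5.5 hrs less
```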
I’m not 100% sure about the benchmark data.
Both disks are in use in a Ceph cluster, so there is already data on the disks.
The difference could be that it writes data onto different parts of the disk, depending on where there is free space on the disks. I'm also quite new to Ceph, so I don't know what tuning parameters I might have missed. Both drives operate with the write cache disabled.
In your case I would probably go with SATA disks if you are planning to add them to the new case, and not bother with SAS: you only have 4 disk slots and can just use the SATA ports on the motherboard, for cost reasons.
Or you can use SAS disks and start replacing the smaller disks in your old storage server with bigger drives.
In a data center environment, I have not seen an appreciable difference between SATA and SAS drives. Installed lots of both SATA and SAS 6TB HGST drives in Dell R730 servers (H730 controller), and saw no performance difference in initial benchmarks, and over several years, no apparent trend of either drive type dying before the other.
Your guess as to why that was the case is as good as mine. I still don’t hesitate to spec SAS drives because the price difference is minimal.
Often I see new old stock (NOS) SAS drives much cheaper than SATA, but that would only help if you were looking for lower capacity drives.
Y'all have to admit though, it's interesting that both still exist when the difference, as we've pointed out, is minimal at best...
In the old days you'd have a 500GB SATA disk, typically 7200 RPM, and then you'd have 36, 72 or 146GB SCSI Ultra320s at 10,000 or 15,000 RPM.
Seek times were off the charts better for the SCSI, and of course disk throughput too.
But this was at a time when SATA was topping out at like 120 MB/s or less.
Yeah, I'll probably opt for SATA enterprise-grade disks, based on the fact that I can easily sell 'em on OR stick 'em in my storage server...
Money's tight; I'm in the process of buying the 5975WX CPU, which is technically the last piece I need to start the build-off.
I do have some 12Gb/s SAS controllers here, so maybe I'll get a few smaller SAS drives to start off with...
18TB enterprise SATAs are sitting at around 300-odd USD each, so a fair investment at around 1200 USD for 4.
Yeah, I guess it must have had something to do with where on the disk stuff got written, or what was going on in the background.
When I reran the test today, the SATA disk was faster.