This could be an obvious question, but why don't we use Ethernet as the storage interface for SSDs rather than SATA, which can create a bottleneck?
1Gbps Ethernet is roughly 1/6th the speed of SATA III, Ethernet controllers don't speak the SATA command set, Ethernet controllers faster than 1Gbps are ludicrously expensive compared to SATA controllers, etc, etc, etc.
SATA 3.2 is only 1969 MB/s while Cat 7a is 100 Gbps? The price hike does, however, make sense.
A SATA 3 controller can be had for around $10. A 100Gbps Ethernet controller is thousands. The cost isn't in the cables, it's in the controllers. And even that 100Gbps Ethernet controller is not capable of executing any of the commands required to operate a drive. There are 7 pins on a SATA data connector and 8 in an Ethernet connector. Ethernet controllers simply don't support the SATA commands used to communicate with drives.
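If anyone wants to sanity-check that "1/6th" figure, here's a rough back-of-the-envelope in Python (the usable-throughput numbers are ballpark assumptions, not measurements):

```python
# Rough comparison of 1GbE vs SATA III (approximate figures only).

gbe_link_rate   = 1.0   # Gbit/s, 1GbE line rate
sata3_link_rate = 6.0   # Gbit/s, SATA III line rate

# SATA III uses 8b/10b encoding, so only 80% of the line rate carries data.
sata3_payload_MBps = sata3_link_rate * 1e9 * 0.8 / 8 / 1e6    # ~600 MB/s
# 1GbE carries close to 1 Gbit/s of payload; assume ~5% lost to frame/IP overhead.
gbe_payload_MBps = gbe_link_rate * 1e9 * 0.95 / 8 / 1e6       # ~119 MB/s

print(f"Link-rate ratio   : 1 : {sata3_link_rate / gbe_link_rate:.0f}")
print(f"Usable throughput : {gbe_payload_MBps:.0f} MB/s vs {sata3_payload_MBps:.0f} MB/s")
```

So the 1/6 holds at the link-rate level, and the usable-throughput gap is in the same ballpark.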
Thank you for enlightening me.
Erm. There were 7 (seven) last time I checked. Three of which are GND. =)
Although comparing SATA and CSMA/CD is like comparing apples to oranges. Why would anyone use a carrier-sense protocol with collision detection in an environment where there are initiator and target devices and data flows sequentially?
You're right, I wonder where I was remembering that 30-pin number from. Thought it might have been power, but nope, that's 15, haha. Maybe from Apple's old connector or something, lmao. Edited my posts with corrections :3
There is iSCSI. But putting a full TCP/IP stack on a hard drive, plus all the complexity involved in connecting one to the network, is horribly difficult and would raise the price of the hard disk significantly. SATA is, in that regard, "simple".
If you do connect a hard drive directly to the network, you face the following problems:
- IP configuration (you could use L2 and just use MAC addresses).
- The network introduces extra latency. For normal HDDs this would be an acceptable increase; for SSDs, not so much (see the rough numbers just below this list).
- Limited (cheap) bandwidth, which is a big problem with RAID configurations.
- Configuring a device with no screen and no keyboard for secure access on the network.
- Updating the software/firmware of the disk (frequently).
- The cost of HDDs would increase (more RAM and CPU power needed) to perform the above tasks.
- General "internet of things" problems.
In short, if you want your disk on the network use a NAS/SAN.
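To put rough numbers on the latency point in that list, here's an illustrative sketch (the latencies are typical ballpark values, not measurements of any particular hardware):

```python
# Ballpark access latencies in milliseconds (orders of magnitude only).
hdd_seek_ms = 8.0    # spinning-disk random access
ssd_read_ms = 0.1    # SATA SSD random read
lan_rtt_ms  = 0.3    # one switched-LAN round trip (cable + switch + network stacks)

for name, local_ms in (("HDD", hdd_seek_ms), ("SSD", ssd_read_ms)):
    overhead_pct = lan_rtt_ms / local_ms * 100
    print(f"{name}: {local_ms} ms locally, +{lan_rtt_ms} ms over the network "
          f"(~{overhead_pct:.0f}% slower per access)")
```

The same network hop that barely registers next to an HDD seek multiplies an SSD's access time several times over.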
It is used in some oddball/uncommon situations for SANs:
HyperSCSI and AoE are SCSI and ATA over Ethernet, respectively.
The point of these protocols is low-cost, low-latency SANs.
This is not like iSCSI, which is SCSI over TCP/IP...
These are SCSI or ATA carried directly in Ethernet frames, which means they are not routable.
Like I said above, these are not popular standards.
I think a lot of people don't understand the difference between a NAS and a SAN.
For what people usually want - they want a NAS.
A NAS gives you a place where you can put files.
A NAS is connected to via SMB/CIFS/Windows shares, NFS, etc.
A SAN is a block level device - like a hard drive attached via network instead of by SATA/SAS/M.2.
As such, you have to format it with a filesystem (if it doesn't have one already) after you have connected to it.
You connect via iSCSI, FCoE, HyperSCSI, AoE, etc., then mount your filesystem.
You can do a bunch of strange/goofy things you probably shouldn't do with a SAN.
You're less likely to get into trouble with a NAS.
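If it helps, here's a tiny illustrative sketch of that difference in Python; the paths and device names are made up, and the SAN device would of course already have to be attached (iSCSI/AoE/etc.) and, for real use, formatted:

```python
# NAS: the server exposes files. The client just opens a path on a mounted
# SMB/NFS share and never sees sectors. (Mount point below is hypothetical.)
with open("/mnt/nas_share/report.txt", "rb") as f:
    data = f.read()

# SAN: the target exposes a raw block device. The client sees a "disk" made of
# sectors and has to bring its own filesystem. (Device name below is a
# hypothetical network-attached LUN; reading it raw needs root.)
with open("/dev/sdb", "rb") as dev:
    dev.seek(4096)            # jump to an arbitrary byte offset on the device
    sector = dev.read(512)    # read one 512-byte "sector"; no files involved
```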
Hardware-wise it has to do with signal strength.
Ethernet was created for long-range signalling, whereas SATA was created for storing data locally (very locally).
Logistically it really boils down to price to performance: SATA is a lot cheaper to make and a lot more efficient for moving storage traffic than Ethernet, but you do not want to send your SATA signal through a 10-100 m long SATA cable; that would yield awful results due to attenuation in the cable.
Whereas Ethernet can be used to extend signals, e.g. for your monitor, over 100+ m with an extender box.
iSCSI does exactly what you are talking about and runs the SCSI protocol over an IP network to a SAN. The SAN presents the storage as a block-level device that your computer thinks is a locally attached drive. There is also Fibre Channel over TCP/IP (FCIP) as well. Neither is well suited to home use; both are better suited to an enterprise environment due to the costs involved.
As mentioned previously, a 1Gbps network is limited to roughly 100MB/s, while an SSD connected via SATA III runs at around 500MB/s, which makes the IP network connection a bit slow; 10Gbps is more appropriate. You also need a NAS/SAN to host the drive at the other end, by which point you might as well just set up some sort of file server and share the centralized storage with all users on the LAN.
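To make that concrete, here's what moving a chunk of data looks like at those approximate rates (a 50 GB transfer is just an arbitrary example):

```python
# Time to move 50 GB at the approximate usable rates quoted above.
data_GB = 50
rates_MBps = {
    "1GbE (~100 MB/s)":         100,
    "SATA III SSD (~500 MB/s)": 500,
    "10GbE (~1000 MB/s)":       1000,
}

for name, rate in rates_MBps.items():
    minutes = data_GB * 1000 / rate / 60
    print(f"{name:<26}: {minutes:4.1f} minutes")
```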
Better yet why not USB 3.1?
So as to not write a scientific article in the thread:
It's mostly down to USB having extreme latency in comparison.
I don't actually think there's anything else that would stop you from just using USB as your main boot device.
If you want a notable example of using USB for main system storage: the PlayStation 4.
Note: everyone I've talked to agrees it has to be the reason it takes roughly 1 hour to download and write 10GB of data.
we might be wrong though
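For what it's worth, the arithmetic on that example suggests raw USB bandwidth isn't what's being hit:

```python
# Effective rate for "roughly 1 hour to download and write 10 GB".
data_bytes = 10e9
seconds = 3600
rate_MBps = data_bytes / seconds / 1e6
print(f"Effective rate: {rate_MBps:.1f} MB/s")   # ~2.8 MB/s

# Even USB 2.0 sustains ~35-40 MB/s in practice and USB 3.x hundreds of MB/s,
# so whatever the bottleneck is (download speed, small writes, latency, ...),
# it isn't the USB link's raw throughput.
```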
All my crashes have been USB related. Remove the USB mass storage devices and the system goes hundreds of days without reboots.
USB sucks.
I don't know about Ethernet instead of SATA, but Ceph and WDLabs did something similar with the WDLabs Converged Microserver He8: they basically soldered an entire microcomputer onto the drive's PCB. The interface is dual 1 GbE SGMII ports with the ability to reach 2.5 GbE in a compatible chassis, and the physical connector is identical to existing SAS/SATA devices.
Article:
SATA 3.2 is 16 Gbit/s, or 1969 MB/s after encoding overhead.
CAT7 supports 10 Gbit Ethernet (approx. 1190 MB/s).
In theory CAT7A could support 100Gbit on copper, but AFAIK the 100GbE standard uses fibre optic or twin-axial copper cable (not twisted pairs).
So, SATA is cheaper, faster, and has less protocol overhead (ie lower latency).
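For anyone wondering where those two numbers come from, the arithmetic looks like this (the ~5% Ethernet framing overhead is an assumption for full-size frames):

```python
# SATA 3.2 (SATA Express) runs over two PCIe 3.0 lanes:
# 8 GT/s per lane with 128b/130b encoding.
sata32_MBps = 2 * 8e9 * (128 / 130) / 8 / 1e6    # ~1969 MB/s

# 10GbE delivers 10 Gbit/s of data after its 64b/66b line coding;
# knock off a few percent for Ethernet/IP framing (assumed ~5% here).
tengbe_MBps = 10e9 * 0.95 / 8 / 1e6              # ~1188 MB/s

print(f"SATA 3.2 : {sata32_MBps:6.0f} MB/s")
print(f"10GbE    : {tengbe_MBps:6.0f} MB/s")
```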
Times change. Zombie thread, but you show up high on Google when you search this, so I think reviving it has merit.
Ethernet SSDs are now a thing, called EBOF (Ethernet bunch of flash). Multiple vendors are showing good adoption rates, and this will likely become commonplace in datacenters before trickling down.
Benefits are myriad but include power savings, flexible allocation, higher utilization, and limited impact from single points of failure.
Flash is dense; when you can spend thousands of dollars on one drive, the marginal cost of adding a 25 Gbps Ethernet interface becomes negligible. Most are using copper SFP28 direct attach to custom-form-factor switches with commonplace guts. Such an Ethernet switch has two 100 Gbps QSFP28 ports for uplink to the wider network. Other versions with 400 and 800 Gbps uplinks are roadmapped, and drives with SFP56 are expected.
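As a rough illustration of how those uplinks size against the drives (the 24-drive chassis is just an assumed example, not any specific product):

```python
# Hypothetical EBOF chassis: N drives at 25 Gbps each, two 100 Gbps uplinks.
drives          = 24        # assumed chassis size, purely illustrative
drive_rate_gbps = 25        # one SFP28 port per drive
uplink_gbps     = 2 * 100   # two QSFP28 uplinks

downstream_gbps = drives * drive_rate_gbps
print(f"Drive-side capacity : {downstream_gbps} Gbps")
print(f"Uplink capacity     : {uplink_gbps} Gbps")
print(f"Oversubscription    : {downstream_gbps / uplink_gbps:.1f} : 1")
```

Traffic that stays inside the chassis sees the full per-drive bandwidth; only what heads to the wider network contends for the uplinks.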
I have the same question
Recently observed this
IT IS!