General discussion of NVMe Performance

Solaris is free.

You can literally download it here:
https://www.oracle.com/solaris/solaris11/downloads/solaris-downloads.html

re: “something very different than Linux”
Yes? So?

ZFS (on Linux) is very different from ext3, ext4, or XFS, yet people are jumping onboard the ZFS on Linux (ZoL) bandwagon, despite the fact that ZFS was originally released for OpenSolaris in November 2005 (making it 14.5 years old) and was officially rolled into the “main” Solaris 10 build/installer in Solaris 10 6/06 (U2) (making it 13.5 years old).

So, if ZFS is the goal, ZFS on Solaris is vastly more mature than ZFS on Linux.

And ZFS is vastly different from ext3, ext4, and XFS, and yet people are learning how to work with ZFS, so…

“…or even other Unix systems…”
Solaris comes from SVR4, which comes from Unix versions 1-4; the BSD part of the tree comes from Unix versions 5-6. The two branches split/diverged very early on.

Most of my mainframe Unixes (e.g. AIX, HP-UX, Solaris) are SVR4-derived. SGI IRIX is an interesting one because it’s classified as SVR4 with BSD extensions. (Sources: https://upload.wikimedia.org/wikipedia/commons/7/77/Unix_history-simple.svg, https://en.wikipedia.org/wiki/IRIX)

Oracle has a very nice 304-page ZFS administration guide (https://docs.oracle.com/cd/E37838_01/pdf/E61017.pdf) that covers pretty much everything you’ll need to know about administering ZFS on Solaris.
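
To give a flavor of what that guide covers: day-to-day ZFS administration really is just a handful of commands. A minimal sketch (the pool and dataset names here are made up):

```
# Create a mirrored pool from two disks (Solaris c#t#d# device names)
zpool create tank mirror c0t1d0 c0t2d0

# Create a dataset and turn on compression
zfs create tank/projects
zfs set compression=on tank/projects

# Check pool health
zpool status tank
```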

The oldest ZFS Administration guide that I can find/that’s still available is for Solaris 10 9/10 (https://docs.oracle.com/cd/E18752_01/html/819-5461/index.html).

My point is that it’s really not that hard to learn and run Solaris. Sun/Oracle has EXCELLENT documentation.

I used to run Solaris 8 and then Solaris 9 on a Pentium MMX 166 MHz processor, because it wasn’t difficult to run and it didn’t take much hardware to run it, either. (I still run Solaris, more specifically Solaris 11.3, in a VM, because it’s simple/easy and it doesn’t take much to administer the system.)

Linus is running Unraid for his servers but generally LMG is a Windows environment.

Solaris is free, and it runs on x86/x64 commodity hardware. (Oracle still sells x86 servers, but you don’t have to buy their hardware to run Solaris.) Like I said, I run Solaris 11.3 in a VM that runs on an Intel NUC7i3BNH (Core i3-7100U, 4 GB of RAM).

this argument/talk of ZFS seems OT to me as well…

Another topic should be started on the LTT video.
This thread was about troubleshooting EPYC and NVMe (for those that wanted more detail)


but until a mod cleans all this OT up

Why ZFS…

IMO

they started with Storage Spaces… eww

then they called in Wendell for help to figure out the errors/issues

part of troubleshooting involved trying other file systems.

ZFS being something Wendell is (kinda/sorta/maybe) into…

So an argument can be made that ZFS won’t be in the final setup LTT uses… but it’s something familiar that you can use as a baseline.

but after all the other issues… there are compounded issues with using ZFS on that system (hence, see the ZFS issue tracker on GitHub)

@alpha754293 @OrbitaLinx @thexder1 sorry about that guys, you can discuss further here if you want.

You really should read the license: you can download and install it for free for developing, testing, prototyping, and demonstrating your applications, and not for any other purpose.

I had run Solaris for a while and it is definitely far different from Linux. As I understand it, Linus has people who know Linux at least fairly well, but it takes quite a lot, even with the documentation, to come up to speed on Solaris. Also, ZFS on Linux is quite mature, as it is based on much of the code that was used in BSD for ZFS, which has been around for quite a while. The ZFS on Linux project (OpenZFS now) is also being developed very rapidly, and FreeBSD even dropped their own ZFS implementation in favor of the one from ZFS on Linux because it is so much more mature.

If you want a familiar OS and don’t want to retrain or maintain personnel for Windows, Linux, and Solaris, then in my opinion ZFS on Linux is a good option. Also, Linux is kind of lacking in the newer, more advanced file systems like ZFS, especially since BTRFS development is so slow and it is still not at a stage where the developers say it is production ready. ZFS may not be quite there yet either, but based on my use of ZFS on Linux, it is far closer than BTRFS and works quite well.

Also of note: I have used Linux and BSD quite a bit, and when I used Solaris, it definitely seemed like none of my knowledge of those two helped me get started with it.

Here’s the thing about that, though: I tried looking up the commercial licensing cost for Oracle Solaris, and the only thing that I’ve found is the pricing for their premium support (https://shop.oracle.com/apex/f?p=DSTORE:PRODUCT:::NO:RP,6:P6_LPI,P6_PPI:27242443094470222098916), which, for 1-4 sockets, is $1000/year.

I’ve reached out to them to see if I can get further clarification in regards to that.

To your point, though: they just spent how much on their hardware, and now they’re going to cheap out on a $1000/year support contract from Oracle, through which they would be able to get the help they need from Oracle engineers to deploy their server for exactly this purpose?

It’s like saying that you’re going to buy a Ferrari and then only put 87 Octane gas in it. Like…really??? (And then wondering why you might run into problems with the Ferrari after doing that for a while.)

So…of course, it’s entirely up to them. But if you’re actually running LMG, and you’ve already invested, let’s say, between $9,600 and $12,000 in the used NVMe 3.0 x4 U.2 drives that were coming from places like Facebook, not to mention the cost of the server, the CPU, RAM, etc… – I mean…certainly, by no means are they required to do that.

But if they really want ZFS, why not deploy it from the source that developed it in the first place, given how crucial and critical performance is to LMG’s operation?

That just seems kinda silly to me.

(What’s also sillier, I think, is that there was no mention of Wendell suggesting that LMG should be running NVMeoF, or something like SMB Direct, NFSoRDMA, or iSER, when you’re trying to manage that many NVMe devices simultaneously.)

Below is a picture of the headnode to my cluster, which is running four HGST 6 TB 7200 rpm SATA 6 Gbps HDDs in RAID0 on a Broadcom/Avago/LSI MegaRAID SAS 12 Gbps HW RAID HBA on an Intel Core i7-4930K with 8x 8 GB Crucial DDR3-1600 RAM (64 GB total):

As you can see, I’m pushing close to 1 GB/s (8 Gbps) transfers for my current computational fluid dynamics (CFD) run whilst using “spinning rust”.

(CentOS 7.7.1908, NFSoRDMA, RAID0 array formatted with XFS, using the “in-box” drivers for my Mellanox ConnectX-4 card. I didn’t bother doing any tuning for XFS, NFS, or IB, because my network/storage workload varies too much to settle on one set of tuned parameters other than the defaults.)
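
For anyone curious, getting an NFSoRDMA mount like that going is surprisingly short. A rough sketch of both ends (the host name, export path, and mount point are placeholders; the RDMA module name varies by kernel version — rpcrdma on newer kernels, xprtrdma/svcrdma on older ones):

```
# Server: load the RDMA transport and tell knfsd to listen on it
# (port 20049 is the conventional NFS/RDMA port)
modprobe rpcrdma
echo "rdma 20049" > /proc/fs/nfsd/portlist

# Client: mount the export over RDMA instead of TCP
modprobe rpcrdma
mount -o proto=rdma,port=20049 headnode:/export/scratch /mnt/scratch
```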

If I can hit almost 250 MB/s on spinning rust (write speeds as it is transferring data back), what do you think that I would be able to do with NVMeoF on ZFS in Solaris?

I mean…either you want it to work (which clearly ZFS on Linux didn’t really work for them) and for it to be stable, or you want to save the $1000 and then have it…well…result in this.

If it were me, and the whole point of the exercise was to build a blazing-fast NVMe-based SSD server so that my editors would be able to edit directly off of it, then given all of the other costs that have already been invested into it, I would much rather pay the extra $1000 so that I could call up Oracle’s engineers and get some help for when things aren’t working – again – as was the case here. Except that here it was ZoL, which meant that Linus had to call up Wendell instead.

And it is quite widely recognised in the industry that ZFS on Solaris is far more mature and stable than ZFS on Linux (given that ZFS on Solaris is at least 13.5 years old).

Lol

Having been witness to Oracle support… it’s a joke. Most of their products seem half-assed.

I’d bet ZFS on Solaris has not addressed performance issues on NVMe yet, either.

I don’t disagree that Solaris is quite different from Linux, but my point is: does that even matter?

If that were the point, then they shouldn’t even be going with ZFS (and/or Wendell shouldn’t be suggesting it), because ZFS is about as different from ext3/ext4/XFS as Solaris is from Linux.

If the point is that they shouldn’t use Solaris because it’s different from Linux, then likewise, I can apply the same logic to ZFS, because it’s different from ext3/ext4/XFS.

This point of logic applies equally to the OS as much as it does to the FS.

“Also, ZFS on Linux is quite mature, as it is based on much of the code that was used in BSD for ZFS, which has been around for quite a while.”

Mmm…debatable. So…at least in Ubuntu, ZFS on root (as part of the native installer) – I think that they’re JUST getting around to that now.

ZFS on root for FreeBSD was on 20 January 2014.

ZFS on root for Solaris was in Solaris 10 10/08.

ZFS itself as a user FS though, in FreeBSD, has been in it since 27 February 2008.

However, as mentioned, ZFS in Solaris has been present since Solaris 10 6/06.

So ZFS on FreeBSD has consistently lagged behind Solaris in terms of ZFS deployment, in some cases by quite a wide margin (~6 years between being able to boot off of ZFS in Solaris and being able to do so in FreeBSD).

I’m not so sure that FreeBSD dropped their own branch of ZFS (in favor of the shared, non-Solaris one) necessarily due to maturity.

ZFS pool version 5000 (the feature-flags sentinel) does address the question of compatibility between the different open-source pool implementations; Oracle is still keeping its own pool versions separate. And as with just about everything, I’m sure there are pros and cons to each strategy.
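
If you want to see which side of that split a given pool sits on, both implementations will tell you. A quick sketch (“tank” is a placeholder pool name):

```
# Oracle pools report a plain version number; OpenZFS feature-flag
# pools report version 5000 (often shown as '-')
zpool get version tank

# List the versions/feature flags the running implementation supports
zpool upgrade -v

# On OpenZFS, inspect the individual feature flags on a pool
zpool get all tank | grep feature@
```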

“If you want a familiar OS and don’t want to retrain or maintain personnel for Windows, Linux, and Solaris, then in my opinion ZFS on Linux is a good option.”

I think that it depends. My experience is probably vastly different from that of the majority of people, because whilst everybody was jumping onto the Linux bandwagon in the early 2000s, I jumped onto the Solaris for x86 bandwagon instead. My dad, who used to work in the computer/server rooms of banks, showed me that one potential career path for me could be as an SCSA (Sun Certified System Administrator) for financial institutions, and/or any other institution that still has mainframe-class systems at the core/heart of its business, which is true for a LOT of banks.

Yes, a lot of banks are slowly moving over to Linux as well, but there are also a LOT of banks that are STILL very much dominated by SVR4 Unix, because when it comes to calculating the interest on 150,000 accounts, most SVR4 Unixes would barely even flinch. Linux is getting better, but there are still some major hurdles for Linux to clear before it displaces SVR4 Unix in many data centers and server rooms.

Therefore, from my vantage point, I am already familiar with Windows, Linux, and Solaris, so to me, this is a no-brainer.

And like I said, I’m not particularly the smartest or the brightest when it comes to sysadmin/Comp Sci stuff, but Solaris, with its excellent documentation, makes it easy enough that even I can administer a really simple system like a storage server.

(Sidebar: It’s too bad that there isn’t a press article about it, otherwise I would be able to point you to an example of it; I can only share what’s been released by our PR and Communications department.)

“…but based on my use of ZFS on Linux, it is far closer than BTRFS and works quite well.”

I don’t doubt that.

From my own experiences with ZFS on Solaris, there are still critical flaws with it – and the biggest one that I gripe about is the fact that there are no bit-level readers if your pool degrades or goes offline. That means that unless you have a backup somewhere else (on tape or on other drives), any attempt to recover the pool has you working against live data that you can destroy, quite easily, if you’re not super careful with it.
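
To make that concrete: when a pool won’t import cleanly, about the only tools you get are variations of zpool import itself, and those operate directly on the live on-disk data. A sketch (“tank” is a placeholder):

```
# Safest first step: a read-only import, so nothing on disk is modified
zpool import -o readonly=on tank

# Recovery mode: discard the last few transactions to find an importable state
zpool import -F tank

# OpenZFS also has -FX ("extreme rewind"), which throws away even more
# transaction history -- exactly the kind of irreversible step I mean,
# since there's no offline bit-level reader to fall back on
zpool import -FX tank
```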

“Also of note: I have used Linux and BSD quite a bit, and when I used Solaris, it definitely seemed like none of my knowledge of those two helped me get started with it.”

Umm…I think that it depends.

I don’t disagree that Solaris has a bit of a learning curve to it. But again, this is where Sun Microsystems’ (now Oracle’s) documentation about the system really becomes important – vitally crucial, even – to the successful deployment of the system.

And I will also say that Solaris 8 is definitely a lot easier to learn than, say, Solaris 11, because everything was just “simpler” back then. There wasn’t as much “stuff”.

For me and my journey, Solaris was interesting because of the things that I wanted to do with it. With it being a closed-source, proprietary OS, if an application was noted as running on Solaris, it ran on Solaris without any questions or issues.

Sadly, the same can’t be said of my experiences with Linux. Commercial (and therefore closed-source) applications that are distributed as RPMs will not and do not run on Linux distributions derived from Debian. (Even though both are “Linux”.)

And there are some open-source applications that run very well on Debian-derived distributions but struggle to compile on RPM-based distributions. (e.g. I NEVER got OpenFOAM to compile on CentOS despite following their instructions on how to do so. It has consistently failed for me.)

So yes, having Solaris/SVR4 Unix as a foundation made jumping into basic sysadmin in Linux a lot easier. But when things don’t work (unlike in the Solaris world), you spend a LOT of time googling why they don’t work, and/or how to just fix the darn thing so that you can move on/get on with it.

I’m running CentOS on my cluster headnode now because the apps that I use don’t run on Solaris anymore. And rather than having two separate systems – one just to manage the cluster and a separate one to be the high-speed scratch disk – I tasked my headnode with performing both of those jobs for now.

But if I were going to expand, then I would probably have a separate distributed parallel cluster file system storage server (either Lustre on ZFS or GlusterFS on ZFS), at which point, since they would just be storage nodes, those nodes could run Solaris. But that might be a future state/plan.

Also, like I said, I still use Solaris, but it’s relegated to really simple, dummy tasks (being a dumb web server), because I can install the OS, edit/configure two files, and my web server will be up and running. Done. It’s really quick and easy to do.
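
For what it’s worth, that “install, edit a couple of files, done” claim maps to something like this on Solaris 11 (a sketch from memory; the exact package and SMF service names may differ between releases):

```
# Install Apache from the IPS repository
pkg install apache-24

# Point the config at your content (the file to edit)
vi /etc/apache2/2.4/httpd.conf

# Enable and start the service via SMF, then verify
svcadm enable apache24
svcs apache24
```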

There are some commands (like digest) that I miss from Solaris, but it’s super minor.
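
(For anyone who hasn’t used it: digest is Solaris’s single front-end for checksums, where Linux uses one tool per algorithm.)

```
# Solaris: one command, pick the algorithm
digest -a sha256 /path/to/file.iso

# Linux equivalent
sha256sum /path/to/file.iso
```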

Still, if a ZFS storage server is what you want, I personally would be deploying Solaris, because I’ve spent years using it already and it’s really not that difficult for me. I’d have to cross-check the compatibility with my Mellanox 100 Gbps IB NIC now, and make sure they support RDMA and NFSoRDMA and/or NVMeoF, stuff like that – but that would be part of the pre-deployment research. (And I would think they’d support it, given that they used to have a Top500 supercomputer installation at TACC.)

I know that Red Hat offers enterprise subscriptions/support contracts.

How are they?

It looks like, since SuSE has been bought and sold a few times, they offer 1-2 socket support with 1-2 VMs, standard support, for 670 Euros/year. I’ve never worked with their support, although they did reach out to me a few years ago about selling it to me and getting my take on SuSE, and there were problems with it.

The mgag200 driver was having a memory-leak issue with X11 that was causing X to consume all 128 GB of memory that I had in my system. And it was the folks on the Xorg mailing list who helped me figure it out and ultimately resolve it, not the folks from SuSE.

Soo…there’s that.

I don’t doubt that ZFS on Solaris is a good option, but in my experience it was far quicker and easier for me to come up to speed on ZFS than on Solaris, so I would still prefer to run ZFS on Linux or BSD installations over Solaris installations.

I would also point out that creating your own system that you can support and make multiple videos about is likely far more important to Linus than getting an enterprise support package. In most environments I would definitely agree that enterprise support is the preferred route to go, but it will, as always, depend on how good that support is vs. what your personnel can do, and the time investment needed on both ends of that. In my experience, having personnel that can do most of the support results in far faster turnaround times on issues than just about any support package; being down for days while going back and forth with support, asking stupid questions (and usually the same questions over and over again), is far more costly than just fixing it yourself.

Edit: Also of note, I was unaware that the license was so cheap; I remember looking before and seeing prices more like $10,000/year or more. I still think Solaris is not the best idea for them, but that does make it far more appealing than I thought it would be.

I don’t think SuSE is a good example here. I have not dealt with Oracle support, but I have dealt with Red Hat support a little and had a good experience there. Most support that I have dealt with, though, has been so bad that it ends up being worse than no support (you talk to them hoping for some actual help, but instead spend weeks or months just repeating the same thing over and over again with no progress). Unless I know that the support is actually good, I generally will not count on them being helpful, and I try to fix the issue myself after creating the ticket.

Also worth noting, as you happened to show in your post, is that Linux has many communities around the software involved that can, and in many cases will, help out without requiring payment for support. That does not exist for Solaris as far as I know. So with Linux you can likely get support from dozens of places, some paid, some not; for Solaris you generally get Oracle support or support from a very small community.

“That does not exist for Solaris as far as I know”

Again, it depends on what you’re doing.

Please keep in mind that the entire ZFS project started in open source (perhaps somewhat ironically) and was first released in the open-source version of Solaris (OpenSolaris), which meant that the original developers had either a Yahoo group or a mailing list (or both) as community places where you could ask questions and get help.

Since I no longer use ZFS for main, production data hosting, I also haven’t run into the kind of critical or catastrophic issues that I used to hit with ZFS that required assistance. Part of the reason why my Solaris system (which already runs ZFS-on-root – something ZoL is still experimenting with and BSD has had since 2014) is relegated to simple, “dummy” tasks is that ZFS has proven not stable enough for me to deploy on my production servers.

(cf. https://www.delphix.com/blog/openzfs-pool-import-recovery)

(I’ve been where that guy has been as well.)

So I can’t speak to the current state, but there definitely used to be a community/user group for ZFS users and ZFS developers, much like many FOSS Linux projects have.

But again, my experience with Oracle support has been ok.

They’re not the most timely in terms of getting back to you, but if you ping them enough, they will respond. In my case, though, the issue still resulted in a total loss of data on the zpool/ZFS array, akin to what’s outlined in the blog post above, which to me points to a fundamental and critical flaw in the design and architecture of the system.

I think that blaming the distro for failures of something that’s shipped with it is entirely fair.

Linux people will often comment on or complain about software failures that are inherent in Windows, and the same is true with macOS (and vice versa).

I’m of the opinion that if you ship it with the OS, you should have tested it and validated it, else, don’t ship it with the OS.

Red Hat support – again, I think it varies.

If you submit a bug to their bug tracker, sometimes it’ll just sit there and no one will look at it. (This happened when I filed a bug where an update published for CentOS (which is built off of RHEL) resulted in SIGTERM (signal 15) being sent to all active network connections whenever you changed runlevels. To my knowledge, nobody ever read the bug, acknowledged that it exists, or did anything with it.)

Of course, that’s different from having an actual service contract with them, but bugs like that shouldn’t require you to pay them money in order to get them to pay attention.

I only really said that because SuSE as a company has been passed around a bit, and I don’t think it is representative of commercial Linux distros anymore. I agree they should be better about it, but it is hard to keep up with that when going through several transitions like they have.

Edit: I will also point out that I was specifically talking only about paid support, as that is what the Solaris vs. Linux discussion here was primarily about – other than my comment about not being aware of good community support for Solaris, that is. As for community support, that will vary quite a bit, and I have not had much luck with any that I have tried to get help from, other than Gentoo’s, though I have avoided them for quite a while, preferring to learn and figure things out on my own.

But that’s the thing – in its as-configured state, the people that work at LMG weren’t able to support the system. This is why Linus called Wendell.

That just provides the evidence that contradicts your statement.

They couldn’t support the system, in its as-configured state, by themselves.

In fact, technically, the system failed twice – the first time when they tried to do what they know and/or what’s familiar to them. That failed.

And then the call went out to Wendell, and that’s when the whole thing about ZoL came up. And even that still failed, which is how they ended up with the configuration that they’ve got now – which doesn’t actually, technically, achieve their originally stated goal/objective.

Therefore, given that, whether they’re able to set up/configure and administer the system themselves is functionally irrelevant, per that remark; whether it is Windows, ZoL, or Solaris, your statement will still hold true across all three platforms.

Therefore, I am of the opinion that if that’s the case, you pick the platform that will actually achieve what you intended to achieve in the first place; and if there is a desire to cram ZFS onto that system, that’s fine – then pick the “best” ZFS system that will actually deliver on the stated objective.

Again, at the conclusion of that video, they had to compromise so that they could just get it up and running, rather than delivering an actual solution per the original SOR.

Can you show evidence that Solaris would have been any better in this scenario? I don’t think it would have been, and Linus would likely not have known anyone he could turn to for help other than maybe Oracle – and based on what I have read, that support would probably have taken far longer to provide any help, and the help might have been “get different hardware.”

Thanks for linking that thread. Some interesting work going on there.

You had also made this statement previously:

“Also worth noting, as you happened to show in your post, is that Linux has many communities around the software involved that can, and in many cases will, help out without requiring payment for support.”

So…hence my confusion: you mention community-based support – again, per your own statement – in the midst of a discussion around paid support.

I’m subscribed to the xorg mailing list (for Linux), the OpenMPI mailing list (predominantly for Linux), and the GlusterFS mailing list (also for Linux) and they’ve been helpful.

There was another mailing list that I had tried to join in order to ask some people a question about a problem that I was having, but that mailing list was so poorly managed/operated that I don’t even remember what it was for. They pretty much literally just outright ignored you. I forget if it was the NFS mailing list or something (I was trying to find/figure out where I might be able to find a pNFS deployment guide, which I still haven’t found. pNFS is talked about from the client’s perspective, but as far as deploying a pNFS server goes, I don’t see any sort of deployment guide.)

But they have it. And I’ve used SuSE before, and like I said, their sales team reached out to me to get my feedback about the OS and to see if I had any questions, so I just told them what my barriers to implementation and adoption (and therefore to their sales) were.

As I’ve mentioned, I wish I had that kind of NVMe hardware to play with, because then at least I would be able to test it for them, write the deployment guide for LMG, and teach/explain it to their guys (who, don’t get me wrong, I think are very competent people in the realm of technology, so I think that they would be able to pick up what they need to get the system up and running).

And also like I’ve said, the only reason why my cluster headnode is running Linux right now is that I don’t want to have one machine that manages my cluster (i.e. serves as the headnode) and a separate machine for the scratch disk system (running over 100 Gbps IB).

In theory, yes, I could split those two responsibilities across two separate machines, whereby one system would run ZFS on Solaris with the sole responsibility of dishing out/serving up data, whilst the Linux headnode remained the manager for my cluster. But again, since NFSoRDMA runs on CentOS with the in-box Mellanox drivers, nothing drives me to a need to split these two responsibilities onto one system each.

Just because I can doesn’t always necessitate that I should.

Therefore, to answer your question: no.

But it isn’t for lack of trying. If I were to set it up for the sole purpose of proving a point – yes, I could do that, but that’s ALL it would do, and I’d likely get flamed for it anyway.

I used to have an old SunFire X4200 server that we used for LAN parties when I was in college; it had four 73 GB 10k rpm 2.5" SAS drives, serving up the games and the updates over the system’s quad GbE NICs, and going by iostat, the ZFS pool/array barely made a dent with over 40 players connected to my system simultaneously.

Yes, it’s a much older example, but again, ZFS can do it. And so long as Solaris 11.4 supports NVMe devices and RDMA, then you can create ZFS pools with them. (For the RDMA portion, their installation at TACC suggests that it should: they’ve had IB for a long time, and they still repackage/resell IB hardware as far as I can tell (last I checked was maybe 6 months ago). Sun also sold NVMe SSD AICs, which should address the point about NVMe support.) As long as the OS supports the devices and registers them under its c0t1d0 nomenclature scheme, you can build ZFS pools out of them.
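
In other words, if the OS enumerates the drives at all, pool creation is no different from spinning disks. A sketch with made-up device names (I haven’t been able to verify this on 11.4 with U.2 NVMe drives myself):

```
# List the disks the OS has enumerated
format < /dev/null

# Build a pool striped across mirrored pairs of the NVMe devices
zpool create nvmetank \
    mirror c0t1d0 c0t2d0 \
    mirror c0t3d0 c0t4d0
```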

Again, I would have LOVED to have been able to play with that, and to write the deployment guide for LMG, if I were given the chance. But I wasn’t, so I can’t generate the data/evidence for it.

In regards to them knowing or being able to find somebody for help – well…I mean…that’s kind of up to them.

I’ve been posting in their forums that I run my micro-cluster with NFSoRDMA, so…they could have reached out to me and asked, “hey…we’re trying to do this. Do you have any ideas of how we might be able to accomplish this?” But they didn’t.

And I don’t fault them for reaching out to Wendell. Wendell is also very capable in regards to his knowledge and understanding of technology.

But ZoL (and even ZFS on BSD) does not necessarily exhaust the possible solutions.

This is also part of the reason why SVR4 Unix sysadmins are a dying breed: people just typically don’t play/screw around with it, and therefore don’t educate themselves on it. When I started learning it, I was in my teens, and I figured that if what I wanted to do (engineering, working at one of the automotive OEMs in the SE Michigan area) didn’t pan out, I would be able to quickly switch gears and come up as an SVR4 Unix sysadmin, because I had been training on and using this stuff since then.

It is also for that reason (other than that, sometimes, stuff just doesn’t work in Linux) that the jump to Linux was so much easier for me. And from SVR4, I was also able to jump to the BSD-based/derived macOS as well. Also like I said, a lot of Linux folks are just getting around to testing/using ZFS on Linux over the past couple of years, so it’s the new, shiny thing in the Linux world. From where I sit, ZFS is a teenager. Whilst a lot of people are really excited to say “hey! check out this new thing called ZFS”, I’m like “yeah? so?”

uptime: 13 years 180 days.

Yeah, I’ve been using it already.

re: hardware
Ummm…maybe. I mean…it will depend on what’s published on the Solaris x86 HCL.

But I mean…that’s a community-built list (people who want to run Solaris test it on their hardware to determine whether it will or won’t run), so…

So, if the intent is to build a server that you have confidence will run Solaris, then yes, your first stop prior to capex/hardware acquisition is a trip to the Solaris x86 HCL. (Conversely, as Wendell has discovered, not all x86/x86_64 hardware is fully supported by Linux either; cf. booting with mce=off in Linux until the 5.5 or 5.6 kernel is released.)
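
(For reference, that workaround is just a kernel command-line parameter. A sketch for a GRUB2-based distro; the exact file paths vary by distro and BIOS/EFI setup:)

```
# /etc/default/grub -- append the parameter to the kernel command line
GRUB_CMDLINE_LINUX="... mce=off"

# Then regenerate the GRUB config and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg
```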

And Windows has problems with managing 128 threads, so…there is definitely uncertainty all around. (No word yet on trying to run Hackintosh on the 3990x/AMD EPYC 7742.)

If I were presented with the opportunity to try and I failed, that’s fine. At least I would know. But the opportunity never presented itself, and therefore it’s still in the “maybe” bin, until someone can try it and either rebin it as a “yes” (it solves their SOR) or a “no” (it doesn’t).

Right, I don’t really care what their sales people say, or that they tried to sell to you, or even that they have the product for sale. The fact here is that SuSE has gone way downhill in the past decade or so, and because of that they are a bad example when comparing the paid support provided by different companies.

Considering that Linux gets far more development than Solaris these days, both from large enterprises trying to do these things and from individuals who want to test things like this on the latest hardware, I suspect Solaris will have the same problems as Windows and Linux, and very possibly worse. The difference here is that, thanks to Wendell, we have workarounds in Linux. Does Solaris have an option to set block devices to polling instead of interrupts? Does Solaris have a fallback, like Linux does, to poll block storage if an interrupt is not returned after a period of time?
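
On the Linux side, the knobs I’m referring to look like this (a sketch; I’m assuming these specific parameters are the relevant ones, and they require a reasonably recent kernel):

```
# Kernel command line: allocate dedicated polled I/O queues for NVMe
#   nvme.poll_queues=4        (available since roughly kernel 5.0)

# Lengthen the command timeout before the driver gives up waiting
# on an interrupt:
#   nvme_core.io_timeout=255

# Per-device: check whether polled completions are enabled
cat /sys/block/nvme0n1/queue/io_poll
```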

I knew about ZFS quite a while ago, and that is one of the big reasons why I tried to get into Solaris. The reason it seems new is that support for ZFS on Linux is growing; it stagnated for a long time because of the incompatible licensing that prevents it from being merged into the kernel, and for a long time it was only usable via FUSE, which is really bad. So the kernel module that is available now, which has only in the past maybe 3 years become stable enough to use, is a big deal for Linux users and deserves to be talked about, because there is a lot of new development going on there, and they could easily pass up the Solaris implementation in a few more years (I think they have already added some feature(s) that the Solaris implementation does not have).

Yeah, and? That is not news to anyone; I can find plenty of x86 hardware that does not work well in Windows. Just because there is hardware that is not well supported, or not supported at all, in an OS does not matter at all unless you want to use that specific hardware. On the Linux side, few hardware vendors support Linux, so most of the drivers end up being implemented by the community, who may not have all of the information required to build them. On BSD that situation is worse, and on Solaris it is even worse, at least when talking about x86 and ARM.