How come, the more evil Oracle become, the brighter the memory of Glorious Sun shines?
No idea about future OpenZFS, but the legacy was from a golden age…
Is it just me, or has this been the year of Linux self-imploding? This feels like way more drama than we usually have in these projects.
Gotta keep up with what Microsoft does
That would be ideal. But given how btrfs is in bug limbo and bcachefs is self-destructing, I wouldn’t get my hopes up without lots of corporate funding. It seems this area is just too difficult for the current FOSS ecosystem.
But technically speaking: it’s the code that’s the problem, not the design. If no original pre-fork code remains, and all relevant post-fork contributors agree, then there’s a shot. But that’s just reality, not legality; a rich and experienced adversary could easily crush you in the legal system no matter how right you are.
Huh, didn’t realise this was already posted…
OpenZFS claims to have a patch to work around it, unless I am reading it wrong:
We already have a prototype changeset against next-20250911, which includes this change. Short version: it’s no big deal.
kernel 6.18 removes write_cache_pages · Issue #17751 · openzfs/zfs · GitHub
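For context, what the kernel offers in place of write_cache_pages() is the folio-based writeback_iter() loop, so the migration is mostly mechanical. Here’s a minimal sketch of that pattern, assuming the mainline writeback_iter() signature - this is not the actual OpenZFS change, and my_write_folio() is a hypothetical per-folio callback:

```c
/* Sketch of the writeback_iter() pattern that replaces write_cache_pages()
 * in recent kernels. Illustrative only - not the actual OpenZFS patch.
 * my_write_folio() is a hypothetical per-folio writeback function. */
static int my_writepages(struct address_space *mapping,
			 struct writeback_control *wbc)
{
	struct folio *folio = NULL;
	int error = 0;

	/* writeback_iter() walks the dirty folios selected by wbc; passing
	 * the previous folio back in advances the iteration, and it handles
	 * tagging, cyclic range writeback, and error bookkeeping for us. */
	while ((folio = writeback_iter(mapping, wbc, folio, &error)))
		error = my_write_folio(folio, wbc);

	return error;
}
```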
Oh for sure.
I’m not saying it’s realistic, just that to ensure you don’t get sued for copying the ZFS implementation, you’d need to reverse-engineer a spec, and then code to the spec.
Otherwise you’re still legally in limbo
IMHO, if Oracle were going to sue over ZFS, they’d have done so already.
I think given the issues they’d need to deal with if they were to try to keep ZFS to themselves, they’d be shooting themselves. Not just in the foot - in the head.
Oracle on Linux is at this point basically their bread and butter against Microsoft, and making that impossible for themselves to maintain via a fork of the kernel would be suicidal.
I’m curious if this might be a way to force Oracle to relicense their part of the code.
In general I’m surprised that the OpenZFS folks didn’t switch to a dual license for new contributions, and for existing code where the contributors agree.
It would have made relicensing attempts simpler down the line (i.e. now).
The company was literally founded to write software for the CIA and the CIA was their only customer for the first three years of their existence. A lot of people don’t know that.
There’s a bit of history here: Sun was getting its lunch eaten by Linux. Expensive SPARC/Solaris boxes were being replaced with inexpensive x86/Linux.
In its death throes Sun did a very interesting thing: it moved Solaris to an open source licence, which included DTrace and ZFS - Linux had nothing vaguely comparable at the time.
Sun made its money selling hardware and software bundled together, along with expensive maintenance contracts for said bundles.
The SPARC servers it sold were really good; they made x86 servers feel like what they were - bargain-bin, home-grade hardware dressed up as “enterprise” and stuck in a rack.
Had Sun managed to retain some popularity with its newly open-source OS, it would have served as a means to ship more hardware and maintenance contracts.
Solaris 10 (and then 11) was more advanced than Linux at the time, they had some pretty good features and were ahead of their time not just with filesystems but with containers amongst other things.
The pickle you find yourself in today dates back to choices made back then. Sun could have gone with a GPL-compatible license, which would have allowed DTrace, ZFS and more to go into Linux. Instead they invented their own incompatible license, the CDDL.
(Of note, they did pick the GPLv2 for their state-of-the-art, highly concurrent CPUs.)
I didn’t work at Sun so was not privy to the conversations but I think they would have known it was incompatible and the implications of that.
Had it been compatible, all the big market differentiating features of Solaris would have ceased to be differentiating as Linux absorbed them, so from a Sun point of view incompatibility might have been the best business path.
But alas the giant of Sun Microsystems no longer exists. It was a great shame, I enjoyed my time with Sun systems and working with Sun employees.
And now it’s owned by Oracle, so there’s no chance of the license OpenZFS inherited being changed.
Yes, I still remember the moment I opened an M5000 for the first time and thought, “wtf, this thing is built different!”
Oracle bought my company shortly after Sun, I think the first of February 2010 was my first day at Oracle.
I actually wanted to leave immediately, but I ended up staying for almost 12 years.
The last four were too many; the job wasn’t fun anymore once things really took off with the cloud.
Was reading Linus Torvalds - Wikiquote and came across a post by Torvalds that has his thoughts on ZFS and Linux circa 2020:
Wow he was seriously out of touch when he posted that.
I am fine with ZFS staying outside of the Linux source tree. It may have its benefits in the long run, from a maintainer’s perspective.
Regarding the performance claim: ZFS works best with spinning hard disks. It batches random writes into a big transaction group, so random writes hit the disk as sequential writes, while random reads can be taken care of by cache devices.
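To illustrate that batching effect, here’s a toy sketch - nothing here reflects real ZFS internals, and every name and offset is made up. Because ZFS is copy-on-write, the blocks dirtied during a transaction group can all be allocated contiguously at sync time, turning scattered logical writes into one sequential pass over the disk:

```c
/* Toy model of transaction-group batching - NOT real ZFS code.
 * Copy-on-write means dirty blocks buffered during a txg can be
 * allocated back-to-back at sync time, so random logical writes
 * reach the disk as one sequential run. */
#include <stdio.h>

struct dirty_rec {          /* one buffered application write */
	long   logical_off;     /* where the app thinks it wrote */
	size_t len;
};

int main(void)
{
	/* random-offset writes accumulated in RAM during the txg */
	struct dirty_rec txg[] = {
		{ 7340032, 4096 }, { 1024, 4096 }, { 524288, 4096 },
	};
	long alloc = 104857600; /* next free device offset (made up) */

	/* txg sync: lay every dirty block out contiguously, recording
	 * the new physical location in metadata (elided here) */
	for (size_t i = 0; i < sizeof txg / sizeof txg[0]; i++) {
		printf("logical %ld -> physical %ld (%zu bytes)\n",
		       txg[i].logical_off, alloc, txg[i].len);
		alloc += (long)txg[i].len; /* purely sequential allocation */
	}
	return 0;
}
```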
Its NVMe support is still a work in progress; however, most benchmarks are done on NVMe.
ZFS has features that ext4 or XFS don’t have, so it’s not fair to put them in the same benchmark. Btrfs and bcachefs are better contenders, but the engineering hours going into them are nowhere near what ZFS has had. They are far less mature.
I mean, they kind of were.
When Sun and their SPARC instruction set were at their height in the early through mid-90s, Intel didn’t really have “enterprise” products per se. The Pentium Pro came out in 1995 and started to change that, but even it was not on par with something like a Sun UltraSPARC server.
The Pentium Pro had a slight advantage in 32bit integer workloads that could help in databases and basic web serving, but the Sun/HP/IBM/Digital offerings absolutely killed it in floating point workloads. Like it wasn’t even close.
The Pentium Pro also had nowhere near the I/O capability of the traditional server makers. It did have the advantage of being able to run x86 software in a world that was rapidly becoming more and more “WinTel” on the client side.
You could run a Windows NT server on a Pentium Pro; you couldn’t do that on the traditional RISC servers (NT’s short-lived Alpha port aside).
Intel didn’t even have a Xeon yet back then. (The first one came in 1998, I believe)
Sun wasn’t really competing with Intel back then. They were competing with the likes of Digital/DEC’s AlphaServers, IBM’s RS/6000 / eServer pSeries and HP’s 9000 series servers, all of them some form of RISC architecture running some form of Unix. I don’t think Intel was even going after the server market until the Pentium Pro launched in 1995.
The Sun/Digital/IBM/HP’s of the world had impressive looking products that went in impressive looking racks.
If you had Intel servers before 1995 they were probably just a bunch of beige desktop towers sitting on a custom made wooden (or stainless wire rack) shelf.
I remember seeing a famous picture from some game developer in the 90s where they were showing off their servers. It may have been id Software or Valve, or Westwood Studios, or maybe even Blizzard. I can’t find it now, but I remember being thoroughly amused at how hack-ish it looked.
To be clear, there were some rack-mountable Intel systems, but if you had Intel servers - at least before ’95, when the Pentium Pro started changing things - your “server room” probably looked like a pile of beige towers on wire shelving, or, if you were a little bit more professional, a makeshift rack. [photos omitted]
But if you had Sun (or Digital, or IBM, or HP) you probably had a professional looking server room full of glorious server racks and other professional equipment.
This - of course - started rapidly changing in 1995. By 1996 businesses wanted to be able to run Microsoft Exchange servers and SMB/CIFS-based shared drives. The former could only be done on an Intel server. The latter could be done on RISC/Unix servers through the Samba project, but it was certainly easier to just go Wintel across client and server and never look back.
Microsoft was using its dominance in the office client space with Windows, to force its way into the server room in a real way, and Intel was benefiting like crazy from this. It of course didn’t hurt that the mass commoditization of “Wintel” systems made an Intel server much (much) cheaper than a traditional Unix server.
The future (a harmonized Wintel client and server room, as opposed to old-school Unix mainframes and their then-modern equivalents) is much easier to adopt when that future also costs a fraction of the status quo.
It’s really interesting how quickly everything changed during that period: from the big RISC/Unix boys dominating the server landscape, to Intel becoming the dominant player and the big Unix boys starting to drop like flies and consolidate with x86 OEMs within a few years.
Intel went from zero to “no one ever got fired for buying Intel” in a very short time-frame.
And now Intel is troubled, AMD is growing their server market share with their EPYC chips (up to 35% last time I checked), and both are also losing ground to ARM.
For those of us who entered the workplace in the early 2000s it can feel like “Intel was always dominant and always will be dominant”, but the reality is quite different. Things can change on a dime, and it feels like they are in the process of doing so now.
I don’t think so, he just has very different motivations from you or me.
What he says is correct: it is a real risk that Oracle (which is a historically litigious company) COULD potentially attack linux at some point after ZFS was merged into the kernel proper.
Do I think that is extremely remote and would be shooting themselves in the foot given their own use of Linux in 2025? Yes.
Do (potential, future) management changes often result in extremely bone-headed decisions for a short-term financial windfall? Also yes. Oracle could lose a lot of money, need a cash injection for executive golden parachutes as the company winds down, and suing every Linux vendor could be exactly what the management team running the ship at that time decides to do.
Consider: A company like Broadcom picks up the pieces - see what they’ve done to (the now shambling husk of) VMware vSphere.
Essentially, merging ZFS officially would be taking the bet that Oracle - whether under its current management team, any future management team, or any successor who picks up its IP - will never sue the Linux kernel developers.
Linus doesn’t want to deal with that. He probably wants to sleep at night. It doesn’t matter that it’s not clearly “illegal”, or that any argument Oracle made would be fairly shaky - fighting an ambiguous legal case against a multi-billion-dollar corp means massive financial cost and stress that could be avoided by just… not merging ZFS.
ZFS is technically superior, linux has nothing comparable (and if you mention BTRFS, lol).
BUT… it does not alter the reality of Linus’ legal risk profile.
Me? I run ZFS on linux because it’s currently the best way to securely store files. But I’m not the one who’s going to get sued over it being integrated into the linux kernel upstream. I’m one of the biggest ZFS advocates you’ll meet. If ZFS on Linux goes away, I’ll just migrate my storage to Illumos or something.
What Linus said may have been very opinionated and very blunt - but also very honest and entirely accurate from Linus’ position.
And yeah, Sun hardware was great. PC hardware was, and still is, by comparison pretty shit (in terms of design).
BUT… those were the days of fewer, single purpose, bespoke, tightly engineered servers for an application.
Sure, time has moved on and PC hardware is now massively more performant… but it’s still rolling a turd in glitter. The architecture is still a turd. We have enough additional layers of abstraction these days to not be so exposed to its smell… but it’s still a turd.
Now though - it’s like @wendell says (or has said; I forget the video). Servers now are cattle, not pets. Basically VM or docker container hosts. They’re cheap and disposable, and if one breaks you just swap it out for another cheap box. Number of nodes (for most businesses - there are exceptions that still need big iron, but they’re becoming fewer) trumps quality of the individual nodes. Nodes large enough to run multiple virtual machines of the average business’s largest workload are cheap.
It’s worth mentioning that Sun Microsystems was not “pro open source”. I worked at very high levels there until the end, and even using Linux internally was very, very frowned upon; the management all looked at the GPL like it was a pus-oozing open sore on the world… There were lots of amazing minds working in engineering there, but it wasn’t the bastion of hope for open source that you make it sound like.
Yet they open-sourced their entire operating system and many of their tools, and contributed to a serious number of open source projects - almost all of which was reversed almost overnight once Oracle acquired them.
Sure, Sun open-sourced their stuff, but by that point the writing was on the wall. It was not some altruistic endeavour; it was a last-ditch effort at survival in the face of faster, more agile competition from Linux, which WAS open source.
While it is true that most of their famous open source projects were created by them as proprietary software (Java/OpenJDK, OpenSolaris, DTrace, NetBeans, ZFS, etc.) or acquired from others (OpenOffice, MySQL, VirtualBox, etc.) and only open-sourced later out of desperation, there were also some notable independent open source projects that Sun did not control and still contributed substantially to, like GNOME and Apache.
And yes, this is because they as a corporation benefited from these projects, but isn’t that why any corporation contributes to open source projects? Corporations are not typically fond of behaving like charities…