Wendel: Is there a Linux equivalent to Anvil?

I don't have Windows of any version. What can I use to torture-test new drives on Linux? And what other Linux stress-testing software would you recommend to push new hardware to its limits before considering it fit for use, or to check the stability of an overclock?

There's stuff like MPrime for system torture testing (and it's probably not safe to use on Haswell either), or you can run a quick test on a particular part straight from the CLI, because all of the diagnostics and performance data is available for every single operation in Linux. To test a storage drive, you can use the benchmark built into GNOME Disks or whatever standard Linux tool you like. You don't need special software for that; it's built in.

The real question is: why would you do that? Linux will not crash however hard you push the system. You can even rip out a graphics card in the middle of a session, or rip out a disk drive, and Linux will not crash. X may crash and restart itself, but you won't lose data, you won't even lose your cursor position lol; at most you'll see the screen black out for a second or two. Once X is replaced entirely, even that will stop happening. So what would the torture test tell you? What would you achieve by torture testing the system in a non-functional way?
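If you just want a quick sanity check on a new drive without any extra software, a few lines of Python against a file on the mounted filesystem will give you rough sequential throughput. This is only an illustration, not a real burn-in tool — the file name and sizes here are arbitrary, and the read number will be flattered by the page cache unless you drop it first:

```python
import os
import time

PATH = "testfile.bin"        # any file on the drive under test (name is arbitrary)
BLOCK = 1024 * 1024          # 1 MiB per write
COUNT = 64                   # 64 MiB total; raise this for a serious run

def write_test() -> float:
    buf = os.urandom(BLOCK)  # random data defeats controller-side compression
    start = time.monotonic()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the drive
    return (BLOCK * COUNT) / (time.monotonic() - start) / 1e6  # MB/s

def read_test() -> float:
    start = time.monotonic()
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass
    return (BLOCK * COUNT) / (time.monotonic() - start) / 1e6  # MB/s

w = write_test()
r = read_test()
print(f"write: {w:.0f} MB/s, read: {r:.0f} MB/s")
os.remove(PATH)
```

For anything serious, the dedicated tools (fio, badblocks, the SMART self-tests) are the right answer; this just shows you don't need a Windows-style benchmark suite to get numbers.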

Linux systems that need certification, for instance systems that need to be highly available (and virtually all of those are Linux; the stability of Linux is far beyond Windows server systems), are tested with real-life applications. There is no need for a consumer-grade synthetic torture test on Linux.

Drive diagnostics are also far more advanced on Linux than on Windows. On Windows, a drive may suddenly fail on you; the chances of that happening on Linux are very slim, because you effectively get a countdown clock from the moment a drive starts to fail. For mechanical hard drives, you usually get a warning about 150 power-on hours in advance, which gives you ample time to back up your data and replace the failing drive.

With SSDs, other things can go wrong, but for most users an SSD's lifespan is much longer under Linux than under Windows, because the filesystems are more efficient (much less data needs to be written — the usual zero-filling of files that Windows does isn't done by Linux, only useful data is written) and safer to use, the filesystems have more features, and Linux has more and better tools for data management at the filesystem level.

Most Linux users also use their systems differently: they normally encrypt their data volumes, so a lot of SSDs are of no interest at all. SSDs with SandForce controllers, for instance, compress data as part of their acceleration technology, which clashes with encryption: encrypted blocks can't be compressed, so SandForce SSDs fall back to their baseline performance, usually around a third of the advertised speed of the drives.
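The SandForce point is easy to demonstrate for yourself: well-encrypted data is statistically indistinguishable from random noise, so a compressing controller gains nothing from it. A tiny sketch, using zlib as a stand-in for the controller's compressor and os.urandom as a stand-in for ciphertext:

```python
import os
import zlib

size = 1024 * 1024
compressible = b"\x00" * size       # redundant plaintext, e.g. zero-filled blocks
ciphertext_like = os.urandom(size)  # good ciphertext looks like uniform random bytes

# Compression ratio (compressed size / original size) for each input:
print(len(zlib.compress(compressible)) / size)     # tiny — compresses extremely well
print(len(zlib.compress(ciphertext_like)) / size)  # ~1.0 — nothing left to squeeze out
```

The compressible input shrinks to a fraction of a percent of its size, while the random input doesn't shrink at all — which is exactly why a compression-based controller drops to its baseline speed on an encrypted volume.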
SSDs with Linux-oriented controllers, like Samsung's, have native encryption processing and open documentation. On the one hand, that means your system doesn't have to do the encryption itself, which saves CPU time, and the encryption hooks right into the BIOS, which raises the security level; on the other hand, it means the real maximum performance of the hardware is attainable, which it never is with a handicapped filesystem like NTFS, because the hardware manufacturer actually contributes code to the filesystems, optimizing them for the hardware. Samsung has even contributed a whole new open source filesystem to Linux, called F2FS, designed specifically as a high-performance filesystem for SSDs.

So I'm not sure what useful information a drive or system torture test would bring on Linux. If you think you need more performance, you can enhance drive performance with a huge range of Linux software tools, from bcache to RAID solutions, all free and very reliable. You can also scale volumes freely by adding hardware, without real form-factor limitations, thanks to the very flexible partitioning and volume-management options. There is just no useful information to be had from torture testing a single drive on Linux, or from torture testing the CPU or GPU like one would in Windows to make sure the system doesn't bluescreen. The Linux system will not crash, even with a crazy overclock; it will just perform worse when the overclock isn't right. Under load, the CPU will get a lot hotter because Linux uses the CPU more efficiently, whereas at idle it will run cooler than in Windows, because Linux only does what you tell it to do and nothing behind your back.
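That said, if you still want a Prime95-style stability check, the idea is simple enough to sketch yourself: hammer every core with a computation whose result is known, and flag any mismatch, since silent arithmetic errors under load are the classic symptom of an unstable overclock. A minimal Python sketch — the workload and iteration counts here are arbitrary, and a real burn-in would run for hours, not seconds:

```python
import hashlib
from multiprocessing import Pool, cpu_count

def hash_chain(seed: int, rounds: int = 50_000) -> str:
    # Repeated SHA-256 keeps the ALUs busy; any single bit flip changes the digest.
    data = str(seed).encode()
    for _ in range(rounds):
        data = hashlib.sha256(data).digest()
    return data.hex()

def stress(passes: int = 3) -> bool:
    # Run identical work on every core several times; all runs must agree exactly.
    seeds = list(range(cpu_count()))
    with Pool(cpu_count()) as pool:
        reference = pool.map(hash_chain, seeds)
        for _ in range(passes - 1):
            if pool.map(hash_chain, seeds) != reference:
                return False  # mismatch: unstable hardware (or overclock)
    return True

if __name__ == "__main__":
    print("stable" if stress() else "UNSTABLE: results diverged under load")
```

Same principle as MPrime's self-checking torture test, just far less thorough — it mainly shows that "did the math come out right under load?" is the whole trick.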