1:18.29 on my Thinkpad T460P with an i7-6820HQ.
0:49.42 on my stock i7-5820k.
I'll post a Fedora benchmark later on vs my Windows one
Looked into OCing my cache, but didn't find such options right away. I did notice that my 2666MHz RAM was running at 2133, so I turned that up.
Booted into Windows and did 5 runs:
38.60
38.41
38.38
38.43
38.41
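For anyone curious, here's a quick Python sketch (not part of the benchmark, just the five numbers above) to get the mean and spread of those runs:

```python
# Average the five Windows render times posted above (seconds).
runs = [38.60, 38.41, 38.38, 38.43, 38.41]

mean = sum(runs) / len(runs)
spread = max(runs) - min(runs)

# Mean comes out around 38.45s with a spread of only ~0.22s,
# so the runs are very consistent.
print(f"mean: {mean:.2f}s, spread: {spread:.2f}s")
```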
Not sure how much more this CPU has to give. I can crank it up another 100MHz but then I'd need to move to liquid cooling or risk killing my motherboard.
I assume that someone who actually knows how to do this OC stuff can get another second or two closer. But this is as far as my skills can take it.
My first run, without any tweaking, on my four-year-old 3930K at a very conservative 4GHz overclock came in at 1:06.
Not as impressive as I had hoped.
I know I didn't win the silicon lottery; my CPU was only ever able to hit a 4.2GHz OC 24/7 stable. I cut the OC back to 4GHz some time ago once it became obvious I wouldn't be financially able to keep ahead of the curve and this PC was going to have to last a good number of years.
My old laptop has this CPU:
AMD Phenom(tm) II N660 Dual-Core Processor running at 3 GHz.
Running Fedora I get five minutes and 14 seconds.
5820k at 4.5GHz on Windows 7 reporting in. 5 Chrome tabs, Discord, and a few peripheral utilities open. I got 39.95 seconds as a best of three runs.
1:15 on a Xeon E3-1231v3. Not bad for a cherry-picked, slightly gimped i7 4770.
Went into the BIOS and changed a couple of settings: disabled virtualization and bumped the CPU to run at a constant 3.8GHz all the time. Shaved off 7 seconds.
5 runs for the lulz on my mobile workstation: i7-3630QM @ 2.4 GHz, Turbo 2.9 GHz
Min: 01:19.57
Avg: 01:20.17
Max: 01:21.93
I wonder why your time was 10 seconds slower than mine.
Overclocked my i5 4690k to 4.7 GHz and got 49.95 seconds, but my system crashed almost instantly after the render. The best I can get with my CPU stable is 50.64 seconds @ 4.6 GHz.
Linux Mint 17.3. Time 02:39.59 on a T430s with an i5-3320M (dual core, hyper-threaded; 2.6GHz base, 3.3GHz turbo, which was sustained during this render).
Redid the test twice on Windows 10 on the same machine. Best result: 03:26.94. I couldn't believe the huge difference, so I looked at Task Manager: the render utilized ~96% of the CPU resources. It was ~98% in Linux, due to it being a lighter distro, but the 2% difference does not explain the render time difference at all.
Went back to Linux thinking the first test had the advantage of a "cold" chassis, so maybe the bad render time in Windows was due to it not being able to turbo boost. So I immediately did the test again in Linux and got an even better result than the first time: 02:37.26.
TL;DR: Linux did it in ~76% of the time it took Windows 10.
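That ~76% figure checks out; here's a small Python sketch that converts the two posted times to seconds and takes the ratio:

```python
# Compare the best Linux run against the best Windows 10 run posted above.
def to_seconds(mmss: str) -> float:
    """Parse a 'MM:SS.ss' time string into seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + float(seconds)

linux = to_seconds("02:37.26")    # best Linux run, 157.26s
windows = to_seconds("03:26.94")  # best Windows 10 run, 206.94s

# 157.26 / 206.94 rounds to 76%.
print(f"Linux took {linux / windows:.0%} of the Windows time")
```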
Windows 7, 8 or 10? Just curious.
Windows 10.
Hmm, that's kinda interesting...
Which CPU + clock speeds?
Intel i7 Extreme 3960X @ 4.8GHz
I've had something like this before, well, kind of.
I had a RenderMan render-off between this machine and my older Westmere dual-Xeon workstation running CentOS. The Westmere one beat it by about 5 seconds, and of course the i7 Extreme was on Windows 10.
That old Westmere workstation is now acting as a server running Ubuntu Server LTS. Maybe I should run this benchmark on it for giggles and see what numbers it gives.