Ever since the Raspberry Pi 2 was announced, my head has been flooded with crazy ideas, and I've been wondering about the productivity benefits of a Raspberry Pi cluster and whether one would be viable as a render farm. I've done some maths: with 30 Raspberry Pi 2s you'd have 120 cores for about $1050, roughly the price of a 5960X.
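Just to show the back-of-envelope maths (the $35-per-Pi figure is my assumption from $1050 / 30; the 5960X price is the rough launch price quoted above):

```python
# Back-of-envelope cost/core comparison for the numbers in the post.
pi_count = 30
cores_per_pi = 4          # Raspberry Pi 2: quad-core ARM Cortex-A7
pi_cluster_cost = 35 * pi_count   # ~$1050, assuming $35 per board

i7_5960x_cost = 1050      # USD, roughly its launch price
i7_5960x_cores = 8

pi_cluster_cores = pi_count * cores_per_pi
print(pi_cluster_cores)                    # 120
print(pi_cluster_cost / pi_cluster_cores)  # 8.75  ($/core)
print(i7_5960x_cost / i7_5960x_cores)      # 131.25 ($/core)
```

Of course, $/core ignores how much slower each Cortex-A7 core is than a Haswell-E core, which is the whole question.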
Now I'm wondering: if all you do is render in Blender, trying to make the next Toy Story, would a 30-node Raspberry Pi 2 cluster outperform a 5960X? And are there any other productivity-workhorse things you could do over MPI with a 30-node Pi cluster that would make it worthwhile? I'm not planning on building a cluster just for 3D rendering, but I'm curious. (Maybe a cluster of 10 Pis max just to mess around, then break it up and have each Pi be application-dedicated once I'm done experimenting with distributed computing.)
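For what an MPI render farm would actually do per node, here's a minimal sketch of the usual frame-splitting scheme: each rank claims every size-th frame (round-robin). In a real cluster you'd get `rank` and `size` from mpi4py's `MPI.COMM_WORLD` and shell out to `blender -b scene.blend -f <frame>` for each one; here they're plain variables so the logic stands on its own.

```python
def frames_for_rank(rank, size, total_frames):
    """Frames this node should render (static round-robin split).

    rank:  this node's index, 0 .. size-1
    size:  number of nodes in the cluster
    """
    return list(range(rank, total_frames, size))

# e.g. node 3 of a 30-Pi cluster rendering a 240-frame shot
# gets frames 3, 33, 63, ..., 213 (8 frames):
print(frames_for_rank(3, 30, 240))
```

A static split like this only balances well if frames take similar time to render; real farm managers hand out frames dynamically for exactly that reason.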
Oh, on an unrelated note: I've seen you guys chatting about Linux alternatives, getting ready to jump ship in case Windows becomes a walled garden. Check out Natron as a possible After Effects alternative.
Pis are great, but they simply lack the power of their Intel counterparts. More nodes in your system is great, but keep in mind that you'll have latency because all the Pis have to talk to each other. Furthermore, you'll need to cool those Pis, unless you dump them in a huge container of mineral oil. For the same price as the Intel platform, once you factor in cooling costs, latency, and raw performance, Intel wins. Not by orders of magnitude, but it's a substantial difference.
It pretty much boils down to: do I want a lot of CPU cycles delivered slowly across many nodes (heavy rendering), or fewer cycles delivered faster on one chip (light rendering)? Is space an issue? Those node systems can get fairly large compared to a single CPU. Will I have enough money for cooling? If I can fit everything I want to do on a single chip with simple cooling, I'd go with that, because all the user has to do is assemble the parts, whereas with the nodes you'd have to build the case, cooling, etc. from scratch.
So... with the cost being nearly the same, the Pi node system just doesn't make up for the performance gap, IMO. It would be cumbersome, and you'd be paying almost the same money as for the Intel system for, at best, the same level of performance. Paying the same for less doesn't make a whole lot of sense. That being said, it would be one hell of a fun project, so if money is no object, I say GO FOR IT. If money is a concern, maybe notch it down to a smaller node system; that would be more viable.
As far as I've seen, the Pi 2 has only 1 GB of RAM, so any scene complex enough to need a render farm wouldn't be worth rendering on a Pi farm.
I've seen something similar done with old Pis and it was fun to read about, but that was it.
I manage render farms for Maya and Max, and I would not try that experiment. On one side you have the 5960X, which would simply render your scene; on the other side you'd have a huge spaghetti system with all the pain of farm networking, image cloning, floating licenses, traffic, and repositories, just to end up with the same result. It's not worth it.
Imagine tomorrow you want to double the speed of your farm: would you rather buy one PC or 30 Pis?
The best advice a teacher ever gave me is "if you can buy something, don't build it", and it's very applicable in this case. I've "known better" a few times, and I wasted so much money and time for nothing.
Well, my question was more of a thought experiment; I was never actually going to do it at that scale, because I'm not big into rendering, and if I were, I'd get a quad Xeon E3-1230 V2 render farm. But I was planning on playing with a few (5-10) Pis to experiment with distributed computing, and when I'm done I'll have one for home automation, one for a media centre, one for a Samba server, etc.
Don't do this, lol. The sweet spot for computation is almost certainly something x86, at least for now, though if you went with something pure ARM (e.g. those new 50-core ARM chips), you might be able to do better. The software would make things... interesting... as would the time it takes.
In college, another student and I built a Beowulf cluster of 486s. It was about as fast as a dual-slot Pentium II, for the things that parallelized well. If you're budget-constrained, the AMD offerings are worth a look, especially for something like Blender; even some of the advanced Autodesk rendering engines work just fine with something like an 8350, and it's hard to beat dat value.
Plus, networking a bajillion Pis will not be much fun.
Don't worry, I wouldn't waste $1050 like that (maybe $175-350 just for fun :P). If I really wanted a lot of nodes for distributed computing, I'd wish for inexpensive Intel Core M boards; the Core M is god-like in performance per watt. I'd love to have them double as a water heater so you could help cure cancer while taking a shower, though I'd be worried about lifeforms growing in there (like what happened with Linus). And if something like this were government-funded, with free water heaters and subsidized electricity on the meter, I'm sure the NSA would end up using it to crack encryption.