How cost effective is a Raspberry Pi supercomputer?

The physics department at my university is talking about putting together a cluster of Raspberry Pi 2s as a "supercomputer". From where I am sitting, anything they want to do with this could be done faster, cheaper, and more easily with regular desktop/server-grade hardware. Would a cluster of Pi 2s really be more cost efficient than using Xeons/GPUs? The scaling wouldn't be perfect for a cluster, resource utilization is never going to be great with something like this, it seems like a headache to put together, and overall inefficient. Am I missing something here, or are they the ones who aren't understanding?

To put some easy numbers on it: if I am not mistaken, the Pi 2 does ~90 MFLOPS. For a rough comparison, a Xeon E3-1231v3 does about 3,120 MFLOPS. That is ~34x the performance (in a really rough sense). Even assuming perfect scaling, it would take 35 Pi 2s to beat one of those Xeons, and 35 × $35 ≈ $1,225. Can you build a desktop around a $250 E3-1231v3 for less than that? I most certainly think so. Likely close to half that if you are stingy/cheap. And you don't have to worry about scaling, configuring, etc. I really don't see the point. It can't be more power efficient (I won't do the math there), and it isn't more cost effective. Any workload that likes a lot of cores, like a Pi cluster would have, would likely also like GPUs, so GPU acceleration would be the way to go, since the price per FLOP is even better there than with CPUs. So I really don't see a point at all. Am I wrong?
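To make my arithmetic concrete, here's a quick back-of-the-envelope sketch in Python. The FLOPS and price figures are the rough estimates from this post, not benchmarks:

```python
import math

# Rough figures from the post above -- ballpark estimates, not measured benchmarks.
pi_mflops = 90        # Raspberry Pi 2, rough estimate
xeon_mflops = 3_120   # Xeon E3-1231v3, rough estimate
pi_price = 35         # USD per Pi board
xeon_price = 250      # USD for the CPU alone

# How many Pis would it take to match one Xeon, assuming perfect scaling?
pis_needed = math.ceil(xeon_mflops / pi_mflops)
cluster_cost = pis_needed * pi_price

print(f"Pis needed to match one Xeon: {pis_needed}")    # 35
print(f"Cluster cost (boards only): ${cluster_cost}")   # $1225
print(f"Pi cluster $/MFLOPS: {cluster_cost / xeon_mflops:.2f}")
print(f"Xeon CPU   $/MFLOPS: {xeon_price / xeon_mflops:.2f}")
```

Note this counts only the boards themselves; the Pi cluster also needs power supplies, SD cards, networking gear, and someone's time to configure it all, which only widens the gap.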

I need this to happen....

I think it is an interesting prospect, and IIRC other people have made Pi clusters as well. What is really getting me is that they somehow think this is a cost-effective thing to do, or that they are somehow going to get an amazing supercomputer out of it. I hope that @wendell comes across this and has something insightful to say.

rasminurtech did videos about this and its applications on YouTube (I probably butchered his name). He did build one and showed a few examples of what can be done; just search for "pi supercomputer". Although IMO the videos assumed the viewer was comfortable with a PuTTY interface.

I have a friend who needed something similar, and his IT department said that a GPU would be much more effective for raw computation, although I am not sure whether the same reasoning applies to your case.

Like this?????


Well, it depends what they are doing with the Beowulf. If they are doing something that benefits from parallel computing, then a moderately sized Pi cluster could perform better than a single Xeon workstation. Of course, that assumes the point is practical rather than purely academic, i.e., teaching the concepts and basic operation of cluster computing.


If they buy a load of them, they should generally work out to less than $35 a pop. There was an article about people who did the same thing using PS3s (although it pointed toward using GPUs instead).
Which option would be better depends a lot on the application.

From a teaching/learning perspective, I think that using Pis for a cluster is a great idea. However, from what I have heard, and what I know about the physics department, they are not doing it for academic reasons. They are planning on using it for something real, likely running simulations or working through data from the telescope, which is currently handled by an aging desktop cluster. Even so, I don't see how a cluster like this could beat a GPU on price/performance. Any sort of simulation should be able to use GPU acceleration, and as far as I know, anything a cluster can do should also be adaptable to a GPU. I suppose some workloads genuinely require a cluster, but I highly doubt that what they are doing is one of them. I think they simply don't understand the performance of modern desktop parts. The conversations I have had with physics people and professors about computers seem to point toward that as well.

Yes, that one :)


Well, very few things NEED a cluster to run. There are a few operations that only work in parallel, but not many (and many of those can be run in parallel on a single multi-core processor). But there are plenty of things that simply run better on even a low-end cluster than on a single workstation. Of course, a cluster is also useful for its redundancy: if any one part fails, the rest can keep chugging along.

Another thing to keep in mind is that they may not be aiming for better performance, but rather for more efficiency. By that I mean the low power usage of a Pi compared to a desktop. A maxed-out Pi uses maybe 4 watts at load, meaning you could set up a Beowulf with 50 Raspberry Pis and only consume 200 W when you are hammering them. That's quite a bit less than even a single modern Xeon-based workstation would consume.

If they were looking for cost/performance though, an APU-based cluster would probably be the way to go: affordable, relatively low energy consumption, flexible in that it can handle both CPU-focused and GPU-focused tasks, and just a few nodes would likely outperform a single Xeon-type workstation in parallel tasks, for about the same cost (likely less).

If you really want to know, ask them what they plan on using the Beowulf for, and what their reasons are for choosing a Pi cluster over the other options.

It's not effective at all...

It's effective only when it's small. If you try to compare it to normal x86 platforms, it's going to lose badly.

Today's GPUs can do roughly 6,000–8,000 GFLOPS (some more) at ~250 W.
A Raspberry Pi 2 manages about 0.041 GFLOPS of real-world performance at ~4 W.
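To spell out what those numbers mean for efficiency, here's a quick sketch using the figures above (the GPU number is the low end of the quoted range, and both are rough estimates, not measurements):

```python
# Performance-per-watt comparison from the figures quoted in this post.
gpu_gflops, gpu_watts = 6_000, 250   # modern GPU, low end of the quoted range
pi_gflops, pi_watts = 0.041, 4       # Raspberry Pi 2, "real" performance estimate

gpu_eff = gpu_gflops / gpu_watts     # 24.0 GFLOPS per watt
pi_eff = pi_gflops / pi_watts        # ~0.01 GFLOPS per watt

print(f"GPU: {gpu_eff:.2f} GFLOPS/W")
print(f"Pi2: {pi_eff:.4f} GFLOPS/W")
print(f"GPU is roughly {gpu_eff / pi_eff:.0f}x more efficient per watt")
```

So even on the Pi's home turf of low power draw, the GPU comes out thousands of times ahead in FLOPS per watt.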