Raspberry Pi - high density

Barry Shein bzs at world.std.com
Tue May 12 16:44:57 UTC 2015


To some extent people are comparing apples (not TM) and oranges.

Are you trying to maximize the total number of cores or the total
amount of compute? They're not the same thing.

It depends on the job mix you expect.

For example, a map-reduce kind of problem, such as a search of a
massive database, probably is improved by lots of cores even if each
individual core isn't that fast. In short: you partition the database
across thousands of cores, broadcast "who has XYZ?", and wait for an
answer.
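To make the shape of that concrete, here's a toy sketch in Python
(the shard contents and the query string are invented for
illustration, and processes stand in for the thousands of machines a
real system would shard across):

    # Minimal sketch of "partition and broadcast" parallel search.
    from multiprocessing import Pool

    SHARDS = [
        ["alpha", "bravo"],      # partition 0
        ["charlie", "delta"],    # partition 1
        ["echo", "xyz"],         # partition 2
    ]

    def search_shard(args):
        """Each worker answers 'who has XYZ?' for its own partition."""
        shard_id, records, query = args
        return shard_id if query in records else None

    if __name__ == "__main__":
        query = "xyz"
        with Pool(processes=len(SHARDS)) as pool:
            # "Broadcast" the query to every partition in parallel...
            answers = pool.map(search_shard,
                               [(i, s, query) for i, s in enumerate(SHARDS)])
        # ...then collect whichever partitions answered.
        hits = [a for a in answers if a is not None]
        print(query, "found in shard(s):", hits)   # -> found in shard(s): [2]

The work divides cleanly because each partition can answer without
consulting any other, which is exactly why adding slow cores still
helps here.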

There are a lot of problems like that, and a lot of problems which
cannot be improved by lots of cores. For example, if you have to wait
for one answer before you can compute the next, you just can't keep
the "pipeline" filled (matrix inversion is notorious for this
property, and it's a very important workload).
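As a small illustration of that dependency chain, consider forward
substitution, a building block of solving triangular systems (the
3x3 example here is made up):

    # Forward substitution for a lower-triangular system L x = b.
    # Each x[i] reads every earlier x[j], so the iterations must run
    # in order; they cannot be spread across independent cores.
    def forward_substitution(L, b):
        n = len(b)
        x = [0.0] * n
        for i in range(n):                              # strictly sequential
            s = sum(L[i][j] * x[j] for j in range(i))   # depends on prior x's
            x[i] = (b[i] - s) / L[i][i]
        return x

    L = [[2.0, 0.0, 0.0],
         [1.0, 3.0, 0.0],
         [4.0, 1.0, 5.0]]
    b = [2.0, 4.0, 11.0]
    print(forward_substitution(L, b))   # -> [1.0, 1.0, 1.2]

No matter how many cores you have, x[2] can't start until x[0] and
x[1] are done.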

And then there are the relatively inexpensive GPUs, which can do many
floating point ops in parallel and are good at certain jobs like, um,
graphics! rendering, ray-tracing, etc. But as a general rule they're
not very good at general-purpose integer work like string searching,
or at problems which can't be decomposed to take advantage of the
parallelism.
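A rough CPU-side analogy, with numpy's vectorized operations standing
in for the GPU's wide parallel units (the arrays and the search
function are invented examples):

    import numpy as np

    # Data-parallel float work maps naturally onto wide parallel
    # hardware: the same multiply-add hits every element, no branching.
    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    c = 2.0 * a + b             # one uniform operation over all elements

    # String searching is the opposite shape: data-dependent branches
    # and early exits, which decompose poorly onto that hardware.
    def find(needle, haystacks):
        for i, h in enumerate(haystacks):
            if needle in h:     # the branch depends on the data itself
                return i
        return -1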

You've got your work cut out for you analyzing these things!

-- 
        -Barry Shein

The World              | bzs at TheWorld.com           | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD        | Dial-Up: US, PR, Canada
Software Tool & Die    | Public Access Internet     | SINCE 1989     *oo*


