bwbug: John Payne's survey

Fred Pedrotti fred at fixdot.com
Mon Nov 29 18:00:11 PST 2004


How about a form Perl script, e.g.
http://www.perlservices.net/en/programs/allform/index.shtml

on the bwbug site?

Jim Melanson writes some nice Perl/CGI scripts. The above would fit the
bill for collecting the data.
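
If a self-hosted collector is preferred, a bare-bones CGI handler is
only a few lines of Perl. The sketch below is illustrative rather than
allform itself; the survey field names and the output path are my
assumptions:

    #!/usr/bin/perl
    # Minimal CGI survey collector (a sketch, not allform itself).
    # The field names and the output path below are assumptions.
    use strict;
    use warnings;
    use CGI;

    my $q = CGI->new;
    my @fields = qw(name site cpus arch);   # hypothetical survey fields

    my @values;
    for my $f (@fields) {
        my $v = $q->param($f);
        $v = '' unless defined $v;
        $v =~ s/[\t\r\n]+/ /g;              # keep one record per line
        push @values, $v;
    }

    # Append one tab-separated record per submission.
    open my $out, '>>', '/var/www/bwbug/survey.tsv' or die "open: $!";
    print {$out} join("\t", @values), "\n";
    close $out;

    print $q->header('text/plain'), "Thanks, your entry was recorded.\n";

Each submission lands as one tab-separated line, so the results import
straight into a spreadsheet for John to merge with his earlier report.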

All the best,
Fred Pedrotti


> Przemek,
>
> I agree with your observations. The future of high-performance computing
> will be Beowulf-type systems using AMD and Intel processors. John Payne
> would like to survey the Beowulf Users Group's systems and combine the
> data with his earlier report. The combined data could be made available to
> the BWBUG members. Do you have any suggestions on how we might get the
> members to provide the information? Would you and possibly Don Becker or
> others be interested in forming an informal committee to find ways to
> collect the data?
>
> Mike Fitzmaurice
>
> -----Original Message-----
> From: bwbug-bounces at bwbug.org [mailto:bwbug-bounces at bwbug.org]
> Sent: Tuesday, November 23, 2004 10:40 AM
> To: bwbug at bwbug.org
> Subject: bwbug: John Payne's survey
>
>
>
> I wanted to share with you some thoughts on where computational
> clusters seem to be going, based on numbers from John Payne's
> survey. One can estimate an average number of CPUs per
> cluster (*). Since John's customers presumably represent a
> cross-section of current active cluster users, I suppose that this
> number represents the current sweet spot for cluster applications: a
> compromise between performance, scalability limits, price,
> administration and environmental burdens, etc.
>
> I note that John's cohort, on average, uses 128 CPUs per cluster, and
> they expect this to be true for the next 2 years or so, as well.
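>
> For concreteness, the estimate is just the survey totals divided out.
> A trivial sketch, with placeholder totals (John's raw figures aren't
> reproduced here) chosen only so the output matches the 128-CPU
> average:
>
>     #!/usr/bin/perl
>     # Average cluster size = total CPUs / number of clusters surveyed.
>     # Both totals below are hypothetical stand-ins for John's figures.
>     use strict; use warnings;
>     my ($total_cpus, $n_clusters) = (6400, 50);
>     printf "%.0f CPUs per cluster\n", $total_cpus / $n_clusters;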
>
> Recently, I had a look at the numbers for the "Top 500 Supercomputers"
> list and compiled a similar statistic. The Top 500 is of course more
> ambitious and more architecturally diverse: the overall number of
> CPUs is 408629, for an average of 817 CPUs per cluster. Broken down by
> architecture, the dominant ones are:
>
>           #systems  share(%)   #CPUs  CPUs/cluster
> Intel        318      63.6    194685      612.2
> Power         54      10.8     65460     1212.2
> HP            50      10.0     26064      521.3
> AMD           31       6.2     25296      816.0
> Alpha         12       2.4     23512     1959.3
> NEC           10       2.0      6488      648.8
> PowerPC        8       1.6     51664     6458.0
> (other architectures add up to less than 5% of the total number of
> systems)
>
> The same list, ordered by the number of CPUs per cluster:
>
> PowerPC        8       1.6     51664     6458.0
> MIPS           2       0.4      7168     3584.0
> Alpha         12       2.4     23512     1959.3
> Sparc          4       0.8      5348     1337.0
> Power         54      10.8     65460     1212.2
> AMD           31       6.2     25296      816.0
> NEC           10       2.0      6488      648.8
> Intel        318      63.6    194685      612.2
> HP            50      10.0     26064      521.3
> Hitachi        4       0.8      1548      387.0
> Cray           7       1.4      1396      199.4
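>
> For anyone who wants to reproduce or extend these tallies, the
> aggregation is a dozen lines of Perl. The sketch below assumes a
> tab-separated dump of the Top 500 list with one architecture<TAB>cpus
> record per line; the file name and column layout are my guesses, not
> the list's actual export format:
>
>     #!/usr/bin/perl
>     # Tally systems and CPUs per architecture from a hypothetical
>     # tab-separated dump: <architecture>\t<cpus> on each line.
>     use strict; use warnings;
>
>     my (%systems, %cpus);
>     open my $in, '<', 'top500.tsv' or die "open: $!";
>     while (<$in>) {
>         chomp;
>         my ($arch, $n) = split /\t/;
>         $systems{$arch}++;
>         $cpus{$arch} += $n;
>     }
>     close $in;
>
>     # Sort by average cluster size, as in the second table above.
>     for my $arch (sort { $cpus{$b}/$systems{$b} <=> $cpus{$a}/$systems{$a} }
>                   keys %systems) {
>         printf "%-8s %4d %6.1f %8d %9.1f\n",
>             $arch, $systems{$arch}, 100*$systems{$arch}/500,
>             $cpus{$arch}, $cpus{$arch}/$systems{$arch};
>     }
>
> Swapping the sort key for the share column reproduces the first
> ordering.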
>
> I have several observations:
>
>  - IBM seems to own the biggest-cluster market (i.e. the ASCI
>    installations); no one else can provision a 6000-CPU cluster :)
>
>  - MIPS, Alpha, Sparc and NEC are well supported commercially and
>    are used to implement large clusters, but they are probably legacy
>    platforms.
>
>  - Intel and AMD together constitute two-thirds of all installations
>    in the Top 500, and their cluster sizes tend to be in the 600-800
>    CPU range. This will probably also represent the future mainstream
>    high-end profile.
>
> So, a typical workhorse production cluster will, for the foreseeable
> future, be in the 128-CPU range, while high-end clusters use
> installations 5-10 times as large. Our own experience bears this out:
> we have around 128 CPUs in a typical computer room with modestly
> upgraded electrical and HVAC facilities. Anything more would require
> further, fairly heroic plant upgrades.
>
> 	Greetings
> 		przemek klosowski, Ph.D. <przemek at nist.gov>  (301) 975-6249
> 		Mail Stop 8560, NIST Center for Neutron Research, Bldg. 235
> 		National Institute of Standards and Technology
> 		Gaithersburg, MD 20899,      USA
>
> (*) John gave the number of clusters he surveyed and the total number
> of CPUs, both for existing installations and for expected new
> installations in a (I think) two-year forecast.
> _______________________________________________
> bwbug mailing list
> bwbug at bwbug.org
> http://www.pbm.com/mailman/listinfo/bwbug
>


