November 2006 A
High Productivity Computing Systems and the Path Towards Usable Petascale Computing
Suzy Tichenor, Council on Competitiveness
Albert Reuther, MIT Lincoln Laboratory

BCR Case Study Examples
Research Laboratory Example

At MIT Lincoln Laboratory, a federally funded research and development center (FFRDC) for the Department of Defense, the research-oriented formula was used to evaluate the financial efficacy of a 600-processor enterprise grid cluster that would serve 200 users across Lincoln. The numerator value and each of the five denominator values were converted to staff hours using an average, fully burdened salary of $200,000 per year.

  • In terms of the time saved by users, a system evaluation estimated that the system would save approximately 36,000 hours of user time per year.8
  • Parallelizing all 200 users’ algorithm and simulation code would take approximately 6,200 hours.
  • Training a user on Lincoln’s system takes about 4 hours, so training all 200 users totals 800 hours.
  • Given a 10-second job launch time and an estimated 10,000 parallel job launches per year, users would experience 27.8 hours of launch delay that serial jobs executed on desktops would not incur.
  • The HPC system would be administered by one system administrator, or 2,000 hours per year.
  • Buying 200 CPUs (100 dual-processor server nodes) per year at an estimated $5,000 per node costs $500,000, which is equivalent to 5,000 staff hours.

Inserting these values into the BCR/productivity equation yields:

BCR = 36,000 / (6,200 + 800 + 27.8 + 2,000 + 5,000) = 36,000 / 14,027.8 ≈ 2.6
From this we can also determine the one-year IRR as 160%. Taking the full range of average programming rates and costs to parallelize into account yields BCR values from 2.6 to 4.6.
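The arithmetic above can be sketched in a few lines of Python. This is an illustrative reconstruction using only the values listed in the case study; the cost labels and the dollars-to-hours conversion (at roughly $100 per staff hour, from the $200,000 annual salary over 2,000 working hours) are assumptions for the sketch, not part of the original formula.

```python
# BCR/productivity arithmetic from the Lincoln Laboratory case study.
# All quantities are in staff hours, converted at ~$100/hour
# ($200,000 fully burdened salary / 2,000 hours per year).

hours_saved = 36_000  # annual user time saved by the system (benefit)

costs = {
    "parallelize_code": 6_200,            # port 200 users' code
    "train_users": 200 * 4,               # 4 hours of training per user
    "launch_delay": 10 * 10_000 / 3600,   # 10 s x 10,000 launches/year
    "administration": 2_000,              # one full-time administrator
    "hardware": 500_000 / 100,            # $500,000/year at ~$100/hour
}

bcr = hours_saved / sum(costs.values())
print(f"BCR = {bcr:.1f}")  # ~2.6, matching the article

# Under the simplifying assumption that all costs are paid up front and
# the benefit arrives after one year, the one-year IRR is BCR - 1,
# which lands near the article's reported 160%.
irr = bcr - 1
print(f"one-year IRR = {irr:.0%}")
```

Note that the hardware and salary figures enter the ratio only after being expressed in the same unit (staff hours), which is why the $500,000 annual hardware cost appears as 5,000 hours.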

The MIT Lincoln Laboratory High Performance Computing team has compiled many examples of how their users are more productive when they use their interactive, on-demand enterprise HPC system. For example, one of the technical staff members is designing and evaluating algorithms to improve the weather radars used across the United States. When he ran his algorithm simulations on his very powerful desktop computer, they took about ten hours to execute, so he could make adjustments or run different data sets only twice a day: once during the business day and once overnight. He was trained to use the HPC system in a single morning, and he had parallelized his simulation code by the end of that day. On the HPC system his simulation ran interactively, on demand, on eight to sixteen processors and usually finished in less than an hour. That allowed him to execute between ten and twelve simulations per day, thereby enabling him to deliver more accurate algorithms and raise the level of confidence in their effectiveness. This translates into much better weather radar effectiveness for his project, its sponsor, and eventually our nation. However, this interactive, on-demand mode is not the usual way in which HPC systems are used.

Reference this article
Tichenor, S., Reuther, A. "Making the Business Case for High Performance Computing: A Benefit-Cost Analysis Methodology," CTWatch Quarterly, Volume 2, Number 4A, November 2006 A. http://www.ctwatch.org/quarterly/articles/2006/11/making-the-business-case-for-high-performance-computing-a-benefit-cost-analysis-methodology/
