This year's competition format and applications are slightly modified from previous years.

Same: teams will continue to compete for an Overall Winner, described below, and recognition will be given for the highest LINPACK performance.

Different: HPCC will not be used this year, conference participation will be included in the overall score, and the use of profiling tools is highly encouraged.
Understanding application performance and its execution on your chosen architecture is a crucial part of the competition this year. To assist you with understanding the applications and your architecture, Allinea is graciously providing all teams with software licenses to use their profiling and debugging tools.
The competition applications are:
- LINPACK Benchmark
The LINPACK Benchmark solves a dense system of linear equations and is used to rank the top HPC systems in the world.
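As a point of reference, the core computation that HPL times is the solution of a dense system Ax = b via LU factorization with partial pivoting. The routine below is only an illustrative pure-Python sketch of that operation on a tiny system, not HPL itself:

```python
def solve_dense(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting
    (a toy stand-in for the factorization HPL benchmarks at scale)."""
    n = len(A)
    # Work on copies so the caller's data is untouched.
    A = [row[:] for row in A]
    x = b[:]
    for k in range(n):
        # Partial pivoting: move the largest remaining entry onto the diagonal.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        x[k], x[p] = x[p], x[k]
        # Eliminate column k below the diagonal.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            x[i] -= m * x[k]
    # Back substitution on the resulting upper-triangular system.
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (x[i] - s) / A[i][i]
    return x

print(solve_dense([[4.0, 2.0], [2.0, 3.0]], [10.0, 8.0]))  # [1.75, 1.5]
```

HPL performs this same factorization in parallel on a distributed block-cyclic matrix, which is why problem size and block size tuning matter so much for a good score.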
- Trinity
Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data. Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes.
The set of all RNA molecules transcribed in one cell or a population of cells is known as a transcriptome. Current sequencing technology works by fragmenting RNA and reading the fragments as short reads, generally between 50 and 300 base pairs in length. One cell may produce hundreds of thousands of these short reads. The goal of Trinity is to take a set of short reads and reconstruct the transcriptome of the source cell or cells.
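The de Bruijn graph idea at the heart of Trinity's assembly stage can be sketched in a few lines. This toy code is not Trinity's Inchworm module; it only illustrates how overlapping k-mers drawn from short reads link into a graph whose unambiguous paths reconstruct a longer sequence:

```python
# Toy de Bruijn assembly sketch (illustrative only, not Trinity's algorithm).
from collections import defaultdict

def de_bruijn(reads, k):
    """Map each (k-1)-mer to the set of (k-1)-mers that can follow it."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            # Consecutive k-mers overlap by k-1 bases: prefix -> suffix edge.
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def walk(graph, start):
    """Greedily extend a contig while the path through the graph is unambiguous."""
    contig, node = start, start
    while len(graph[node]) == 1:
        node = next(iter(graph[node]))
        contig += node[-1]
    return contig

reads = ["ATGGC", "TGGCG", "GGCGT"]  # three overlapping toy reads
print(walk(de_bruijn(reads, 3), "AT"))  # reconstructs "ATGGCGT"
```

Real transcriptome data adds the hard parts: sequencing errors, repeated k-mers, and branching graphs where alternative splicing produces multiple valid paths, which is what Chrysalis and Butterfly are for.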
- WRF (Weather Research and Forecasting)
The WRF Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. It features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.
The effort to develop WRF began in the latter part of the 1990s as a collaborative partnership principally among the National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (represented by the National Centers for Environmental Prediction (NCEP) and the (then) Forecast Systems Laboratory (FSL)), the Air Force Weather Agency (AFWA), the Naval Research Laboratory, the University of Oklahoma, and the Federal Aviation Administration (FAA). The WRF Model is written in Fortran 90 and has interfaces written in C. It depends on the MPI and netCDF libraries.
Sample Data Sets
- Severe snowstorm event over Colorado
- Hurricane Katrina
- Nested domain over Colorado
- MILC
MILC is both the name of a physics collaboration and the code developed by that research group. The code is a varied suite of applications designed to study Quantum Chromodynamics (QCD), the theory of Nature's strong force. The MILC code can deal with both staggered and Wilson/Clover quarks. It has been used as a benchmark by SPEC and for a number of supercomputer purchases, such as Blue Waters and at NERSC. The development of the code and the research that it enables have been supported by the US Department of Energy and the National Science Foundation. Recently, support from the USDOE SciDAC program has been used to develop high-performance code for several lattice QCD community codes. This effort is coordinated by USQCD. The MILC code can make use of the QUDA library for GPUs, and effort is underway to coordinate work on staggered quarks with the QPhiX library for the Intel Xeon Phi processor. There is an entry for MILC in the Encyclopedia of Parallel Computing, edited by David Padua and published by Springer.
- HPC Repast
Are you ready for the zombie invasion? Will humanity survive the SC15 SCC? Teams will use HPC Repast to simulate a zombie invasion like no other.
Repast for HPC is a next-generation, agent-based modeling system written in C++. It is used in social-science modeling to understand characteristics of a global population that are difficult to predict from the rules governing the behavior of individual agents alone.
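Repast HPC itself is a C++/MPI framework, but the agent-based idea can be sketched in a few lines of Python: each agent follows a simple local rule, and the population-level outcome (here, how quickly an infection spreads) emerges from the interactions. Every name and parameter below is an illustrative assumption, not a Repast API:

```python
# Toy agent-based infection sketch -- not Repast HPC, just the modeling idea.
import random

def step(agents, infect_prob, rng):
    """One tick: every zombie gets one chance to infect each human."""
    zombies = sum(1 for a in agents if a == "Z")
    out = []
    for a in agents:
        if a == "H" and any(rng.random() < infect_prob for _ in range(zombies)):
            out.append("Z")  # this human was bitten this tick
        else:
            out.append(a)    # zombies stay zombies; lucky humans stay human
    return out

rng = random.Random(42)          # fixed seed so the run is reproducible
agents = ["Z"] + ["H"] * 99      # one zombie among 100 agents
for tick in range(10):
    agents = step(agents, 0.02, rng)
print(agents.count("Z"))
```

The point a framework like Repast HPC adds is scale: distributing millions of such agents across MPI ranks, with spatial grids, schedulers, and cross-process agent migration handled for you.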
- Mystery Application - A mystery application will be given to the teams on Monday morning, to be run in conjunction with HPL prior to the start of the Monday night Exhibit Gala Opening. This mystery application will take the place of HPCC used in previous years.
Revealed: The mystery application is the High Performance Conjugate Gradient (HPCG) Benchmark. You will be judged on your best HPCG run; however, you will also be judged on your demonstrated knowledge of this benchmark and its interaction with your chosen architecture. Demonstrated knowledge should include profile information (use your MAP profiling tool!).
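For teams unfamiliar with the benchmark's kernel, the conjugate gradient iteration HPCG is named for can be sketched in pure Python. HPCG proper adds a sparse 3D problem, a multigrid preconditioner, and MPI halo exchanges; this is only the unpreconditioned textbook iteration on a tiny system:

```python
# Textbook conjugate gradient sketch (illustrative; HPCG adds preconditioning
# and distributed sparse operations on top of this same iteration).

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cg(A, b, tol=1e-10, max_iter=100):
    """Solve Ax = b for a symmetric positive-definite matrix A."""
    x = [0.0] * len(b)
    r = b[:]              # residual r = b - A x (x starts at zero)
    p = r[:]              # initial search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)               # optimal step along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

print(cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))  # ~[1/11, 7/11]
```

Unlike HPL's dense, compute-bound factorization, this kernel is dominated by sparse matrix-vector products and dot products, so HPCG scores are typically limited by memory bandwidth and network latency, which is exactly the architectural interaction your profile should illuminate.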
Teams are highly encouraged to learn a profiling tool to gain a better understanding of scientific applications and their interactions with HPC hardware. Licenses from Allinea will be provided to all teams for the MAP profiling tool, but teams can use other profiling tools as well.
The Overall SCC Winner will be the team with the highest score when combining their correctly completed workload of the four competition applications, mystery application, best HPL run, application interviews, and HPC interview. The HPC interview will take into consideration the team's participation in the SC15 conference as well as their ability to wow the judges on their competition know-how.
Teams will be required to attend other aspects of the conference beyond the Student Cluster Competition, and this participation will be included in their final score. Further details will be provided before the competition.