Welcome to the Single Board Cluster Competition 2024.
This site contains all relevant info for competitors in terms of submissions and other logistics. Time-based announcements will be reflected on this site when the time rolls over according to PST -- teams in time zones ahead of PST will be notified of specifics according to their local timezone.
Welcome to the SBCC24
The Single-Board Cluster Competition (SBCC) is a competition where teams from all around the world compete using single-board devices and other similarly simple hardware to create miniature supercomputing clusters. SBCC24 is the second edition of the competition. For information about SBCC23, check here: NeedLink
For SBCC24 we have two in-person teams and two remote teams.
- Opening Remarks: Mary Thomas and Nick Thorne
- Competition Rules and Overview (Paco)
Teams
MODE | ORG | LOCATION | SYSTEM | TEAM MEMBERS |
---|---|---|---|---|
In-Person | UCSD | La Jolla, CA | PI3 cluster, 20 nodes | Zixian Wang, Aarush Mehrotra, Henry Feng, Luiz Gurrola, James Choi, Pranav Prabu |
In-Person | TACC | Austin, TX | PI4 cluster, 20 nodes | Julian Mace, Kimberly Balboa, Yaritza Kenyon, Erik Rivera-sanchez, Jacob Getz |
Virtual | KU | Lawrence, KS | Orange Pi 5, 28 nodes | Abir Haque, Owen Krussow, Sam Lindsey, Yara Al-Shorman, Shad Ahmed, Jamie King |
Virtual | AAU | Aalborg, DK | 17x Radxa Rock | Sofie Finnes Øvrelid, Mads Beyer Mogensen, Thomas Møller Jensen, Tobias Sønder Nielsen |
Schedule
Agenda:
DAY | START | END | ACTIVITY |
---|---|---|---|
Thurs | 8:00 am PST | 5:00 pm PST | Setup |
Friday | 8:00 am PST | 12:00 pm PST | Benchmarking |
Friday | 12:00 pm PST | 5:00 pm PST | Competition Begins - Applications |
Saturday | 8:00 am PST | 3:00 pm PST | Final Submissions Due |
Saturday | 5:00 pm PST | | Awards Ceremony |
Here is the FULL schedule
San Diego Supercomputer Center / University of California, San Diego
Hardware
Item | Quantity |
---|---|
Raspberry Pi 3 Model B Rev 1.3 | 20 |
UniFi Standard 48 | 3 |
Software
Rocky Linux 9.3 (Blue Onyx)
System Software:
- BeeGFS, Slurm, Spack
Compilers & Libraries:
- GCC, OpenMPI, OpenBLAS
Setup Tools:
- Ansible
Team Details
Zixian Wang is a third-year student majoring in Computer Science. He does NLP research on scaling laws for chain-of-thought as well as mixture-of-experts models, and has experience training and running inference on cutting-edge systems with H100, A100, MI250, and MI210 accelerators, with a paper on MLPerf under review.
Aarush Mehrotra is a second-year UCSD student double majoring in Math-CS and Economics. He is interested in the intersection of high-performance computing and finance. He has experience in MLPerf, machine learning & AI, data science, and algorithmic trading.
Henry Feng is a third-year UCSD student majoring in Computer Engineering. He is interested in machine learning and AI, as well as hardware design.
Luiz Gurrola is a second-year student at UCSD majoring in Math-CS with a minor in Data Science. He is interested in machine learning and AI as well as computer architecture. He has experience in programming and setting up systems.
James Choi is a second-year student at UCSD majoring in Math-CS. He is interested in machine learning and applications of XR and blockchain technology. He has experience in Java and full-stack development.
Pranav Prabu is a second-year student at UCSD majoring in Computer Science. He is interested in machine learning, particularly applications like natural language processing and computer vision. He has experience with computer vision, machine learning, and system development.
Texas Advanced Computing Center / University of Texas, Austin
Hardware
Item | Quantity |
---|---|
Pi 5 boards | 20 |
TP-Link TL-SG1024D | 1 |
Anker 60 Watt Hubs | 5 |
UCTRONICS Upgraded Complete Enclosure | 5 |
Software
- System: Raspbian 64 bit
- MPI: MPICH
- Interfacing: clush (ClusterShell)
Team
Name | Institution | Role |
---|---|---|
Julian Mace | University of Texas | |
Kimberly Balboa | Texas State University | |
Yaritza Kenyon | Texas State University | |
Erik Rivera-sanchez | Texas State University | |
Jacob Getz | TACC | Mentor |
University of Kansas
Hardware
Item | Cost Per Item | No. Items | Total | Cost Link |
---|---|---|---|---|
Power Supply | $43.99 | 1 | $43.99 | https://a.co/d/aef7134 |
Ethernet Switch | $89.99 | 1 | $89.99 | https://a.co/d/80uFMTU |
Power Strip | $8.99 | 1 | $8.50 | https://a.co/d/9DdrLzO |
Power Distribution Board | $38.00 | 3 | $114.00 | https://a.co/d/7JitnBU |
Power Monitor | $15.99 | 1 | $15.99 | https://a.co/d/8LErGh1 |
64GB MicroSD Card (30-pack) | $139.50 | 1 | $134.53 | https://www.aliexpress.us/item/2251832790078323.html |
Orange Pi 5+ | $179.99 | 5 | $899.95 | https://a.co/d/9byszpf |
Orange Pi 5 | $137.99 | 17 | $2,345.83 | https://a.co/d/hcaDbo4 |
Ethernet cable (grey,100ft) | $60.00 | 1 | $60.00 | https://www.mcmaster.com/8245K31-8245K14/ |
M4 Hex Head Screws (100 ct.) | $12.58 | 2 | $25.16 | https://www.mcmaster.com/91239A148/ |
14AWG supply wires (black,50ft) | $22.48 | 2 | $22.48 | https://www.mcmaster.com/8054T17-8054T378/ |
24AWG supply wires (orange,50ft) | $10.87 | 1 | $10.87 | https://www.mcmaster.com/8054T12/ |
24AWG supply wires (black,50ft) | $10.87 | 1 | $10.87 | https://www.mcmaster.com/8054T12/ |
Ethernet RJ45 crimp-on connector | $9.34 | 10 | $93.40 | https://www.mcmaster.com/68995K67/ |
Heatsinks | $7.99 | 15 | $119.85 | https://a.co/d/imQvaOH |
Din Rails 7.5mm depth | $6.30 | 3 | $18.90 | https://www.mcmaster.com/8961K45/ |
Single Clip Mount for DIN 3 Rail | $2.53 | 30 | $75.90 | https://www.mcmaster.com/8961K28/ |
Clear Plexiglass 18"x36" (1/4") | $32.10 | 1 | $32.10 | https://a.co/d/chkxqMH |
Software
We intend to use Rocky Linux 9.3 as the operating system, the GNU Compiler Collection for compiling C/C++ programs, Slurm for job scheduling, MPICH as our MPI implementation, OpenBLAS as our BLAS implementation, and Spack for package management.
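As a rough illustration (not a final build recipe), that stack could be stood up with Spack along these lines; the package names are real Spack packages, but versions and variants are omitted:

```bash
# Illustrative only: install the intended toolchain through Spack.
spack install gcc
spack install mpich
spack install openblas
spack install slurm
# Make the tools visible in the current shell.
spack load mpich openblas
```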
Team Details
Our team comprises six undergraduates who are members of the KU Supercomputing Club, three of whom competed in the 2023 Student Cluster Competition (denoted by *). Below is the team roster with everyone's background and our intended strategy for each task.
Name | Background |
---|---|
Abir Haque* | Parallel Programming, Scientific Computing |
Owen Krussow | HPC System Administration, Parallel Programming, Circuits |
Sam Lindsey | Computer Aided Design, Embedded Systems |
Yara Al-Shorman* | Circuits, Computer Architecture, Embedded Systems |
Shad Ahmed* | Computer Architecture, Embedded Systems |
Jamie King | System Administration, Embedded Systems, Cybersecurity |
Aalborg Universitet / Aalborg Supercomputer Klub
Hardware
Item | Quantity | Purpose |
---|---|---|
Radxa Rock 5B | 16 | compute nodes
Raspberry Pi 4B | 1 | DHCP server
USW-24-POE | 1 | 95W switch
Noctua NF-A12 | 2 | cooling
MEAN WELL 300W switch-mode power supply | 1 | PSU
Software
- OS: Armbian 24.5.0-trunk.417 bookworm (Linux 6.8.5-edge-rockchip-rk3588)
- BLAS: BLIS mainline (a316d2)
- MPI: MPICH 3.4.2
- HPL: HPL 2.3
- HPCG: HPCG Reference 3.1
Team
Member |
---|
Sofie Finnes Øvrelid |
Mads Beyer Mogensen |
Thomas Møller Jensen |
Tobias Sønder Nielsen |
Schedule - based on each team's local time
Results and announcements will be done in Pacific Standard Time.
Start time/date | End time/date | Activity | Detail | Duration |
---|---|---|---|---|
Thursday 4/18/24 | | Setup Day | 8:00 am to 5:00 pm | |
7:00am | 10:00am | Setup Network/Power Infra | Need to come into the auditorium to properly set up all the switches + AV for remote teams | 3 hours |
8:00am | | Doors Open | Competition teams can come in and begin setup | |
8:00am | 5:00pm+ | Competition teams set up and run preliminary tests | It is okay to receive help at this point. The auditorium locks at 5pm, but teams may stay in the room as long as staff/committee accompany them. | |
Friday 4/19/24 | | Competition Day 1 | 8:00 am to 5:00 pm | |
8:00 am | 12:00 pm (Noon) | Benchmarks begin | | 4 hours |
12:00 pm | | Benchmark Submissions | HPL and HPCG runs | |
12:00 pm | | Mystery App Revealed | | |
12:00 pm | 5:00pm | Applications | Cube20 and Mystery App | |
Saturday 4/20/24 | | Competition Day 2 | 8:00 am to 5:00 pm | |
8:00 am | 3:00pm | Applications | Cube20 and Mystery App | |
3:00 pm | | Final Submission | Submit the final results of apps | |
3:00 pm | 5:00 pm | Tours/Campus | Possible datacenter tours if Ops is available | 2 hours |
5:00 pm | | Results Announced | | |
For on-site teams: Breakfast and lunch are served daily at 8am and 1pm, respectively.
Remote Teams
Some Schedule Time Translations
Note: Final results announcements are made with respect to Pacific Standard Time.
Event | Time Zone | Local Time | PST time |
---|---|---|---|
Aalborg Universitet | |||
Comp Day 1 | CEST | Friday 11am - 8pm | Friday 2am - 11am |
Comp Day 2 | CEST | Saturday 11am - 8pm | Saturday 2am - 11am |
Benchmarking | CEST | Friday 11am - 3pm | Friday 2am - 6am |
Final Benchmark Submission | CEST | Friday 3pm | Friday 6am |
Mystery App Announcement | CEST | Friday 3pm | Friday 6am |
Application Time | CEST | Friday 3:00pm - 8:00pm + Saturday 11:00 am - 6:00pm | Friday 6:00am - 11am + Saturday 2am - 11 am |
Final Application Submission | CEST | Saturday 8pm | Saturday 11am |
Results Announcements | CEST | Sunday 0:00am | Saturday 5pm
University of Kansas | |||
Comp Day 1 | CST | Friday 10am - 7pm | Friday 8am - Friday 5pm |
Comp Day 2 | CST | Saturday 10am - 7pm | Saturday 8am - Saturday 5pm |
Benchmarking | CST | Friday 10am - 2pm | Friday 8am - 12pm
Final Benchmark Submission | CST | Friday 2pm | Friday 12pm
Mystery App Announcement | CST | Friday 2pm | Friday 12pm
Application Time | CST | Friday 2:00pm - 7:00pm + Saturday 10am - 5:00pm | Friday 12:00pm - 5:00pm + Saturday 8am - 3:00pm
Final Application Submission | CST | Saturday 5pm | Saturday 3pm |
Results Announcements | CST | Saturday 7pm | Saturday 5pm |
In-Person Teams
The complete duration of Thursday is for setting up your clusters.
You can consult any outside sources and real people about your cluster, and receive help, up until 8am Friday.
You are always free to speak to other teams.
Setup instructions for onsite teams.
Networking
For networking, pick a port off the judge switch; an IP will be leased to you from SDSC. For NTP, use the servers ntp1.ucsc.edu and ntp01.nysernet.org, synced to UTC.
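For example, with chrony (one common NTP client; ntpd or systemd-timesyncd would work just as well), pointing at those servers could look like:

```bash
# Illustrative /etc/chrony.conf entries for the competition NTP servers.
server ntp1.ucsc.edu iburst
server ntp01.nysernet.org iburst
# Then restart the daemon and confirm the sources are reachable:
#   sudo systemctl restart chronyd && chronyc sources
```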
Power
Your cluster will draw power from the PDU port closest to your cluster (port 1 or 6).
The PDU is behind our NAT; the IP to check PDU power usage is on our Grafana dashboard.
Access
Note that you will not have access to your clusters after 5:00pm, so plan ahead before the end of each day.
Remote Teams
Setup Instructions for remote teams:
General instructions
Please keep a video on display of your overall setup and a view of your power monitor. We will trust that you are not accessing your cluster outside the allowed hours, but leave the cameras on if you can.
For the Aalborg team: Please follow these times as closely as possible in your respective time zone. You will receive the information on the mystery app as well.
For KU: The expectation is that you will be on call and will align your hours to the ones here on the West Coast (PDT). If this is an issue, let us know.
Application Announcements
Application specifications will be announced according to the calendar, based on your local time zone. Please coordinate with the competition organizers.
Networking
Networking is however you see fit. Simply sync with your local NTP server using UTC and make sure the power monitoring for your cluster is synced to the same one. For reference, the in-person teams are syncing time to UTC based off ntp1.ucsc.edu and ntp01.nysernet.org.
Power
For power, since you are remote, we expect you to keep a log of your power and, during submissions, include a full log of your cluster's power consumption throughout the competition -- synced to the same NTP server as your cluster. If you need accommodations please PM one of the organizers.
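A minimal sketch of such a log, assuming a power meter you can poll from a script (read_power_watts below is a placeholder for however your meter exposes readings):

```bash
#!/usr/bin/env bash
# Hypothetical power logger: append an NTP-synced UTC timestamp and a reading
# every 10 seconds. read_power_watts is a placeholder for your meter's interface.
while true; do
    echo "$(date -u +%Y-%m-%dT%H:%M:%SZ),$(read_power_watts)" >> power_log.csv
    sleep 10
done
```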
Access
We're running on an honor code: remote teams are expected not to access their clusters after 5:00pm their time, in fairness to the on-site teams.
General Submission Instructions
Submission Location
We shall be sending out Google Drive links for corresponding teams to upload their files -- you might want to look at rclone to mount Google Drive onto your cluster.
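For example, assuming you have already created a Google Drive remote named gdrive with rclone config (the remote and folder names here are placeholders):

```bash
# Mount the shared submission folder onto the cluster...
rclone mount gdrive:SBCC24-submissions /mnt/submissions --daemon
# ...or skip mounting and just copy files up when they are ready:
rclone copy ./Benchmarks gdrive:SBCC24-submissions/Benchmarks
```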
Submissions
Depending on how far ahead of Pacific time you are, many of you will be either substantially into your setup day or just ahead of us. We would like to accept submissions for the benchmarks (these should all be single text files) into the competition's Google Drive location, but access to each team's submission folder will be restricted to members of that team. Please send the email addresses that you want added to your team's submission folder. We'll be adding the mentors to start with.
Every team should have a folder they can access, and only their team.
Submission Folder Names
University of Kansas: KU
University of Texas, Austin: UT Austin
Aalborg Universitet: ASK
University of California, San Diego: UCSD
More application specific info will be released during competition Day 1.
File Structure
EXAMPLE-SCHOOL/
├── Benchmarks/
├── Cube20/
└── Mystery/
Additional Info
For remote teams, we'd also like you to submit files corresponding to the power consumption of your cluster for us to validate.
For the benchmarks, please upload a copy of your .dat files for both HPL and HPCG and the output file produced by each.
HPL
- Submit your *.dat file used for your submission run
- Submit your output file, usually called HPL.out by default, produced when file output is specified in your *.dat input.
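For reference, the output destination is controlled by the first few lines of the standard HPL input file; a typical header looks roughly like this (stock values, not a tuned configuration -- with the device line set to anything other than 6 or 7, results go to the named file):

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
8            device out (6=stdout,7=stderr,file)
```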
HPCG
- Submit your *.dat file used for your submission run
- Submit the output and log files generated by your runs, including the log called hpcg_log_[timestamp].txt.
- Official runs are standardly 30 minutes, but runs of 15+ minutes will be accepted to compensate for the short time.
- Problem size must occupy at least ¼ of total main memory
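For reference, the stock hpcg.dat has this shape; the local problem dimensions below are placeholders -- size them so the problem occupies at least ¼ of total memory, and set the run time to at least 900 s (15 min) for the shortened option or 1800 s (30 min) for an official run:

```
HPCG benchmark input file
Sandia National Laboratories; University of Tennessee, Knoxville
104 104 104
1800
```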
Example Submission
TEAM-NAME/
└── Benchmarks/
├── HPL.dat
├── HPL.out
├── hpcg.dat
├── HPCG-Benchmark[version]_[timestamp].txt
└── hpcg_log_[timestamp].txt
Do not submit files twice
For more questions please ask Francisco Gutierrez and Khai Vu, accessible in person or at ffgutierrez@ucsd.edu and k6vu@ucsd.edu.
Cube20
Instructions
A complete package and instructions for cube20 can be found in this Google Drive folder.
Competition Input and Validation
For the Rubik's Cube application, you will be solving as many scrambles as you can from the input file attached during the competition. Please refer to the file UCSD_SBCC_2024_Rubiks_Cube.pdf
distributed earlier for submission guidelines. It's been proven that the maximum number of steps needed to solve any Rubik's cube scramble, otherwise known as God's Number, is 20. Only solutions with 20 or fewer steps will be awarded points. Since each step is composed of a letter, denoting a face of the cube, and a number, denoting the number of turns of that face, each solution should have a maximum of 40 characters. Here's your input, good luck!
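Since each solution line is capped at 40 characters, a quick sanity check before submitting might look like this (a sketch, assuming one solution per line; solutions.txt is a placeholder name):

```bash
# Flag any solution that exceeds the 40-character limit.
awk 'length($0) > 40 { print "line " NR " is too long (" length($0) " chars)" }' solutions.txt
```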
For more questions please ask Benjamin Li, accessible in person or at li.ben002@gmail.com
Mystery Application
Surprise: there is not just one mystery application, there are two!
Our first one is BOINC (which apparently stands for “Berkeley Open Infrastructure for Network Computing”, but I’m not gonna remember that acronym; it’s pronounced /bɔɪŋk/ – rhymes with "oink").
BOINC is a volunteer scientific computing project that allows users to perform computation for a variety of scientific computing projects – projects range from genome sequencing to numerical relativity to prime number calculations. Take a look here for a list of all of the projects.
So actually, BOINC is not just one application, but a framework that lets you run any number of them – you get to pick the ones that seem most interesting to you!
Our second mystery app is SU2. While BOINC keeps some portion of your cluster occupied, see if you can squeeze in a few SU2 runs. At many supercomputing centers, researchers have to demonstrate that their code will scale up to many, many nodes before they get access to the really big queues. We make them produce a scaling test.
Boinc
You should look through the BOINC user manual for details, but I’ll give you some rough ideas of what to do. The goal is to get BOINC projects running on your cluster – the more computation you perform, the more you’ll be rewarded.
First, install BOINC. There are versions in the repositories of most major distros, but you might not want to use that version! Look at the user manual. This will give you the programs boinc_client (the main “driver” program, which should be running in the background; apparently aliased to boinc on some distros), boinccmd, a command-line interface for performing BOINC tasks, and boincmgr, a GUI that does the same (if you do use this, you probably want to switch it to “View -> Advanced Mode”).
Then, to set up a BOINC project, the general approach is to go to the website of the project and make an account (boincstats.com can help manage multiple accounts for you). Then, in the boincmgr GUI, you go to “Add Project”, put in your login credentials, and the project will start giving you tasks (assuming it has some to give), and start crunching! Keep an eye on the boinc_client output – it is where any messages are logged.
Note that it is possible to use boincmgr remotely, by following the instructions here. Alternatively, it is possible to use boinccmd to run projects from the command line, and to configure many other things.
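For example, attaching a project entirely from the command line might look something like this (the project URL and credentials are placeholders; check boinccmd --help for the exact option names in your BOINC version):

```bash
# Look up the account key for a project, attach the local client to it,
# then list the tasks it hands out. URL, credentials, and key are placeholders.
boinccmd --lookup_account https://example-project.org you@example.edu 'your-password'
boinccmd --project_attach https://example-project.org YOUR_ACCOUNT_KEY
boinccmd --get_tasks
```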
You probably want to run the BOINC projects across several nodes in your cluster. This simply involves running the BOINC client on each of the nodes. It’s up to you how to arrange this – you could use the boincmgr GUI, use boinccmd on each of the machines, or use more complicated options designed for clusters. How you do it is up to you!
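One low-tech sketch, assuming passwordless SSH to each node and the distro's boinc-client service (hostnames, project URL, and account key are placeholders):

```bash
# Start the BOINC client on every node and attach each one to the same project.
for node in node01 node02 node03; do
    ssh "$node" "sudo systemctl enable --now boinc-client && \
        boinccmd --project_attach https://example-project.org YOUR_ACCOUNT_KEY"
done
```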
Although it is possible to set up BOINC “teams” (i.e. groups of accounts whose credits get reported together), we suggest creating just one account for your team, and sharing the password (since you are all in the same place anyway). That’ll make keeping track of credits easier.
Things to think about
- Not all applications will run on all architectures – you should do some research on this (hint: the Science United page lists all of the ones that work on ARM).
- Run some benchmarks! These might give you some idea of which applications will suit your hardware:
boinccmd --host localhost --passwd <password> --run_benchmarks
- This will print some information to the boinc_client log. These are the results I get on my laptop, but your single-board computers will probably be much slower:
18-Apr-2024 16:42:46 [---] Benchmark results:
18-Apr-2024 16:42:46 [---] Number of CPUs: 16
18-Apr-2024 16:42:46 [---] 6135 floating point MIPS (Whetstone) per CPU
18-Apr-2024 16:42:46 [---] 70384 integer MIPS (Dhrystone) per CPU
- Note that the floating point MIPS (millions of instructions per second) and integer MIPS are not the same – think about your cluster’s architecture, and which projects might be better suited for it.
- Many applications will take a looong time to run. On my laptop, Einstein@HOME apps took ~6 hours, and World Community Grid’s Mapping Cancer apps took ~3 hours. You may want to do some research, or try several different applications before finding one that will be practical – make sure to manage your time and plan ahead!
- Incidentally, there’s cute OpenGL graphics (the “Show graphics” button in
boincmgr
) 😀
How it’ll be scored:
The BOINC project conveniently gives us a measure of credits, which they call “cobblestones”.
We will score you on the number of credits earned – the top scoring team will get the full 20 points, every team below that will get proportionately that many points. We’ll also give some (fixed) extra points just for getting BOINC running and starting to work on a project, even if you don’t successfully manage to finish a full “work unit”.
What to submit!
Ideally, you’d just tell us your team username for a site like BoincStats or ScienceUnited (both of which are “login managers” for all of the BOINC projects, and let you see a given user’s overall stats), and we could just see the total number of credits accrued. To make our lives even easier, you could upload a screenshot showing your total number of credits.
Unfortunately, I am aware that linking your accounts to these various websites is rather finicky (it took me an embarrassing amount of time), and I don’t want you wasting time on that kind of thing, so we are willing to accept the full output of boinccmd --get_project_status (and we’ll just add up the relevant values ourselves).
Conclusion
And lastly, have fun!! You have the freedom to learn about projects in almost any area of science or mathematics you might be interested in! So go forth, learn about them, and come back and tell us all the cool stuff you found out!
For more questions pertaining to BOINC, please ask Ritoban Roy-Chowdhury, accessible in person or at rroychowdhury@ucsd.edu.
Su2
While BOINC keeps some portion of your cluster occupied, see if you can squeeze in a few SU2 runs. At many supercomputing centers, researchers have to demonstrate that their code will scale up to many, many nodes before they get access to the really big queues. We make them produce a scaling test.
We would like you to run a number of the tutorial examples in SU2. The tutorials repository can be found here: https://github.com/su2code/Tutorials.git
We would like you to produce scaling tests for the following core counts: 1, 2, 4, 8, 12, 16, 32, 64, 128 (stop when your cluster runs out of cores). If you have time, fill in some of the gaps to keep populating the graph. If your cluster has GPUs you can do increments of whole GPUs (instead of individual cores).
The following scaling tests are required:
Inviscid_Bump
Inviscid_ONERAM6
Inviscid_Wedge
The application appears to have some methods for tracking time internally, but your amazing (speaking for myself: not-so-amazing) committee couldn't make it work - welcome to computing - there's always some part that doesn't work quite right. Extra credit is awarded to anyone who gets SCREEN_OUTPUT= WALL_TIME working. For everyone else there is the Linux time command, or you can write a wrapper script around your launch command that snapshots the date command before and at the end of your runs.
The requirement is to produce a spreadsheet with the following columns:
total cores (-n) | nodes (-N) | runtime | command executed
(if a machines-file applies, upload and link)
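A rough sketch of a driver that sweeps the requested core counts and records exactly those columns (SU2_CFD is SU2's standard parallel solver binary; the config filename and cores-per-node value are placeholders for your setup):

```bash
#!/usr/bin/env bash
# Hypothetical scaling sweep: run one SU2 case per core count and append a
# "cores,nodes,runtime_seconds,command" row to scaling.csv.
CFG=inv_bump.cfg          # placeholder: use the tutorial's actual config file
CORES_PER_NODE=8          # placeholder: cores per board on your cluster
echo "total cores (-n),nodes (-N),runtime,command executed" > scaling.csv
for n in 1 2 4 8 12 16 32 64 128; do
    nodes=$(( (n + CORES_PER_NODE - 1) / CORES_PER_NODE ))
    cmd="mpirun -n $n SU2_CFD $CFG"
    start=$(date +%s)
    $cmd > "su2_${n}cores.log" 2>&1
    end=$(date +%s)
    echo "$n,$nodes,$((end - start)),$cmd" >> scaling.csv
done
```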
Please provide a single upload of history.csv, restart_flow.dat, and any *.vtu files - they should all match for each run.
Following this, you are required to run the two cases below (one short, one long), using a wrapper script that saves the output of the date command before the run and after the run completes.
Turbulent_ONERAM6
Turbulent_Flat_Plate
With the following modifications to the config file:
turb_SA_flatplate.cfg
% Epsilon to control the series convergence
CONV_CAUCHY_EPS= 1E-6
Change the above to read:
CONV_CAUCHY_EPS= 1E-5
turb_ONERAM6.cfg
% Epsilon to control the series convergence
CONV_CAUCHY_EPS= 1E-6
Change the above to read:
CONV_CAUCHY_EPS= 1E-12
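If you prefer to script those edits rather than make them by hand, a simple sed pass would do it (a sketch; run it from the directory containing each config file):

```bash
# Apply the required convergence-threshold changes to the two turbulence configs.
sed -i 's/CONV_CAUCHY_EPS= 1E-6/CONV_CAUCHY_EPS= 1E-5/'  turb_SA_flatplate.cfg
sed -i 's/CONV_CAUCHY_EPS= 1E-6/CONV_CAUCHY_EPS= 1E-12/' turb_ONERAM6.cfg
```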
Upload history.csv, restart_flow.dat, and any *.vtu files, and provide your timing via whatever method you use to capture it (if it's using time, do not forget to actually add it before you start a serious run - don't ask us how we know).
None of the scaling-test runs (Inviscid_[Bump|Wedge|ONERAM6]) should take longer than 30 minutes, so if your solutions are never converging, consider it an error and cancel the job rather than end up with jobs that never converge. Half the points are awarded for fully populating the curve no matter how fast your jobs run. Extra points can be scored for noting any of the artifacts that we expect to find, with a brief comment on why you think they occur.
Finally, the turbulence flows will award points for fastest time to convergence (while using the specified convergence values above). If you are forced to cancel either of the turbulence runs early for any reason, save out the last line, specifically the iteration number along with your time, e.g.:
| 1095| 7.7166e-01| -8.955272| -10.731343| 0.251469| 0.015833|
This would get you partial credit (a higher iteration number is better) if no team completes these runs.
Note that for all the turbulence simulations, specifically turb_SA_flatplate.cfg and turb_SA_ONERAM6.cfg, you simply need to run them once with the above information attached -- it should give you an idea of how SU2 should run. For the scalability tests you need to run Inviscid_Bump, Inviscid_ONERAM6, and Inviscid_Wedge.
For more questions pertaining to SU2, please ask Nick Thorne, accessible in person or at nthorne@tacc.utexas.edu.
Grading Breakdown
The overall score for any individual team will be measured as follows, with individual application scores weighted against the team that scores highest for that given application, i.e. if the highest benchmark result for HPCG is 160 GFLOP/s, the scores for every other team will be 10*(their score)/(160 GFLOP/s).
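For example, under that weighting a team reporting 80 GFLOP/s on HPCG against a top result of 160 GFLOP/s would receive 10*(80/160) = 5 of HPCG's 10 available points.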
Application | Weight | Total |
---|---|---|
HPL | 16.67% | 10 |
HPCG | 16.67% | 10 |
Cube20 | 33.33% | 20 |
Mystery App | 33.33% | 20 |
Total | 100% | 60 |
SBCC24 Final Results
Overall Leaderboard
Rank | Team |
---|---|
1 | Aalborg |
2 | Kansas |
3 | Texas |
4 | San Diego |
HPL Leaderboard
Rank | Team | Gflops |
---|---|---|
1 | Aalborg | 6.81E+02 |
2 | Kansas | 3.41E+02 |
3 | Texas | 3.18E+01 |
4 | San Diego | 3.57E+00 |
HPCG Leaderboard
Rank | Team | Gflops |
---|---|---|
1 | Kansas | 25.182 |
2 | Aalborg | 24.5444 |
3 | San Diego | 3.17678 |
4 | Texas | 2.41697 |
Cube20 Leaderboard
Rank | Team | Raw Score |
---|---|---|
1 | Kansas | 2058724 |
1 | San Diego | 2058724 |
1 | Aalborg | 2058724 |
2 | Texas | 2058720 |
Mystery Leaderboard
Rank | Team | Cobblestones |
---|---|---|
1 | Aalborg | 59,271,270,805.56 |
2 | Kansas | 1256.195928 |
3 | San Diego | ~2.76 |
4 | Texas |
Score Breakdowns
Totals
Note that the SU2 column is not included in the sum.
Team | HPL | HPCG | Cube20 | BOINC | SU2 | Total | Score % /60pts |
---|---|---|---|---|---|---|---|
UT | 0.4671414935 | 0.9598006513 | 19.99996114 | 5 | 0 | 26.42690329 | 44.04% |
KU | 5.006681842 | 10 | 20 | 12 | 10 | 47.00668184 | 78.34% |
UCSD | 0.05243850503 | 1.261528076 | 20 | 7 | 3.333333333 | 28.31396658 | 47.19% |
Aalborg(ASK) | 10 | 9.746803272 | 20 | 20 | 12.85714286 | 59.74680327 | 99.58% |
HPL
Team | HPL | Input | GFLOPS | Relative Percent | Final Scaled Score |
---|---|---|---|---|---|
UT | WR11C2R4 | 8000 192 8 10 10.73 | 3.18E+01 | 4.67% | 0.4671414935 |
KU | WR11C2R4 | 91392 128 5 8 1492.74 | 3.41E+02 | 50.07% | 5.006681842 |
UCSD | WR11C2R4 | 5000 4 2 8 23.35 | 3.57E+00 | 0.52% | 0.05243850503 |
Aalborg | WR00L2L4 | 113408 192 4 16 1428.02 | 6.81E+02 | 100.00% | 10 |
TOP | 6.81E+02 | 100% | 10 |
HPCG
Team | GFLOPS | Relative Percent | Final Scaled Score |
---|---|---|---|
UT | 2.41697 | 0.09598006513 | 0.9598006513 |
KU | 25.182 | 1 | 10 |
UCSD | 3.17678 | 0.1261528076 | 1.261528076 |
Aalborg | 24.5444 | 0.9746803272 | 9.746803272 |
TOP | 25.182 | 100% | 10 |
Cube20
Team | Raw Score | Normalize Score | Final Scaled Score (20pts) |
---|---|---|---|
UT | 2058720 | 0.999998057 | 19.99996114 |
KU | 2058724 | 1 | 20 |
UCSD | 2058724 | 1 | 20 |
Aalborg | 2058724 | 1 | 20 |
Top Raw: | 2058724 |
Boinc
tldr: it was real messy
Team | Total Credits | Weighted Points | Normalize Score | Final Scaled Score (20pts) | |||
---|---|---|---|---|---|---|---|
UT | They got it setup and started running, but no "fraction done" numbers in the submission? | 0 | 5 | ||||
KU | 1256.195928 | 1256.195928 | 0.00000002119401037 | 12 | |||
UCSD | Unfortunately did not complete any tasks, but partial completion of several -- equivalent to approximately 2.76 tasks | 0 | 7 | ||||
AAU | 55502843941 | 109586364.5 | 3658840500 | 59,271,270,805.56 | 1 | 20 |
Su2
This was also messy.
Scores for this one aren't included in the final score.
Team | Scaling: InvBump | Scaling: OneRAM6 | Scaling: Wedge | Speed: Turb_ONERAM6 | Rank | Speed: Turb_Flat_Plate | Rank | Raw Score | Final Scaled Score (20pts) |
---|---|---|---|---|---|---|---|---|---|
UT | no | no | no | 0 | 4 | 0 | 4 | 0 | 0 |
KU | yes | yes | yes | 98.736 | 2 | 648 | 2 | 2 | 10 |
UCSD | yes(OOM) | yes | yes | 7440 | 3 | 7920 | 3 | 1.2 | 3.333333333 |
Aalborg | yes | yes | yes | 3 | 1 | 8 | 1 | 2.8 | 12.85714286 |