Welcome to the Single Board Cluster Competition 2024.

This site contains all relevant information for competitors regarding submissions and other logistics. Time-based announcements will appear here when the corresponding time rolls over in PST; teams in time zones ahead of PST will be notified of specifics according to their local time zone.


Welcome to the SBCC24

The Single-Board Cluster Competition (SBCC) is a competition in which teams from around the world use single-board computers, and other similarly simple hardware, to build miniature supercomputing clusters. SBCC24 is the second edition of the competition. For information about SBCC23, check here: NeedLink

For SBCC24 we have two in-person teams, and two remote teams.

  • Opening Remarks: Mary Thomas and Nick Thorne
  • Competition Rules and Overview (Paco)


Teams

MODE | ORG | LOCATION | SYSTEM | TEAM MEMBERS
In-Person | UCSD | La Jolla, CA | Pi 3 cluster, 20 nodes | Zixian Wang, Aarush Mehrotra, Henry Feng, Luiz Gurrola, James Choi, Pranav Prabu
In-Person | TACC | Austin, TX | Pi 4 cluster, 20 nodes | Julian Mace, Kimberly Balboa, Yaritza Kenyon, Erik Rivera-sanchez, Jacob Getz
Virtual | KU | Lawrence, KS | Orange Pi 5, 28 nodes | Abir Haque, Owen Krussow, Sam Lindsey, Yara Al-Shorman, Shad Ahmed, Jamie King
Virtual | AAU | Aalborg, DK | 17x Radxa Rock | Sofie Finnes Øvrelid, Mads Beyer Mogensen, Thomas Møller Jensen, Tobias Sønder Nielsen


Schedule

Agenda:

DAY | START | END | ACTIVITY
Thurs | 8:00 am PST | 5:00 pm PST | Setup
Friday | 8:00 am PST | 12:00 pm PST | Benchmarking
Friday | 12:00 pm PST | 5:00 pm PST | Competition Begins - Applications
Saturday | 8:00 am PST | 3:00 pm PST | Final Submissions Due
Saturday | 5:00 pm PST | | Awards Ceremony

Here is the FULL schedule


San Diego Supercomputer Center / University of California, San Diego

Hardware

Item | Quantity
Raspberry Pi 3 Model B Rev 1.3 | 20
UniFi Standard 48 | 3

Software

Rocky Linux 9.3 (Blue Onyx)

System Software:

  • BeeGFS, Slurm, Spack

Compilers & Libraries:

  • GCC, OpenMPI, OpenBLAS

Setup Tools:

  • Ansible

Team Details

Zixian Wang is a third-year student majoring in Computer Science. He does NLP research on scaling laws for chain-of-thought as well as mixture-of-experts models. He has experience training and running inference on cutting-edge systems with H100, A100, MI250, and MI210 GPUs, and has a paper on MLPerf under review.

Aarush Mehrotra is a second-year UCSD student double majoring in Math-CS and Economics. He is interested in the intersection of high-performance computing and finance. He has experience in MLPerf, machine learning & AI, data science, and algorithmic trading.

Henry Feng is a third year UCSD student majoring in Computer Engineering. He is interested in machine learning and AI, and also hardware design.

Luiz Gurrola is a second year student at UCSD majoring in Math-CS with a minor in Data Science. He is interested in machine learning and AI as well as computer architecture. He has experience in programming and setting up systems.

James Choi is a second year student at UCSD majoring in Math-CS. He is interested in machine learning and applications of XR and blockchain technology. He has experience in Java and Full Stack development.

Pranav Prabu is a second year student at UCSD majoring in Computer Science. He is interested in machine learning, in applications like natural language processing and computer vision. He has experience with computer vision, machine learning, and system development.

Texas Advanced Computing Center / University of Texas, Austin

Hardware

Item | Quantity
Pi 5 boards | 20
TP-Link TL-SG1024D | 1
Anker 60 Watt Hubs | 5
UCTRONICS Upgraded Complete Enclosure | 5

Software

  • System: Raspbian 64 bit
  • MPI: MPICH
  • Interfacing: Clush

Team

Name | Institution | Role
Julian Mace | University of Texas |
Kimberly Balboa | Texas State University |
Yaritza Kenyon | Texas State University |
Erik Rivera-sanchez | Texas State University |
Jacob Getz | TACC | Mentor

University of Kansas

Hardware

Item | Cost Per Item | No. Items | Total Cost | Link
Power Supply | $43.99 | 1 | $43.99 | https://a.co/d/aef7134
Ethernet Switch | $89.99 | 1 | $89.99 | https://a.co/d/80uFMTU
Power Strip | $8.99 | 1 | $8.50 | https://a.co/d/9DdrLzO
Power Distribution Board | $38.00 | 3 | $114.00 | https://a.co/d/7JitnBU
Power Monitor | $15.99 | 1 | $15.99 | https://a.co/d/8LErGh1
64GB MicroSD Card (30-pack) | $139.50 | 1 | $134.53 | https://www.aliexpress.us/item/2251832790078323.html
Orange Pi 5+ | $179.99 | 5 | $899.95 | https://a.co/d/9byszpf
Orange Pi 5 | $137.99 | 17 | $2,345.83 | https://a.co/d/hcaDbo4
Ethernet cable (grey, 100ft) | $60.00 | 1 | $60.00 | https://www.mcmaster.com/8245K31-8245K14/
M4 Hex Head Screws (100 ct.) | $12.58 | 2 | $25.16 | https://www.mcmaster.com/91239A148/
14AWG supply wires (black, 50ft) | $22.48 | 2 | $22.48 | https://www.mcmaster.com/8054T17-8054T378/
24AWG supply wires (orange, 50ft) | $10.87 | 1 | $10.87 | https://www.mcmaster.com/8054T12/
24AWG supply wires (black, 50ft) | $10.87 | 1 | $10.87 | https://www.mcmaster.com/8054T12/
Ethernet RJ45 crimp-on connector | $9.34 | 10 | $93.40 | https://www.mcmaster.com/68995K67/
Heatsinks | $7.99 | 15 | $119.85 | https://a.co/d/imQvaOH
DIN Rails 7.5mm depth | $6.30 | 3 | $18.90 | https://www.mcmaster.com/8961K45/
Single Clip Mount for DIN 3 Rail | $2.53 | 30 | $75.90 | https://www.mcmaster.com/8961K28/
Clear Plexiglass 18"x36" (1/4") | $32.10 | 1 | $32.10 | https://a.co/d/chkxqMH

Software

We intend to use Rocky Linux 9.3 as the operating system, the GNU Compiler Collection for compiling C/C++ programs, Slurm for job scheduling, MPICH as our MPI implementation, OpenBLAS as our BLAS implementation, and Spack for package management.

Team Details

Our team comprises six undergraduates who are members of the KU Supercomputing Club, three of whom competed in the 2023 Student Cluster Competition (denoted by *). Below is the team roster with everyone's background and our intended strategy for each task.

Name | Background
Abir Haque* | Parallel Programming, Scientific Computing
Owen Krussow | HPC System Administration, Parallel Programming, Circuits
Sam Lindsey | Computer Aided Design, Embedded Systems
Yara Al-Shorman* | Circuits, Computer Architecture, Embedded Systems
Shad Ahmed* | Computer Architecture, Embedded Systems
Jamie King | System Administration, Embedded Systems, Cybersecurity

Aalborg Universitet / Aalborg Supercomputer Klub

Hardware

Item | Quantity | Purpose
Radxa Rock 5B | 16 | compute nodes
Raspberry Pi 4B | 1 | DHCP server
USW-24-POE | 1 | 95W switch
Noctua NF-A12 | 2 | cooling
MEAN WELL 300W switch-mode power supply | 1 | PSU

Software

  • OS: Armbian 24.5.0-trunk.417 bookworm (Linux 6.8.5-edge-rockchip-rk3588)
  • BLAS: BLIS mainline (a316d2)
  • MPI: MPICH 3.4.2
  • HPL: HPL 2.3
  • HPCG: HPCG Reference 3.1

Team

Member
Sofie Finnes Øvrelid
Mads Beyer Mogensen
Thomas Møller Jensen
Tobias Sønder Nielsen

Schedule - based on any given team's local time

Results and announcements will be made in Pacific Standard Time.

Start Time/Date | End Time/Date | Title | Detail | Duration
Thursday 4/18/24 | | Setup Day | | 8:00 am to 5:00 pm
7:00 am | 10:00 am | Setup Network/Power Infra | Need to come into the auditorium to properly set up all the switches + AV for remote teams | 3 hours
8:00 am | | Open Doors | Competition teams can come and begin setup |
8:00 am | 5:00 pm+ | Competition teams set up and run preliminary tests | It is okay to receive help at this point. The auditorium locks at 5 pm, but teams may stay in the room as long as staff/committee accompany them. |
Friday 4/19/24 | | Competition Day 1 | | 8:00 am to 5:00 pm
8:00 am | 12:00 pm (Noon) | Benchmarks begin | | 4 hours
12:00 pm | | Benchmark Submissions | HPL and HPCG runs |
12:00 pm | | Mystery App Revealed | |
12:00 pm | 5:00 pm | Applications | Cube20 and Mystery App |
Saturday 4/20/24 | | Competition Day 2 | | 8:00 am to 5:00 pm
8:00 am | 3:00 pm | Applications | Cube20 and Mystery App |
3:00 pm | | Final Submission | Submit the final results of the apps |
3:00 pm | 5:00 pm | Tours/Campus | Possible datacenter tours if Ops is available | 2 hours
5:00 pm | | Results Announced | |

For on-site teams: breakfast and lunch are served daily at 8 am and 1 pm, respectively.

Remote Teams

Some Schedule Time Translations

Note: Final results announcements are made with respect to Pacific Standard Time.

Event | Time Zone | Local Time | PST Time

Aalborg Universitet
Comp Day 1 | CEST | Friday 11am - 8pm | Friday 2am - 11am
Comp Day 2 | CEST | Saturday 11am - 8pm | Saturday 2am - 11am
Benchmarking | CEST | Friday 11am - 3pm | Friday 2am - 6am
Final Benchmark Submission | CEST | Friday 3pm | Friday 6am
Mystery App Announcement | CEST | Friday 3pm | Friday 6am
Application Time | CEST | Friday 3:00pm - 8:00pm + Saturday 11:00am - 6:00pm | Friday 6:00am - 11am + Saturday 2am - 11am
Final Application Submission | CEST | Saturday 8pm | Saturday 11am
Results Announcements | CEST | Sunday 0:00am | 5pm

University of Kansas
Comp Day 1 | CST | Friday 10am - 7pm | Friday 8am - 5pm
Comp Day 2 | CST | Saturday 10am - 7pm | Saturday 8am - 5pm
Benchmarking | CST | Friday 10am - 2pm | Friday 8am - 12pm (noon)
Final Benchmark Submission | CST | Friday 2pm | Friday 12pm (noon)
Mystery App Announcement | CST | Friday 2pm | Friday 12pm (noon)
Application Time | CST | Saturday 2:00pm - 7:00pm + Sunday 10am - 7:00pm | Saturday 12:00pm - 5:00pm + Sunday 8am - 5:00pm
Final Application Submission | CST | Saturday 5pm | Saturday 3pm
Results Announcements | CST | Saturday 7pm | Saturday 5pm

In-Person Teams

The complete duration of Thursday is for setting up your clusters.

You may consult outside sources and other people about your cluster, and receive help, up until 8 am Friday.

You are always free to speak to other teams.

Setup instructions for onsite teams.

Networking

For networking, pick a port off the judge switch; an IP will be leased to you by SDSC. For NTP, use the servers ntp1.ucsc.edu and ntp01.nysernet.org, synced to UTC.
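
If it helps, here is a minimal sketch of pointing a node at those servers, assuming chrony on a systemd-based distro (package names and config paths vary by distro):

    sudo sh -c 'printf "server ntp1.ucsc.edu iburst\nserver ntp01.nysernet.org iburst\n" >> /etc/chrony.conf'
    sudo systemctl restart chronyd
    chronyc sources                     # verify both servers are reachable
    sudo timedatectl set-timezone UTC   # keep cluster clocks and logs in UTC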

Power

Your cluster will draw power from the PDU port closest to it, port 1 or port 6.

The PDU is behind our NAT; PDU power usage can be checked from our Grafana dashboard.

Access

Note that you will not have access to your clusters after 5:00 pm, so plan ahead before the end of each day.

Remote Teams

Setup Instructions for remote teams:

General instructions

Please keep a video on display showing your overall setup and a view of your power monitor. You will be trusted not to access your cluster remotely, but leave the cameras on if you can.

For the Aalborg team: please follow these times as closely as possible in your time zone. You will receive the information on the mystery app as well.

For KU: the expectation is that you will be on call and align your hours with ours here on the West Coast (PDT). If this is an issue, let us know.

Application Announcements

Application specifications will be announced according to the calendar, based on your local time zone. Please coordinate with the competition organizers.

Networking

Set up networking however you see fit. Simply sync with your local NTP server using UTC, and make sure your cluster's power monitoring is synced to the same server. For reference, the in-person teams are syncing time to UTC via ntp1.ucsc.edu and ntp01.nysernet.org.

Power

For power, since you are remote, we expect you to keep a power log and to include with your submissions a full log of your cluster's power consumption throughout the competition, timestamped against the same NTP server as your cluster. If you need accommodations, please message one of the organizers.
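
A minimal sketch of such a log, assuming a hypothetical read_power command standing in for however your particular power monitor is queried (serial, SNMP, vendor API, etc.):

    # Append a UTC timestamp and a reading every 10 seconds; read_power is a placeholder.
    while true; do
        echo "$(date -u +%Y-%m-%dT%H:%M:%SZ),$(read_power)" >> power_log.csv
        sleep 10
    done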

Access

We are running on an honor code for remote teams: do not access your clusters after 5:00 pm your local time, in fairness to the on-site teams.

General Submission Instructions

Submission Location

We will be sending out Google Drive links for each team to upload their files -- you might want to look at rclone to mount Google Drive onto your cluster.
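
As a rough sketch (the remote name "gdrive" and the folder name below are placeholders for whatever your shared link ends up being):

    rclone config                                      # one-time interactive setup of a Google Drive remote
    mkdir -p ~/sbcc-drive
    rclone mount gdrive:SBCC24 ~/sbcc-drive --daemon   # mount the shared folder in the background
    # ...or skip mounting entirely and push files with `rclone copy`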

Submissions

Depending on how far ahead of Pacific Time your time zone is, many of you will be either substantially into your setup day or just ahead of us. We will accept submissions for the benchmarks (each should be a single text file) into the competition's Google Drive location, but access to each team's submission folder will be limited to members of that team. Please send the email addresses you want added to your team's submission folder; we will add the mentors to start with.

Every team should have a folder that they, and only they, can access.

Submission Folder Names

University of Kansas: KU

University of Texas, Austin: UT Austin

Aalborg Universitet: ASK

University of California, San Diego: UCSD

More application-specific info will be released during Competition Day 1.

File Structure

EXAMPLE-SCHOOL/
├── Benchmarks/
├── Cube20/
└── Mystery/

Additional Info

For remote teams, we'd also like you to submit files recording the power consumption of your cluster so that we can validate it.


For the benchmarks, please upload a copy of the .dat files for both HPL and HPCG, along with the output files each produces.

HPL

  • Submit your *.dat file used for your submission run
  • Submit your output file, usually called HPL.out by default, or whatever output file is specified in your *.dat input (an example invocation is sketched below).
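
For reference, a typical submission run looks something like the sketch below (OpenMPI-style mpirun assumed; your process count, hostfile, and HPL.dat tuning will differ):

    mpirun -np 80 --hostfile hosts ./xhpl   # xhpl reads HPL.dat from the working directory
    ls HPL.out                              # the output file name is whatever HPL.dat specifies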

HPCG

  • Submit your *.dat file used for your submission run
  • Submit the output and log files generated by your runs, e.g. hpcg_log_[timestamp].txt.
  • Official runs are normally 30 minutes, but runs of 15+ minutes will be accepted to compensate for the short competition window.
  • Problem size must occupy at least ¼ of total main memory (see the sample hpcg.dat sketch below).
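
For illustration, a sketch of an hpcg.dat (the first two lines are free-form text; the grid dimensions below are placeholders -- size them so the problem uses at least ¼ of your memory; 1800 s is a full official run, 900+ s is the accepted minimum):

    printf '%s\n' \
        'HPCG benchmark input file' \
        'SBCC24 submission run' \
        '104 104 104' \
        '1800' > hpcg.dat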

Example Submission

TEAM-NAME/
    └── Benchmarks/
        ├── HPL.dat
        ├── HPL.out
        ├── hpcg.dat
        ├── HPCG-Benchmark[version]_[timestamp].txt
        └── hpcg_log_[timestamp].txt

Do not submit files twice


For more questions please ask Francisco Gutierrez and Khai Vu, accessible in person or at ffgutierrez@ucsd and k6vu@ucsd.edu

Cube20

Instructions

A complete package and instructions for Cube20 can be found in this Google Drive folder

Competition Input and Validation

For the Rubik's Cube application, you will be solving as many scrambles as you can from the input file attached during the competition. Please refer to the file UCSD_SBCC_2024_Rubiks_Cube.pdf distributed earlier for submission guidelines. It's been proven that the maximum number of steps needed to solve any Rubik's cube scramble, otherwise known as God's Number, is 20. Only solutions with 20 or fewer steps will be awarded points. Since each step is composed of a letter, denoting a face of the cube, and a number, denoting the number of turns of that face, each solution should have a maximum of 40 characters. Here's your input, good luck!
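
A quick sanity check before you upload, assuming (purely for illustration) one solution string per line in a file called solutions.txt -- defer to the submission format in the PDF above:

    awk 'length($0) > 40 { printf "line %d: %d chars (over the 40-char / 20-step limit)\n", NR, length($0) }' solutions.txt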


For more questions please ask Benjamin Li, accessible in person or at li.ben002@gmail.com

Mystery Application

Surprise, there is not only 1 mystery application, there are 2!

Our first one is BOINC (which apparently stands for “Berkeley Open Infrastructure for Network Computing”, but I’m not gonna remember that acronym; it’s pronounced /bɔɪŋk/ – rhymes with "oink").

BOINC is a volunteer scientific computing platform that allows users to perform computation for a variety of scientific computing projects – projects range from genome sequencing to numerical relativity to prime number calculations. Take a look here for a list of all of the projects.

So actually, BOINC is not just one application, but a framework that lets you run any number of them – you get to pick the ones that seem most interesting to you!

Our second mystery app is SU2. While BOINC keeps some portion of your cluster occupied, see if you can squeeze in a few SU2 runs. At many supercomputing centers, to get access to the really big queues, researchers have to demonstrate that their code will scale up to many, many nodes. We make them produce a scaling test.


Boinc

You should look through the BOINC user manual for details, but I’ll give you some rough ideas of what to do. The goal is to get BOINC projects running on your cluster – the more computation you perform, the more you’ll be rewarded.

First, install BOINC. There are versions in the repositories of most major distros, but you might not want to use that version! Look at the user manual. This will give you the programs boinc_client (the main “driver” program, which should be running in the background, and which is apparently also aliased to boinc on some distros), boinccmd, a command-line interface for performing BOINC tasks, and boincmgr, a GUI that does the same (if you do use this, you probably want to switch it to “View -> Advanced Mode”).

Then, to set up a BOINC project, the general approach is to go to the website of the project and make an account (boincstats.com can help manage multiple accounts for you). Then, in the boincmgr GUI, go to “Add Project”, put in your login credentials, and the project will start giving you tasks (assuming it has some to give), and you can start crunching! Keep an eye on the boinc_client output – it is where any messages are logged.

Note that it is possible to use boincmgr remotely, by following the instructions here. Alternatively, it is possible to use boinccmd to run projects from the command line, and configure many other things.

You probably want to run the BOINC projects across several nodes in your cluster. This simply involves running the BOINC client on each of the nodes. It’s up to you how to arrange this – it’s possible to use the GUI boincmgr, or use boinccmd on each of the machines, or many more complicated options designed for clusters. How you do it is up to you!
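
One possible arrangement is sketched below -- the node names, project URL, and account key are placeholders, and the boinc-client service name may differ on your distro:

    for node in node01 node02 node03; do
        ssh "$node" 'sudo systemctl enable --now boinc-client &&
                     boinccmd --project_attach https://example-project.org/ YOUR_ACCOUNT_KEY'
    done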

Although it is possible to set up BOINC “teams” (i.e. groups of accounts whose credits get reported together), we suggest creating just one account for your team, and sharing the password (since you are all in the same place anyway). That’ll make keeping track of credits easier.

Things to think about

  • Not all applications will run on all architectures – you should do some research on this (hint: the Science United page lists all of the ones that work on ARM).
  • Run some benchmarks! These might give you some idea of which applications might be better suited to your cluster:
    • boinccmd --host localhost --passwd <password> --run_benchmarks
    • This will print some information to the boinc_client log. These are the results I get on my laptop, but your single board computers will probably be much slower:
18-Apr-2024 16:42:46 [---] Benchmark results:
18-Apr-2024 16:42:46 [---]	Number of CPUs: 16
18-Apr-2024 16:42:46 [---]	6135 floating point MIPS (Whetstone) per CPU
18-Apr-2024 16:42:46 [---]	70384 integer MIPS (Dhrystone) per CPU
  • Note that the floating point MIPS (millions of instructions per second) and integer MIPS are not the same – think about your cluster’s architecture, and which projects might be better suited for it.
  • Many applications will take a looong time to run. On my laptop, Einstein@HOME apps took ~6 hours, and World Community Grid’s Mapping Cancer apps took ~3 hours. You may want to do some research, or try several different applications before finding one that will be practical – make sure to manage your time and plan ahead!
  • Incidentally, there’s cute OpenGL graphics (the “Show graphics” button in boincmgr) 😀

How it’ll be scored:

The BOINC project conveniently gives us a measure of credits, which they call “cobblestones”.

We will score you on the number of credits earned – the top-scoring team will get the full 20 points, and every other team will earn points in proportion to their credits. We’ll also give some (fixed) extra points just for getting BOINC running and starting to work on a project, even if you don’t successfully manage to finish a full “work unit”.

What to submit!

Ideally, you’d just tell us your team username for a site like BoincStats or ScienceUnited (both of which are “login managers” for all of the BOINC projects, and let you see a given user’s overall stats), and we could just see the total number of credits accrued. To make our lives even easier, you could upload a screenshot showing your total number of credits.

Unfortunately, I am aware that linking your accounts to these various websites is rather finicky (it took me an embarrassing amount of time), and I don’t need you wasting time on that kind of thing, so we are willing to accept the full output of boinccmd --get_project_status (and we’ll just add up the relevant values ourselves).
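
Something like the following on each node is fine (one file per host keeps the numbers easy to add up):

    boinccmd --get_project_status > "boinc_status_$(hostname).txt"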

Conclusion

And lastly, have fun!! You have the freedom to learn about projects in almost any area of science or mathematics you might be interested in! So go forth, learn about them, and come back and tell us all the cool stuff you found out!
For more questions pertaining to BOINC, please ask Ritoban Roy-Chowdhury, accessible in person or at rroychowdhury@ucsd.edu


Su2

While BOINC keeps some portion of your cluster occupied, see if you can squeeze in a few SU2 runs. At many supercomputing centers, to get access to the really big queues, researchers have to demonstrate that their code will scale up to many, many nodes. We make them produce a scaling test.

We would like you to run a number of the tutorial examples in SU2. The tutorials repository can be found here: https://github.com/su2code/Tutorials.git

We would like you to produce scaling tests for the following core counts: 1, 2, 4, 8, 12, 16, 32, 64, 128 (stop when your cluster runs out of cores). If you have time, fill in some of the gaps to keep populating the graph. If your cluster has GPUs, you can do increments of whole GPUs (instead of individual cores). A sketch of such a sweep appears after the list of required cases below.

The following scaling tests are required:

Inviscid_Bump
Inviscid_ONERAM6
Inviscid_Wedge
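
A sketch of one such sweep, assuming an OpenMPI-style mpirun, GNU time, and that the tutorial config (e.g. inv_bump.cfg for Inviscid_Bump) sits in the working directory:

    for n in 1 2 4 8 12 16 32 64; do
        /usr/bin/time -o "inviscid_bump_${n}cores.time" \
            mpirun -np "$n" --hostfile hosts SU2_CFD inv_bump.cfg \
            > "inviscid_bump_${n}cores.log" 2>&1
    done
    # each .time file gives the runtime for one row of the spreadsheet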

The application appears to have some methods for tracking time internally, but your amazing (speaking for myself: not-so-amazing) committee couldn't make it work - welcome to computing - there's always some part that doesn't work quite right. Extra credit is awarded to anyone who gets SCREEN_OUTPUT= WALL_TIME working. For everyone else, there is the Linux time command, or you can write a wrapper script around your launch command that snapshots the date command before and after your runs.
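
A minimal date-snapshot wrapper of that kind might look like this (config name and process count are just examples):

    date -u +%Y-%m-%dT%H:%M:%SZ >  run_timing.txt                                  # start stamp
    mpirun -np 16 --hostfile hosts SU2_CFD turb_SA_flatplate.cfg > run.log 2>&1
    date -u +%Y-%m-%dT%H:%M:%SZ >> run_timing.txt                                  # end stamp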

The requirement is to produce a spreadsheet with the following columns:

total cores (-n) | nodes (-N) | runtime | command executed

(if a machines-file applies, upload and link)

Please provide a single upload of history.csv, restart_flow.dat and any *.vtu files - they should all match for each run.

Following this, you are required to run the two cases below (one short, one long) using a wrapper script that saves the output of the "date" command before the run and after the run completes.

Turbulent_ONERAM6
Turbulent_Flat_Plate
With the following modifications to the config file:

      turb_SA_flatplate.cfg
      % Epsilon to control the series convergence
      CONV_CAUCHY_EPS= 1E-6
      Change the above to read:
      CONV_CAUCHY_EPS= 1E-5

      turb_ONERAM6.cfg
      % Epsilon to control the series convergence
      CONV_CAUCHY_EPS= 1E-6
      Change the above to read:
      CONV_CAUCHY_EPS= 1E-12

Upload history.csv, restart_flow.dat, and any *.vtu files, and provide your timing via whatever method you use to capture it (if it's using "time", do not forget to actually add it before you start a serious run - don't ask us how we know).

None of the scaling-test runs (Inviscid_[Bump|Wedge|ONERAM6]) should take longer than 30 minutes, so if your solutions are not converging, treat it as an error and cancel the job rather than end up with jobs that never converge. Half the points are awarded for fully populating the curve, no matter how fast your jobs run. Extra points can be scored for noting any of the artifacts that we expect to find, with a brief comment on why you think they occur.

Finally, the turbulence flows will award points for fastest time to convergence (while using the specified convergence values above). If you are forced to cancel either of the turbulence runs early for any reason, saving out the last line of output, specifically the iteration number, along with your time,

e.g:

|        1095|  7.7166e-01|   -8.955272|  -10.731343|    0.251469|    0.015833|
       ^this^

would get you partial credit (higher number is better) if no team completes these runs.

Note that for the turbulence simulations, specifically turb_SA_flatplate.cfg and turb_ONERAM6.cfg, you simply need to run them once with the above information attached -- it should give you an idea of how SU2 should run. For the scalability tests you need to run Inviscid_Bump, Inviscid_ONERAM6, and Inviscid_Wedge.


For more questions pertaining to SU2, please ask Nick Thorne, accessible in person or at nthorne@tacc.utexas.edu


Grading Breakdown

The overall score for each team is computed as follows, with individual application scores weighted against the team that scores highest on that application; i.e., if the highest benchmark result for HPCG is 160 GFLOP/s, every other team's score is 10 * (their result) / (160 GFLOP/s).

Application | Weight | Total
HPL | 16.67% | 10
HPCG | 16.67% | 10
Cube20 | 33.33% | 20
Mystery App | 33.33% | 20
Total | | 60

SBCC24 Final Results

Overall Leaderboard

Rank | Team
1 | Aalborg
2 | Kansas
3 | Texas
4 | San Diego

HPL Leaderboard

Rank | Team | Gflops
1 | Aalborg | 6.81E+02
2 | Kansas | 3.41E+02
3 | Texas | 3.18E+01
4 | San Diego | 3.57E+00

HPCG Leaderboard

Rank | Team | Gflops
1 | Kansas | 25.182
2 | Aalborg | 24.5444
3 | San Diego | 3.17678
4 | Texas | 2.41697

Cube20 Leaderboard

Rank | Team | Raw Score
1 | Kansas | 2058724
1 | San Diego | 2058724
1 | Aalborg | 2058724
2 | Texas | 2058720

Mystery Leaderboard

Rank | Team | Cobblestones
1 | Aalborg | 59,271,270,805.56
2 | Kansas | 1256.195928
3 | San Diego | ~2.76
4 | Texas |

Score Breakdowns

Totals

Note that the SU2 column is not included in the sum.

Team | HPL | HPCG | Cube20 | BOINC | SU2 | Total | Score % (/60 pts)
UT | 0.4671414935 | 0.9598006513 | 19.99996114 | 5 | 0 | 26.42690329 | 44.04%
KU | 5.006681842 | 10 | 20 | 12 | 10 | 47.00668184 | 78.34%
UCSD | 0.05243850503 | 1.261528076 | 20 | 7 | 3.333333333 | 28.31396658 | 47.19%
Aalborg (ASK) | 10 | 9.746803272 | 20 | 20 | 12.85714286 | 59.74680327 | 99.58%

HPL

Team | HPL Input | GFLOPS | Relative Percent | Final Scaled Score
UT | WR11C2R4 8000 192 8 10 10.73 | 3.18E+01 | 4.67% | 0.4671414935
KU | WR11C2R4 91392 128 5 8 1492.74 | 3.41E+02 | 50.07% | 5.006681842
UCSD | WR11C2R4 5000 4 2 8 23.35 | 3.57E+00 | 0.52% | 0.05243850503
Aalborg | WR00L2L4 113408 192 4 16 1428.02 | 6.81E+02 | 100.00% | 10
TOP | | 6.81E+02 | 100% | 10

HPCG

Team | GFLOPS | Relative Percent | Final Scaled Score
UT | 2.41697 | 0.09598006513 | 0.9598006513
KU | 25.182 | 1 | 10
UCSD | 3.17678 | 0.1261528076 | 1.261528076
Aalborg | 24.5444 | 0.9746803272 | 9.746803272
TOP | 25.182 | 100% | 10

Cube20

Team | Raw Score | Normalized Score | Final Scaled Score (20 pts)
UT | 2058720 | 0.999998057 | 19.99996114
KU | 2058724 | 1 | 20
UCSD | 2058724 | 1 | 20
Aalborg | 2058724 | 1 | 20
Top Raw: | 2058724 | |

Boinc

tldr: it was real messy

Team | Total Credits | Weighted Points | Normalized Score | Final Scaled Score (20 pts)
UT | They got it set up and started running, but no "fraction done" numbers in the submission? | | 0 | 5
KU | 1256.195928 | 1256.195928 | 0.00000002119401037 | 12
UCSD | Unfortunately did not complete any tasks, but partial completion of several -- equivalent to approximately 2.76 tasks | | 0 | 7
AAU | 55502843941109586364.53658840500 | 59,271,270,805.56 | 1 | 20

Su2

this was also messy

scores for this one aren't included in the final score

Team | Scaling: InvBump | Scaling: OneRAM6 | Scaling: Wedge | Speed: Turb_ONERAM6 | Rank | Speed: Turb_Flat_Plate | Rank | Raw Score | Final Scaled Score (20 pts)
UT | no | no | no | 0 | 4 | 0 | 4 | 0 | 0
KU | yes | yes | yes | 98.736 | 2 | 648 | 2 | 2 | 10
UCSD | yes (OOM) | yes | yes | 7440 | 3 | 7920 | 3 | 1.2 | 3.333333333
Aalborg | yes | yes | yes | 3 | 1 | 8 | 1 | 2.8 | 12.85714286