Let’s say you’re a researcher with terabytes of data who wants to predict weather patterns in the Treasure Valley for the next 50 years. Running those computations on an average desktop computer would likely take decades, which is why the university has debuted a new high-performance computing (HPC) cluster capable of solving complex computational problems in minutes instead of years.
In January, the R2 cluster was installed in the Riverfront Hall server room. It’s about the size of two large cabinets, and while there are several other clusters currently on campus, this is the largest and fastest. Unlike the others, any researcher on campus can submit a request to use it – in fact, multiple researchers can use it at once.
“The university has reached a size and scale with enough researchers who need a cluster on campus,” explained Steven Cutchin, director of research computing. “The high-performance cluster is not as big as national clusters, but for our faculty, it gives them the ability to process big data sets, run their programs, produce their research and publish all on campus.”
Cutchin noted that right now wait times to use the cluster are minimal. Here’s how you schedule a time to use the R2 cluster.
To put its computing power in perspective, the average person’s laptop has eight gigabytes of memory. A researcher’s computer typically has 32 gigabytes. “Even if the data you have can fit into your desktop system, you only have one of them. So if you run it on one machine, sometime in 2025 your job will finish. Whereas on the HPC, it’ll finish in a week. It’s really that fast: jobs that will take five years on your desktop system will take a week to finish on the HPC,” Cutchin said.
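Cutchin’s five-years-to-a-week comparison implies a rough speedup factor. As a back-of-the-envelope sketch in Python – the figures come straight from his quote and are illustrative, not benchmarks:

```python
# Back-of-the-envelope check of the speedup Cutchin describes:
# a job that takes five years on a desktop finishing in about a week on the HPC.
# Figures are rough estimates from the article, not measured benchmarks.

desktop_days = 5 * 365   # roughly five years of desktop compute time
hpc_days = 7             # the same job in about a week on the cluster

speedup = desktop_days / hpc_days
print(f"Implied speedup: about {speedup:.0f}x")  # about 261x
```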
That’s because the R2 cluster has 192 gigabytes of memory on each of its 22 nodes, which act as the brains of the system. Together, these nodes can perform 137 teraflops – a teraflop is one million million (10^12) floating-point operations per second. The fastest computer in the world currently performs around 10 petaflops (a petaflop is 1,000 teraflops).
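Those specs can be sanity-checked with a little arithmetic. A minimal Python sketch, using only the numbers quoted in the article:

```python
# Cluster-scale arithmetic from the figures in the article:
# 22 nodes, 192 GB of memory each, 137 teraflops in aggregate.

nodes = 22
memory_per_node_gb = 192
total_memory_gb = nodes * memory_per_node_gb   # 4,224 GB across the cluster

laptop_gb = 8                                  # the "average laptop" figure above
print(total_memory_gb / laptop_gb)             # 528.0 - times a laptop's memory

teraflop = 10**12                              # 10^12 floating-point ops/sec
petaflop = 1000 * teraflop
print(137 * teraflop / petaflop)               # 0.137 - R2's capacity in petaflops
```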
“We tested it with a job that would take three hours on R1. On a national supercomputer, that same job took 20 minutes on eight nodes. On R2 it took 20 minutes on four nodes, or 13 minutes and 57 seconds on eight nodes,” said Kelly Byrne, senior HPC engineer and point person for the cluster. “It’s awesome fast.”
The debut of the R2 cluster allows Boise State to be part of an ecosystem that includes large, national HPC centers, such as those funded by the National Science Foundation, which run very large clusters to support researchers’ needs. “Now at Boise State, as part of this ecosystem, we have graduate students and professors with very hard applications – seismology, geology, gene sequencing, all earth science environmental data – they have access to this large compute system,” Cutchin said. “This gives them the ability to tackle large problems, have an impact in their area, and have a pathway to working on research problems on a national scale. It’s a growth route to collaborating with colleagues at larger institutions.”
Another unique aspect of the cluster is the public-private partnership that helped cover its $300,000 price tag.
“We’re very fortunate that Idaho Power wanted to get a compute cluster to do their research on,” Cutchin said. “We purchased and installed the cluster, and Idaho Power bought nodes, effectively donating them with the agreement that the ones they purchased and installed, they could submit jobs to. But when they’re not using them, the whole campus can use them. This is absolutely the ideal public/private partnership. We get more access and they get access to our nodes. It’s about as good as you can get because we got almost double the compute capacity for effectively half the price.”
Idaho Power, for its part, gets compute nodes without having to house and pay for a data center of its own.
“These are hard partnerships to pull off, and this is quite a testimony to the positive relationships, the good working environment, and the strong community support in the Treasure Valley that we at Boise State enjoy,” Cutchin said.
The upgrade helps reaffirm Boise State’s role as a research university of distinction in the Northwest. “Thanks to our new compute capacity, we’re capable of doing jobs at the same scale as some large, national institutions,” said Cutchin. “What’ll happen in the HPC community now is we’ll run bigger jobs. Before, a researcher might not have studied something because it was too big; now they have the resources to do so. Now we can reconfirm and reproduce research experiments and work with colleagues at other institutions.”