
How many SUs will I be charged to run a 3-hour job that requires 384 GB on the large memory nodes?

Jobs on the PDAF and PDAFM (large memory) nodes are allocated on the basis of processing cores. Each of Triton's PDAF and PDAFM nodes has 32 cores: a 256-GB PDAF node has 8 GB of memory per core, and a 512-GB PDAFM node has 16 GB per core. Memory on these nodes is allocated in 128-GB chunks (16 cores on PDAF, eight cores on PDAFM).

For a 384-GB job, you must request either 384 GB (24 cores on a PDAFM node) or 512 GB (32 cores). A 384-GB request may result in sharing the node with another job that consumes the remaining 128 GB. For exclusive access to the node, request 512 GB, which allocates all 32 cores.

Large-memory cores are charged at 4 * BaseSU per core-hour, so the charges are:

Exclusive access: 3 hours * 32 cores * (4 * BaseSU) = 384 BaseSU
Shared access: 3 hours * 24 cores * (4 * BaseSU) = 288 BaseSU

To run a job on the large memory nodes, your batch script should specify the queue "large":

#PBS -q large

One additional complication: due to system overhead, a small amount of memory on each node is reserved and is not available to jobs.
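A minimal batch-script sketch of the exclusive-access case is below. Only the `#PBS -q large` directive comes from the answer above; the `-l` resource lines and the use of BaseSU = 1 in the arithmetic are assumptions for illustration, and the exact resource syntax may differ on your system.

```shell
#!/bin/sh
#PBS -q large
#PBS -l nodes=1:ppn=32       # assumed syntax: all 32 cores = the full 512-GB node
#PBS -l walltime=03:00:00    # 3-hour job

# Charge in BaseSU: hours * cores * 4 (large-memory cores bill at 4x BaseSU)
echo "Exclusive (32 cores): $((3 * 32 * 4)) BaseSU"   # 384
echo "Shared (24 cores):    $((3 * 24 * 4)) BaseSU"   # 288
```

Requesting 24 cores instead (the shared case) reduces the charge to 288 BaseSU at the cost of possibly sharing the node.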
