Hybrid MPI-OpenMP programming is complicated. Are there other alternatives?
If your code exceeds the available memory on Hopper, the easiest solution may be to run with fewer active cores per node, leaving some cores idle. This is inefficient, however: you may need more nodes, and your NERSC repo is charged for every node you use, idle cores included. See the user documentation pages for information on how to run with fewer cores per node.

We believe that MPI + OpenMP is actually the simplest approach to hybrid programming. Other alternatives include MPI plus explicit threading (e.g., POSIX threads), one-sided communication models, MPI plus a Partitioned Global Address Space (PGAS) language such as UPC, or a PGAS model alone. These approaches generally require considerably more code rewriting.
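As an illustrative sketch only (the node counts, flag values, and program name are assumptions, not NERSC-recommended settings), a Hopper batch script that underpopulates nodes with MPI tasks, and optionally uses the remaining cores for OpenMP threads, might look something like this. The `aprun` options shown are the standard Cray ALPS flags: `-n` for total MPI tasks, `-N` for tasks per node, and `-d` for cores (threads) per task.

```shell
#!/bin/bash
#PBS -l mppwidth=48        # request 48 cores = 2 Hopper nodes (24 cores/node)
#PBS -l walltime=00:30:00
cd $PBS_O_WORKDIR

# Pure-MPI, underpopulated: 4 tasks per node instead of 24, so each
# task gets roughly 6x the per-core memory; the other cores sit idle.
aprun -n 8 -N 4 ./my_app

# Hybrid MPI + OpenMP: same 4 tasks per node, but put 6 OpenMP
# threads on the otherwise-idle cores of each task (4 x 6 = 24).
export OMP_NUM_THREADS=6
aprun -n 8 -N 4 -d 6 ./my_hybrid_app
```

Either way the job is charged for both full nodes; the hybrid variant simply recovers the idle cores as compute threads rather than leaving them unused.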