How can I submit jobs that will run wherever there are free CPUs?
SHARCNET clusters differ in several ways, notably in which storage they can access and in the properties of their compute nodes. For instance, a job that refers to files in /work or /scratch may currently only run on that particular cluster, since those file systems are cluster-specific. Similarly, a job may require a very large amount of memory per processor, available only on Bull. However, jobs that do little I/O, are serial, and use modest amounts of memory can be run using the "global jobs" facility, which lets them run on whichever cluster has free CPUs.
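As a rough illustration, a job suited to the global facility might look like the self-contained serial script below. It stages a small input alongside the job instead of reading from a cluster-specific /work or /scratch path, and it writes its result back to the (globally visible) home directory. The file names (mycode, input.dat), the use of $TMPDIR for node-local scratch, and the sqsub options in the comment are assumptions for the sketch; the exact mechanism for marking a job as "global" is described in the global jobs documentation and is not reproduced here.

    #!/bin/bash
    # run_serial.sh -- a self-contained serial job: modest memory, little I/O,
    # and no references to cluster-specific /work or /scratch file systems.
    cd "$TMPDIR"                       # node-local scratch directory (assumed to be set by the scheduler)
    cp "$HOME/project/input.dat" .     # stage the small input from the home directory
    "$HOME/project/mycode" input.dat > output.dat   # serial, CPU-bound work
    cp output.dat "$HOME/project/"     # copy the result back to the home directory

    # Hypothetical submission with the usual serial-queue options; the flag or
    # queue that marks the job as global is not shown here:
    #   sqsub -q serial -r 2h -o out.%J ./run_serial.sh

Because the script touches only its staged copy of the input and the home directory, it has no dependence on where it runs, which is exactly the property the global jobs facility relies on.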