What is Mathematica?

Wolfram Mathematica is a commercially licensed (non-free) software package for symbolic mathematical computation.  It is used in many scientific, engineering, mathematical, and computing fields.

Accessing Mathematica

To use Mathematica on Flux or Armis, you must load its software module.  Run module avail mathematica to see which versions of Mathematica are available on the cluster, then run module load mathematica to access the version of Mathematica you want.  If you do not explicitly specify a version, you will get whichever version is the default on the cluster at that time (the example below loads version 10.3.1 explicitly).

[markmont@flux-login2 ~]$ module avail mathematica

--------------------------- /sw/arc/centos7/modulefiles ---------------------------
 mathematica/10.3.1 mathematica/11.1.0 (D)

 D: Default Module

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any
of the "keys".

[markmont@flux-login2 ~]$ module load mathematica/10.3.1
[markmont@flux-login2 ~]$

Running Mathematica

Running Mathematica with a graphical interface using ARC Connect

ARC Connect is usable only with Flux; it cannot be used to access Armis.

  1. In your web browser on your local computer, go to https://connect.arc-ts.umich.edu/ and start a VNC session.  Detailed instructions are available in the ARC Connect documentation.
  2. Run the following commands in the terminal inside the VNC session:
    module load mathematica
    mathematica

Running Mathematica in an interactive job

Start an interactive job. See the instructions for starting an interactive job on the cluster.

You can then use Mathematica from the command line inside the interactive job by running the command

math
Or you can use Mathematica with a graphical interface by running the command


following the instructions that are displayed, and then in the terminal window of the resulting VNC session running the command



Running Mathematica in a non-interactive job

Use a standard PBS script and at the end include the Mathematica command you want to run.

Here is an example PBS script for a single-node Mathematica job:

####  PBS preamble

#PBS -N sample_job

# Change "bjensen" to your uniqname:
#PBS -M bjensen@umich.edu
#PBS -m abe

# Change the number of cores (ppn=4), amount of memory,
# and walltime to be what you need for your job:
#PBS -l nodes=1:ppn=4,mem=16000mb,walltime=04:00:00
#PBS -j oe

# Change "example_flux" to the name of your Flux allocation:
#PBS -A example_flux
#PBS -q flux

####  End PBS preamble

#  Show list of CPUs you ran on, if you're running under PBS
if [ -n "$PBS_NODEFILE" ] ; then cat $PBS_NODEFILE ; fi

#  Change to the directory you submitted the job from
if [ -n "$PBS_O_WORKDIR" ] ; then cd $PBS_O_WORKDIR ; fi

#  Put your job commands below here.  Change "example.m"
#  to be the name of the file containing the Mathematica
#  commands you want to run.
echo "Started at: " $(date)
wolframscript -v -file example.m
echo "Finished at: " $(date)
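
For reference, the file named in the script (example.m above; substitute your own filename) contains ordinary Wolfram Language commands.  A minimal illustrative example:

```mathematica
(* example.m -- illustrative contents; any Wolfram Language code works here. *)
result = Integrate[Sin[x]^2, x];
Print[result]
```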

Parallel computation in Mathematica

Using sub-kernels (single or multiple nodes)

Most of the methods for parallel computation in Mathematica use sub-kernels.  These methods work on Flux in both single-node and multi-node jobs, but some setup and management are required.

When starting your job, request 1 more core than the number of sub-kernels you want to run.  For example, if you want to evaluate expressions using 8 cores, request 9 cores in your job.  The additional core is used to run the main kernel for Mathematica that coordinates the work of all the sub-kernels.

Here is Mathematica code showing how to evaluate expressions in parallel on Flux and Armis.  Note that starting up the sub-kernels is expensive, so you should do it only once at the beginning of your job and only close the kernels when your code is done.

(* Load the remote-kernels package that provides $RemoteCommand and RemoteMachine: *)
Needs["SubKernels`RemoteKernels`"]

(* Flux and Armis won't work if Mathematica uses ssh to start remote
   kernels, as is the default.  Have Mathematica use pbsdsh instead: *)
$RemoteCommand = "pbsdsh -h `1` wolfram -wstp -linkmode Connect `4` -linkname '`2`' -subkernel -noinit -nopaclet >/dev/null 2>&1 &"

(* Get the list of cores in this job -- there will be 1 item for each core on each node.
   Remove the first core from the list since it is for the main kernel. *)
corelist = Drop[ReadList[Environment["PBS_NODEFILE"], String], 1]

(* Start 1 kernel for every core in the list: *)
Do[LaunchKernels[RemoteMachine[i,1]], {i, corelist}]

(* At this point, you can use the sub-kernels to evaluate expressions in parallel.
   Here is a trivial example that just returns the node name and process ID for each
   kernel -- replace this line with the commands you need to run: *)
ParallelEvaluate[{$MachineName, $ProcessID}]

(* When you are done with all computation, be sure to shut down all the kernels before ending the job: *)
CloseKernels[]
When using sub-kernels on multiple nodes, it is often the case that the main kernel will need a much greater amount of memory than the sub-kernels need, since the main kernel will be doing the non-parallel parts of the computation.  There is no way on Flux and Armis to request more memory for the first node than for additional nodes, but you can work around the problem by increasing the number of cores per node while dropping additional items from the front of the core list.  For example, you can request

#PBS -l nodes=5:ppn=8,pmem=4000mb

and then generate the core list using

corelist = Drop[ReadList[Environment["PBS_NODEFILE"], String], 8] (* drop the first 8 cores and their RAM for use by the main kernel *)

This will effectively give your job 32,000 MB RAM and 8 cores for the main kernel together with 32 sub-kernels each having 4,000 MB RAM and 1 core.
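
Once the sub-kernels are launched, the usual parallel primitives distribute work across them automatically.  A quick illustrative computation (the function and range here are arbitrary examples):

```mathematica
(* Map a primality test over a range of integers; Mathematica splits the
   work among the running sub-kernels: *)
results = ParallelMap[PrimeQ, Range[1000000, 1010000]];
Count[results, True]  (* number of primes found in the range *)
```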

Threading compiled functions over lists (single node / main kernel only)

Mathematica can also use multiple cores on a single node (including in the main kernel of a multi-node job) to speed up functions that must be evaluated for each item in a list, without the need to create and manage sub-kernels.  Just start a job that requests multiple cores on a single node (for example, request 8 cores all on the same node with "#PBS -l nodes=1:ppn=8") and follow the instructions for running computations in parallel using the Wolfram System compiler.  $ProcessorCount will automatically reflect the number of cores requested; there is no need to explicitly set SetSystemOptions["ParallelOptions" -> "ParallelThreadNumber" -> n].
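
A minimal sketch of this technique, using an illustrative function name:

```mathematica
(* A compiled function marked Listable with Parallelization -> True
   threads over a list using multiple cores in the main kernel: *)
square = Compile[{{x, _Real}}, x^2,
  RuntimeAttributes -> {Listable}, Parallelization -> True];
square[Range[1.0, 1000000.0]]  (* evaluated in parallel threads *)
```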

Additional information

Additional information is available on the Wolfram web site at https://www.wolfram.com/mathematica/.  For any Flux- or Armis-specific assistance running Mathematica, contact hpc-support@umich.edu.