HFSS is the industry-standard software for S-parameter, Full-Wave SPICE extraction, and 3D electromagnetic field simulation of high-frequency and high-speed components. Engineers rely on the accuracy, capacity, and performance of HFSS to design on-chip embedded passives, IC packages, PCB interconnects, antennas, RF/microwave components, and biomedical devices.

Accessing HFSS

To use HFSS, you need to load its module with

$ module load hfss
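
If you want to see which versions of HFSS are installed before loading, the standard module listing command should work on this system:

$ module avail hfss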

Notes on using HFSS

These directions are for HFSS version 15.0 (2014.0.0) and later.

There are sections below with examples for running HFSS on a single node using shared-memory parallel processing and for running HFSS across multiple nodes using either the distributed solve option (DSO) or the domain decomposition method (DDM).

HFSS does have limited GPU acceleration capability, but only for transient analysis. If you wish to use this, please contact the HPC Group.

HFSS will, by default, use /tmp on whatever machine it is running on. The /tmp partitions on Flux nodes are not large, so it is possible to fill them and interfere with other jobs. If you are running a large simulation, you should set the temporary directory to something else, for example, to your project directory under /scratch.

To set the temporary directory, use the command line option -batchoptions, as in the following example (the \ character at the end of a line indicates that the next line is a continuation of the current command).

hfss -local -Ng -BatchSolve \
    -batchoptions 'TempDirectory="/scratch/default_flux/grundoon/tmp"' \
    ogive-IE.hfss

Note that the whole string argument to -batchoptions is enclosed in single quotes and, inside those, the directory name is enclosed in double quotes.
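
The temporary directory must exist before HFSS tries to write to it. A minimal example, using the same example path as above (substitute your own allocation and uniqname):

$ mkdir -p /scratch/default_flux/grundoon/tmp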

Running HFSS interactively

You should only run HFSS interactively (outside of PBS) for very short test runs.

We will copy the ogive-IE.hfss input file from the HFSS Examples directory to use for this example. Here is an example of running HFSS in batch-solve mode from the command line. The -Ng option suppresses trying to open the GUI and is required here because there is no X display. The variable $HFSS_ROOT is set by the module, and we use it to copy the example data file.

$ cp $HFSS_ROOT/Examples/RF_Microwave/ogive-IE.hfss ./
$ hfss -local -Ng -BatchSolve ogive-IE.hfss

You may wish to use the -Monitor option to hfss, which should be added before the -Ng option and will print progress reports while the job is running.
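
Putting that together with the command above, the run would look like this (the -Monitor option only adds progress output; everything else is unchanged):

$ hfss -local -Monitor -Ng -BatchSolve ogive-IE.hfss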

Running HFSS from PBS

To run the same commands as above, put them into a PBS script and submit it as a job. Here is an example PBS script that you can modify.

#!/bin/bash
####  PBS preamble
#PBS -N hfss_test
#PBS -M uniqname@umich.edu
#PBS -m abe

#PBS -l nodes=1:ppn=1,pmem=1gb,walltime=24:00:00
#PBS -j oe
#PBS -V

#PBS -A example_flux
#PBS -l qos=flux
#PBS -q flux
####  End PBS preamble

[ -f $PBS_NODEFILE ] && cat $PBS_NODEFILE
[ -d $PBS_O_WORKDIR ] && cd $PBS_O_WORKDIR

#  Put your job commands after this line

#  Copy the example file
cp $HFSS_ROOT/Examples/RF_Microwave/ogive-IE.hfss ./
#  Run HFSS
hfss -local -Ng -BatchSolve ogive-IE.hfss
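
To run the job, save the script to a file (the name hfss_test.pbs below is just an example) and submit it with qsub; qstat shows its status:

$ qsub hfss_test.pbs
$ qstat -u uniqname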

Running HFSS in single node shared memory parallel (SMP) mode

Not all simulation types are parallelizable, but most of those that are can take advantage of shared-memory parallel processing. This type of parallel processing is confined to a single node. The multinode parallel options are shown in the next section.

To run an SMP simulation, you need to request more than one processor on the node from PBS with -l nodes=1:ppn=N. In the example below, we request and use 4 processors.

#!/bin/bash
####  PBS preamble
#PBS -N hfss_test
#PBS -M uniqname@umich.edu
#PBS -m abe

#PBS -l nodes=1:ppn=4,pmem=1gb,walltime=24:00:00
#PBS -j oe
#PBS -V

#PBS -A example_flux
#PBS -l qos=flux
#PBS -q flux
####  End PBS preamble

[ -f $PBS_NODEFILE ] && cat $PBS_NODEFILE
[ -d $PBS_O_WORKDIR ] && cd $PBS_O_WORKDIR

#  Put your job commands after this line

cp $HFSS_ROOT/Examples/RF_Microwave/ogive-IE.hfss ./
hfss -local -Ng -Monitor -BatchSolve \
   -batchoptions "HFSS-IE/HPCLicenseType=pool" \
   ogive-IE.hfss

Running HFSS in mixed multicore, multinode parallel mode

This job type lets HFSS use multiple nodes either to run multiple values in a sweep simultaneously using the distributed solve option (DSO) or to take a very large model and use the domain decomposition method (DDM) to run a single model on multiple nodes. Fewer simulation types are supported with these methods. If you will be using fewer than 12 processors for anything other than sweeps, please use the SMP option above to obtain the best performance. DDM and DSO require at least 3 unique nodes; because the system will sometimes condense requests onto fewer nodes, contact support if you do not receive them. You can check the actual node count as shown below.
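
One quick way to verify how many distinct nodes PBS actually assigned is to add a check like this to your job script (it only uses the standard $PBS_NODEFILE list):

#  Print the number of unique nodes assigned to this job
sort -u $PBS_NODEFILE | wc -l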

Fast sweeps do not support DSO as of HFSS 15.0 (2014.0.0).

To use DDM, it must be enabled in your analysis properties under your solver type. For more details on supported configurations for the HPC/multinode options in HFSS, refer to the HFSS documentation and this Ansys Presentation.

Here is a sample PBS script that shows a 32 processor, 4 node HFSS job.

#!/bin/bash
####  PBS preamble
#PBS -N hfss_test
#PBS -M uniqname@umich.edu
#PBS -m abe

#PBS -l procs=32,tpn=8,pmem=1gb,walltime=24:00:00
#PBS -j oe
#PBS -V

#PBS -A example_flux
#PBS -l qos=flux
#PBS -q flux
####  End PBS preamble

[ -f $PBS_NODEFILE ] && cat $PBS_NODEFILE
[ -d $PBS_O_WORKDIR ] && cd $PBS_O_WORKDIR

#  Put your job commands after this line

#  Copy the example file
cp $HFSS_ROOT/Examples/HFSS/Antennas/helical_antenna.hfss ./

#  Run HFSS using DDM
hfss -Ng -Monitor -BatchSolve \
   -batchoptions "HFSS/HPCLicenseType=pool" \
   helical_antenna.hfss
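
As with the earlier examples, save the script (the name hfss_ddm_test.pbs is just an example) and submit it with qsub. The standard qstat -n option lists the nodes assigned to each job, which is a convenient way to confirm the job received the unique nodes that DDM and DSO need:

$ qsub hfss_ddm_test.pbs
$ qstat -n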