Flux resources are currently tied together by four Force10 C300 switches, which provide non-blocking gigabit bandwidth at normal Ethernet latency between nodes. The switches are connected to each other with 10Gbps or 20Gbps links. Access to the storage on the cluster is over this network.

The connection to the greater university network and the internet is provided by a pair of 10Gbps links.

All of Flux uses Infiniband for MPI messages. Infiniband provides user-level, one-way data rates with a maximum theoretical bandwidth of 40Gb/s. On the Flux nodes the maximum measured bandwidth is about 25Gb/s, roughly 25 times the gigabit Ethernet rate. The measured latency of the Infiniband network is about 1.8us, roughly 33 times lower than Ethernet's.
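Latency figures like these are typically measured with a ping-pong test between two MPI ranks on different nodes. The following is a minimal sketch of such a test; the message size, iteration count, and output format are illustrative assumptions, not the actual benchmark used on Flux.

```c
/* Minimal MPI ping-pong sketch for estimating one-way latency.
 * Run with at least two ranks, e.g.: mpirun -np 2 ./pingpong
 * (illustrative command, not a Flux-specific invocation). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    const int iters = 1000;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* Rank 0 sends a 1-byte message and waits for the echo. */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes each message straight back. */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        /* Each iteration is one round trip, so the one-way latency
         * estimate is elapsed / (2 * iters). */
        printf("one-way latency: %.2f us\n", elapsed / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```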

What is Infiniband?
Infiniband is a high-bandwidth, low-latency network commonly found on clusters.

How fast is Infiniband? What is its latency?
Infiniband is a 40Gbps network. Its latency is about one order of magnitude lower than Ethernet's, putting it at less than 2us. Because many codes are very sensitive to latency, this can offer a large performance gain.

Do I need to recompile my code to use Infiniband?
If you compiled your code with our default MPI library on Flux, you do not need to recompile.

How do I compile my code to use Infiniband?
Compile your code using our standard MPI compilers (mpicc, mpif90, etc.); see the example program after these questions.

I compiled my code for Infiniband. Can I run it on TCP (Ethernet) only nodes?
Yes! This is the benefit of the MPI library that we provide: code compiled for Infiniband can also run on Ethernet without recompiling. If mpirun finds that some of the nodes given to you do not have Infiniband, those nodes will use Ethernet; the nodes with Infiniband will use Infiniband.

What advantages does Infiniband give me?
Infiniband allows bandwidth- and latency-sensitive codes to scale further. For example, a code that scales to 12 CPUs on Ethernet might get to 24 on Infiniband. If a code does not communicate often (like Monte Carlo codes), Infiniband will not improve its performance.
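For reference, a program compiled as the answers above describe might look like the following minimal sketch. The file name and the compile and run lines in the comment are illustrative assumptions, not Flux-specific commands.

```c
/* hello.c -- a minimal MPI program. Compiling it with an MPI wrapper
 * compiler such as mpicc is what links in Infiniband support:
 *
 *   mpicc -O2 -o hello hello.c   (illustrative compile line)
 *   mpirun -np 4 ./hello         (illustrative run line)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```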