The National Institute for Computational Sciences



Category: Job Launcher


ibrun may be used to launch multi-binary MPI applications across hosts and Xeon Phi (MIC) coprocessors. It can also launch Phi-only applications that communicate via MPI. This utility is ported from TACC's ibrun.symm script.


ibrun is available as a module.

Usage:

ibrun.symm -m ./ -c ./

Note: to run an executable on two, three, or four MICs per node, use -2, -3, or -4 in place of -m.

Relevant MIC environment variables:

MIC_MY_NSLOTS = total number of MIC MPI tasks (default: $MIC_PPN * number of MIC cards)
MIC_PPN = number of MPI tasks per MIC (default: 4)
MIC_OMP_NUM_THREADS = number of threads per MIC MPI task (default: 30)

Relevant host environment variables:

MY_NSLOTS = total number of host MPI tasks (default: $HOST_PPN * number of nodes)
HOST_PPN = number of MPI tasks per node (default: 16)
OMP_NUM_THREADS = number of threads per host MPI task (default: 1)
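The task-count defaults combine multiplicatively. A minimal sketch of the arithmetic (plain bash; the node and card counts here are hypothetical, not something ibrun.symm itself reports):

```shell
#!/bin/bash
# Hypothetical job geometry: 3 nodes, 4 MIC cards per node
NODES=3
MICS_PER_NODE=4

# Host side: MY_NSLOTS defaults to HOST_PPN * number of nodes
HOST_PPN=${HOST_PPN:-16}
MY_NSLOTS=$(( HOST_PPN * NODES ))

# MIC side: MIC_MY_NSLOTS defaults to MIC_PPN * number of MIC cards
MIC_PPN=${MIC_PPN:-4}
MIC_MY_NSLOTS=$(( MIC_PPN * NODES * MICS_PER_NODE ))

echo "host tasks:  $MY_NSLOTS"      # 48 with the defaults above
echo "mic tasks:   $MIC_MY_NSLOTS"  # 48 with the defaults above
echo "total tasks: $(( MY_NSLOTS + MIC_MY_NSLOTS ))"
```

Exporting MIC_PPN or HOST_PPN before the ibrun.symm invocation overrides the corresponding default in the same way.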

An example PBS script to use ibrun:

#PBS -l nodes=3
#PBS -l walltime=01:00:00
#PBS -N your_jobname

export MIC_PPN=2
export OMP_NUM_THREADS=2
export MIC_OMP_NUM_THREADS=60
ibrun.symm -4 ./mpihello.mic -c ./

When submitted, the job script above will start 48 host MPI tasks spread across 3 nodes, plus 2 MPI tasks on each of 12 MIC cards, for 72 MPI tasks total. Each host MPI task will use 2 threads, and each MIC MPI task will use 60 threads. Note that tasks are allocated in consecutive order on the nodes:

NODE1: 16 host tasks (0-15), 8 MIC tasks (mic0: 16-17, mic1: 18-19, mic2: 20-21, mic3: 22-23)
NODE2: 16 host tasks (24-39), 8 MIC tasks (mic0: 40-41, mic1: 42-43, mic2: 44-45, mic3: 46-47)
NODE3: 16 host tasks (48-63), 8 MIC tasks (mic0: 64-65, mic1: 66-67, mic2: 68-69, mic3: 70-71)
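The consecutive rank layout above can be reproduced with a short bash sketch (assuming the example's geometry: 3 nodes, 16 host tasks per node, 4 MICs per node, 2 tasks per MIC):

```shell
#!/bin/bash
# Walk rank numbers in the order ibrun.symm assigns them in the
# example above: all host ranks on a node first, then its MIC ranks.
HOST_PPN=16
MICS_PER_NODE=4
MIC_PPN=2

rank=0
for node in 1 2 3; do
  echo "NODE$node: host ranks $rank-$(( rank + HOST_PPN - 1 ))"
  rank=$(( rank + HOST_PPN ))
  for (( m = 0; m < MICS_PER_NODE; m++ )); do
    echo "  mic$m: ranks $rank-$(( rank + MIC_PPN - 1 ))"
    rank=$(( rank + MIC_PPN ))
  done
done
echo "total MPI tasks: $rank"   # 72, matching the example
```

This is only a bookkeeping sketch of the rank ordering, not something ibrun.symm prints itself.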


This package has the following support level: Supported

Available Versions
