Allinea DDT is a graphical source-level debugger that gives you the control you need to track down software bugs whenever they occur, making it simpler to solve even the most complex multi-threaded or multi-process problems. The installation also includes Allinea MAP for performance analysis.
This module enables the use of the Allinea Forge (DDT/MAP) software.
First, download the Allinea Forge remote client and install it on your computer:
Create a batch submission script for the application you want to debug, as though you were going to submit it to the queue. Be sure to add the following sequence before the mpiexec command in your script:
ddt --connect --debug --log ddt-debug.log \
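Combined with a typical launcher line, the sequence looks like this (a sketch assuming an executable named ./main that writes to resid.dat, as in the example job script on this page; adjust for your own application):

```shell
# The ddt prefix wraps the normal mpiexec line; the trailing backslash
# continues the command onto the next line.
ddt --connect --debug --log ddt-debug.log \
    mpiexec ./main > resid.dat
```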
If you need more help creating the script, please use the example job script further down the page. Then, proceed with the following steps:
- Start the remote client on your personal device
- In the "Remote Launch" pulldown menu, select "Configure..."
- Click "Add" and fill in the following information:
Connection Name: ufrc-ddt
Host Name: <username>@hpg2.rc.ufl.edu <username>@i21b-s4.ufhpc
Remote Installation Directory: /apps/allinea/forge/6.0
Remote Script: <leave blank>
- Click "OK"
- Click "Close"
- On the main screen, select 'ufrc-ddt' from the Remote Launch pulldown menu
- From another window (PuTTY/Terminal/etc.), log into hpg2.rc.ufl.edu as you normally would.
- Modify the following command appropriately and use it to schedule your job:
cd /ufrc/<group>/<username>/example-directory
sbatch example-job
- It may take up to 90 seconds for your job to start. Once it does, the remote client will receive a connection request from the ddt instance running your job. Accept the request and you should be ready to start debugging your application.
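While waiting, you can check the job's state from your login session on the cluster (a minimal sketch using standard SLURM commands; the job name matches the example script below):

```shell
# List your pending/running jobs; the ST column shows R once the job is running
squeue -u $USER

# Or re-run the check automatically every 5 seconds until the job starts
watch -n 5 squeue -u $USER --name=example-job
```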
Example Job Script
#!/bin/bash
#SBATCH --job-name=example-job          #A name for your job
#SBATCH --output my_job-%j.out          #Output file
#SBATCH --error my_job-%j.err           #Error file
#SBATCH --mail-type=FAIL,END            #What emails you want
#SBATCH --mail-user=<username>@ufl.edu  #Where to send emails
#SBATCH --nodes=1                       #Request a single node
#SBATCH --ntasks=1                      #Total no. of tasks
#SBATCH --cpus-per-task=1               #No. CPUs per task
#SBATCH --mem-per-cpu=2000mb            #Per-processor memory request
#SBATCH --time=12:00:00                 #Walltime in hh:mm:ss or d-hh:mm:ss
#SBATCH --partition=hpg2-dev

module load intel/2016.0.109 openmpi/1.10.2 petsc/3.7.0 metis/5.1.0
module load ddt/6.0

cd $SLURM_SUBMIT_DIR
which mpiexec
printenv
hostname

#mpiexec ./main > resid.dat
ddt --connect --debug --log ddt-debug.log \
    mpiexec ./main > resid.dat