DDT

Arm DDT is a graphical source-level debugger that helps you take control of software bugs whenever they occur, making it simpler to solve even the most complex multi-threaded or multi-process problems. This installation includes Arm DDT for debugging, Arm MAP for performance profiling, and Arm Performance Reports for characterizing and understanding the performance of HPC applications.

This module enables the use of the Arm DDT, MAP, and Reports tools.

Using DDT

First, download the Arm Forge remote client and install it on your computer:

  • https://developer.arm.com/tools-and-software/server-and-hpc/downloads/arm-forge

IMPORTANT NOTE: Please make sure that the version of the Arm Forge remote client installed on your computer matches the version of the Arm Forge module you use on HiPerGator. The instructions below assume that your local Arm Forge client is version 20.0.
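
For example, you can check which Arm Forge/DDT versions are installed on HiPerGator before downloading the matching client (a minimal sketch using the same Lmod commands this page already relies on):

module spider ddt        # list all available ddt/Arm Forge module versions
module spider ddt/20.0   # show details for one specific version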

Create a batch submission script for the application you want to debug as though you were going to submit it to the queue. Be sure to add the following sequence before the mpiexec command in your script:

ddt --connect --debug --log ddt-debug.log 

For newer (3.x and later) builds of OpenMPI, you may need to use:

ddt --connect --mpi='OpenMPI (Compatibility)' <some_mpi_executable>

The following example is provided for reference.

#!/bin/bash
#SBATCH --job-name=example-job  #A name for your job
#SBATCH --output my_job-%j.out #Output File
#SBATCH --error my_job-%j.err  #Error File
#SBATCH --mail-type=FAIL,END  #Email on job failure and completion
#SBATCH --mail-user=<username>@ufl.edu   #Where to send email
#SBATCH --nodes=1     #Number of nodes requested
#SBATCH --ntasks=1    #Total number of MPI tasks
#SBATCH --cpus-per-task=1    #Number of CPUs per task
#SBATCH --mem-per-cpu=2000mb   #Per processor memory request
#SBATCH --time=12:00:00       #Walltime in hh:mm:ss or d-hh:mm:ss
#SBATCH --partition=hpg-dev

module load <modules_required_for_your_application>
module load ddt/20.0

#mpiexec  ./main > resid.dat
ddt --connect --debug --log ddt-debug.log \ 
   mpiexec  ./main > resid.dat

Submit the job with

cd /blue/<group>/<username>/example-directory 
sbatch example-job 
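
Before connecting the client, you can confirm that the debugging job has actually started (a general Slurm check, not specific to DDT):

squeue -u <username>    # the job should be in state R (running) before you try to connect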

Next, start an xterm session in OnDemand.

Once both your debugging job and your xterm session are running, you can connect:

  1. Start the remote client GUI on your personal device
  2. In the "Remote Launch" pulldown menu, select "Configure..."
  3. Click "Add" and fill in the following information:
    Connection Name: ufrc-ddt
    Host Name: <username>@GUI_NODE.rc.ufl.edu
    Remote Installation Directory: /apps/arm/forge/20.0
    Remote Script: <leave blank>
    Note that GUI_NODE corresponds to the server your xterm session is running on, e.g. i21a-s3 (see the sketch after this list for how to find it).
  4. Click "OK"
  5. Click "Close"
  6. On the main screen, select 'ufrc-ddt' from the Remote Launch pulldown menu
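
To find the GUI_NODE short host name used in step 3, you can run a quick check from inside the xterm session itself (a minimal sketch; append .rc.ufl.edu to form the full host name):

hostname -s    # short host name of the node running your xterm session, e.g. i21a-s3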

You will get a connection request from the ddt instance running inside your batch job. Accept the request and you should be ready to start debugging your application.

Environment Modules

Run module spider DDT to find out what environment modules are available for this application.

Serial

  • ddt

System Variables

  • HPC_DDT_DIR
  • HPC_DDT_BIN
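
These variables are set when the module is loaded; a quick way to see where they point (a minimal sketch, assuming the _DIR/_BIN variables follow the usual UFRC convention of installation and binary directories):

module load ddt/20.0
echo "$HPC_DDT_DIR"   # top-level installation directory (assumed from the _DIR naming convention)
echo "$HPC_DDT_BIN"   # directory expected to hold the ddt and map executables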