DDT
Latest revision as of 19:24, 30 October 2023
Linaro Forge combines Linaro DDT, the leading debugger for time-saving high performance application debugging; Linaro MAP, the trusted performance profiler for optimization advice across native and Python HPC codes; and Linaro Performance Reports for advanced reporting capabilities. Linaro DDT and Linaro MAP are also available as standalone products.
Using DDT
First, download the Linaro Forge remote client and install it on your computer:

https://www.linaroforge.com/freeTrial/
IMPORTANT NOTE: Make sure that the version of the Linaro Forge remote client installed on your computer matches the version of the Forge module you use on HiPerGator. The instructions below assume that your local client is version 23.0.
Create a batch submission script for the application you want to debug as though you were going to submit it to the queue. Be sure to add the following sequence before the mpiexec command in your script:
ddt --connect --debug --log ddt-debug.log
For newer (3.x+) builds of OpenMPI, you may need to use:
ddt --connect --mpi='OpenMPI (Compatibility)' <some_mpi_executable>
Reference example batch script:
#!/bin/bash
#SBATCH --job-name=example-job          #A name for your job
#SBATCH --output my_job-%j.out          #Output file
#SBATCH --error my_job-%j.err           #Error file
#SBATCH --mail-type=FAIL,END            #Which job events to email about
#SBATCH --mail-user=<username>@ufl.edu  #Where to send email
#SBATCH --nodes=1                       #Number of nodes requested
#SBATCH --ntasks=1                      #Total number of MPI tasks
#SBATCH --cpus-per-task=1               #CPUs per task
#SBATCH --mem-per-cpu=2000mb            #Per-CPU memory request
#SBATCH --time=12:00:00                 #Walltime in hh:mm:ss or d-hh:mm:ss
#SBATCH --partition=hpg-dev

module load <modules_required_for_your_application>
module load ddt/23.0

#mpiexec ./main > resid.dat
ddt --connect --debug --log ddt-debug.log mpiexec ./main > resid.dat
Submit the job with:
cd /blue/<group>/<username>/example-directory
sbatch example-job
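Before launching the client, you can confirm the job has actually started, since DDT's connection request only arrives once the job begins executing. A standard Slurm check (the job name comes from the example script above):

```shell
# Show your queued/running jobs named "example-job"; an "R" in the state
# column means the job has started and DDT should be trying to connect.
squeue -u $USER -n example-job
```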
Next, start a desktop or console session in OnDemand.
Once both your debugging job and your console or desktop session are running, you can connect:
- Start the remote client GUI in the OnDemand session with
module load ddt
forge
- In the "Remote Launch" pulldown menu, select "Configure..."
- Click "Add" and fill in the following information:
Connection Name: ufrc-ddt
Host Name: <username>@GUI_NODE.rc.ufl.edu
Remote Installation Directory: /apps/arm/forge/23.0
Remote Script: <leave blank>
Note that GUI_NODE is the GUI server your OnDemand session is running on, e.g. i21a-s3.
- Click "OK"
- Click "Close"
- On the main screen, select 'ufrc-ddt' from the Remote Launch pulldown menu
You will get a connection request from DDT running your job. Accept the request, and you should be ready to start debugging your application.
SSH Tunnel
If you would prefer to run the Forge client on your local computer, you will need to forward the port DDT listens on to your local machine. Once you know which compute node and port DDT is running on, you can create an SSH tunnel from your local computer to the DDT instance, e.g.
ssh -N -L 8080:c12345a-s42.ufhpc:37546 albert.gator@hpg.rc.ufl.edu
Here 8080 is the local port to forward, c12345a-s42.ufhpc:37546 is the compute node and port where DDT is listening, and the tunnel runs through the login node hpg.rc.ufl.edu.
Environment Modules
Run module spider DDT to find out what environment modules are available for this application.
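For example (23.0 is the version used elsewhere in this guide; other installed versions may appear in the spider output):

```shell
# List the DDT/Forge modules installed on the cluster...
module spider ddt
# ...then load the version matching your local remote client.
module load ddt/23.0
```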
System Variables
- HPC_DDT_DIR
- HPC_DDT_BIN
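After loading the ddt module, these variables point into the Forge installation. A minimal sketch with illustrative values based on the installation directory mentioned above (on HiPerGator the module sets these for you):

```shell
# Illustrative values -- the ddt module defines these on HiPerGator.
export HPC_DDT_DIR=/apps/arm/forge/23.0
export HPC_DDT_BIN=$HPC_DDT_DIR/bin

# The bin directory contains the ddt and forge executables.
echo "$HPC_DDT_BIN"
```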