Getting Started
Revision as of 16:19, 15 January 2013

Getting an account

To get an account at the UF HPC Center, you need to read the HPC Center Policies and then submit a request at our account request page.


Linux / Unix

Open a terminal and run

ssh -Y <YOUR_USERNAME>@submit.hpc.ufl.edu

where <YOUR_USERNAME> is your HPC Center username, which was sent to you when you got your HPC Center account.

Enter this command at a command prompt on your system. You will be asked for your password; type it in, and you are logged in and ready to work. As a concrete example, if your HPC Center username is "smith", you would use the command ssh smith@submit.hpc.ufl.edu to log into the HPC Center.

The -Y option enables X11 forwarding on the connection. If your desktop runs an X Windows server, X11 forwarding allows you to run X Windows clients on the HPC Center's interactive servers and view their displays on your desktop. Otherwise, the -Y option is not necessary.


Microsoft Windows does not come with a built-in SSH client, so you will need to download one.

X Windows for MS Windows

For Windows users who would like to run X Windows applications, there are several X Windows servers available for the MS Windows operating system.


For MacOS users, the connection instructions are very similar to those for Linux/Unix users.

Terminal, the terminal emulation application under MacOS, is located in Applications/Utilities.

Both FileZilla and Cyberduck are available for MacOS if you prefer a graphical interface for transferring files.
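If you prefer the command line, scp (bundled with OpenSSH on MacOS and Linux) also transfers files. A sketch, with "smith" standing in for your HPC Center username and a hypothetical file name:

```shell
# Copy a local file to your home directory on the HPC Center
# (the trailing colon means "my home directory on that host")
scp results.dat smith@submit.hpc.ufl.edu:

# Copy a file from the cluster back to the current local directory
scp smith@submit.hpc.ufl.edu:results.dat .
```

You will be prompted for your password, just as with ssh.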

Getting Help

If you are having problems connecting to the HPC system, please let the HPC Staff know by submitting a Support Request Ticket.

Interactive work under Linux

Once you are logged in to an HPC Center server, you will find yourself at a Linux command line prompt. That may be daunting at first. However, you only need to know a small subset of Linux commands to accomplish most tasks. See the Getting_Started_on_Linux tutorial for an introduction to the topics introduced below.

Looking Around

We expect users of the HPC Center cluster to already have a working knowledge of the Linux operating system, so we will not cover the operating system in detail here.

Basic Commands

Command          Description
ls               List files in the current directory
cd <dir>         Change directory
more <file>      View a file
mkdir <dir>      Make a directory
cp file1 file2   Copy a file
mv file1 file2   Move/Rename a file
rm <file>        Remove a file
rmdir <dir>      Remove an empty directory
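A short session ties these commands together (run it in a scratch location; the file and directory names are just examples):

```shell
# Make a directory and move into it
mkdir demo
cd demo

# Create a file, copy it, then rename the copy
echo "sample text" > notes.txt
cp notes.txt backup.txt
mv backup.txt notes.bak

# List the files, then clean up and remove the empty directory
ls
rm notes.txt notes.bak
cd ..
rmdir demo
```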

File System

We have a structured file system that is important to know about. Please read about it here: HPC File System


Files on the cluster can be edited in a couple of different ways.

In-System Editors

  • vi - vi is the basic editor for many users. Using it is not necessarily intuitive, so we have provided a link to a tutorial.
    • VI Tutorial
    • There is also a VI tutorial on the system that you can use, called vimtutor. Once logged in, simply type the following on the command line and it will take you on a tutorial for VI:
$ vimtutor
    • Another small vi resource is available in our wiki
  • emacs - emacs is a much heavier-duty editor, but its commands are likewise non-intuitive. Again, we have provided a link to a tutorial for this editor.
  • pico - While pico is not installed on the system, nano is installed, and is a clone of pico.
  • nano - nano has a good amount of on-screen help to make it easier to use.

External Editors

You can also use your favorite editor on your local machine and transfer the files to the HPC cluster afterwards. One caveat: files created on Windows machines very often have extra characters injected into them, which can cause major problems when the files are interpreted on the cluster. If this happens, the dos2unix utility will remove the extra characters.
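A minimal demonstration of the problem and the fix. Running dos2unix winfile.txt would repair the file in place; tr, shown here as a fallback in case dos2unix is not on your path, does the same stripping of the extra carriage-return characters:

```shell
# Simulate a file saved with Windows-style (CRLF) line endings
printf 'hello\r\nworld\r\n' > winfile.txt

# Delete the carriage-return characters, writing a clean copy
# (equivalent in effect to: dos2unix winfile.txt)
tr -d '\r' < winfile.txt > unixfile.txt

cat unixfile.txt
```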

Running Jobs

General Scheduling

Jobs from faculty investors in the HPC Center are now favored over jobs from groups who did not invest in the HPC Center.

Job scheduling has been a major topic for the HPC Committee over the last several months. The committee has directed the HPC Center staff to improve the quality of service of job scheduling for jobs from investors in the HPC Center. This means reducing the time jobs spend in the queues and allowing investors' jobs to capture the full share of the resources they have paid for. The HPC Committee recently adopted a document that spells out these policies.

Jobs can be submitted on submit.hpc.ufl.edu.

Torque Scheduler

The Torque Resource Manager has been installed on the HPC cluster and is gradually replacing the PBS Pro scheduler; the Maui scheduler also runs in this environment.

Torque accepts the same commands as PBS Pro, with a couple of exceptions. We are currently experimenting with these packages so that we can provide improved scheduling to HPC users. While we are still learning about Torque and Maui, our experience so far has been good, and we are guardedly optimistic that they will become the resource manager and scheduler for the HPC Center sometime in the not-too-distant future.

Please note the following.

  • If your job is single-threaded (1 CPU) and does not have heavy I/O requirements, it does not need to run on an InfiniBand-enabled node. In that case, you should include the "gige" property in your PBS resource specification as follows:
#PBS  -l nodes=1:ppn=1:gige
  • If you need to run an MPI-based application that has not been rebuilt for OpenMPI 1.2.0+Torque, please send us a note and we will be happy to rebuild what you need - first come, first served.
  • If you build your own MPI-based application executables, you should use the MPI compiler wrappers (mpif90, mpicc, mpiCC) in /opt/intel/ompi/1.2.0/bin. These wrappers will automatically pull in the correct libraries.
  • We will continue to tune the maui scheduler to provide fair and efficient scheduling according to the policies established by the HPC Committee and within the capabilities of the maui scheduler. Keep in mind that these policies include priority and quality-of-service commitments to those faculty who have invested in the resources within the HPC Center.
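As an illustrative sketch of the wrapper-based build described above (the source file names are hypothetical; the wrapper path is the one given in the list):

```shell
# Put the OpenMPI compiler wrappers first in the search path
export PATH=/opt/intel/ompi/1.2.0/bin:$PATH

# The wrappers automatically add the correct MPI include and
# library flags, so no -I/-L/-l options are needed for MPI itself
mpicc  -o hello_mpi hello_mpi.c    # C source (hypothetical file)
mpif90 -o solver    solver.f90     # Fortran 90 source (hypothetical file)
```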

Trivial Example

#!/bin/sh
#PBS -N testjob
#PBS -o testjob.out
#PBS -e testjob.err
#PBS -r n
#PBS -l walltime=00:01:00
#PBS -l nodes=1:ppn=1
#PBS -l pmem=100mb

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"

module load python
python -V

To submit this job from submit.hpc.ufl.edu, you would use the following command:

$ qsub <your job script>

To check the status of running jobs, you would use the following command:

$ qstat [-u <username>]

or view the Torque Queue Status page on the HPC website (HPC --> Utilization --> Torque Queue Status)


  • See Sample Scripts for more information on PBS scripts.
  • See Modules for more information on using the installed software via the environment modules system.

Notes on Batch Scripts

  • The script can contain only one set of PBS directives. Do not submit a script that has more than one set of directives in it, as this will cause the Moab/Torque system to reject it with a qsub: Job rejected by all possible destinations error. This problem was first seen when a user complained that a script was being rejected with this error; inspection showed that the script contained concatenated copies of itself in the same file.
  • For a more detailed explanation of what is going on in a batch script, see the Sample Scripts page.

Troubleshooting Batch Scripts

  • Ensure that you are using the proper MPI launcher command for your software. At the UF HPC Center, that command is mpiexec.

Compiling your own

By default, when you first log in to the system, your environment is set up for the Intel compilers and OpenMPI. This puts the C, C++, F77, and F90 compilers in your path, as well as the mpicc and mpif90 wrappers for building OpenMPI applications. If you want to change this, you can use the modules system to select a different compiler suite and MPI implementation.
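Switching compiler suites with the modules system might look like the following sketch (the module names here are illustrative; run module avail on the cluster to see what is actually installed):

```shell
# List the module files available on the system
module avail

# Unload the current compiler/MPI modules and load an alternative
# (names below are hypothetical examples, not a guaranteed inventory)
module unload intel openmpi
module load gcc openmpi

# Show what is currently loaded in your environment
module list
```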