Logging in
Linux / Unix
Open a terminal and run

ssh -Y <YOUR_USERNAME>@submit.hpc.ufl.edu

where <YOUR_USERNAME> is your HPC Center username, which was sent to you when you received your HPC Center account. After you enter this command, you will be prompted for your password. Once you type it in, you are logged in and ready to work. As a concrete example, if your HPC Center username is "smith", you would log into the HPC Center with the command

ssh -Y smith@submit.hpc.ufl.edu
The -Y option indicates that X11 forwarding should be enabled on the connection. If your desktop supports an X Windows server, X11 forwarding will allow you to run X Windows clients on the HPC Center's interactive servers and view their displays on your desktop. Otherwise, the -Y option is not necessary.
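As a quick sanity check that X11 forwarding is working, you can launch a simple X client after logging in. This is a minimal sketch, assuming an X server is running on your desktop and that xclock is installed on the login server:

echo $DISPLAY   # should print something like localhost:10.0 when forwarding is active
xclock          # a clock window should appear on your local desktop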
Microsoft Windows

Microsoft Windows does not come with a built-in SSH client, so you will have to download one from the web. We recommend the following software:
- SSH client - PuTTY
- Graphical file transfer clients:
X Windows for MS Windows
For Windows users who would like to run X Windows applications, there are several X Windows servers available for the MS Windows operating system.
MacOS

For MacOS users, the connection instructions are very similar to those for Linux/Unix users. Terminal, the terminal emulation application under MacOS, is located in Applications/Utilities.
If you are having problems connecting to the HPC system, please let the HPC Staff know by submitting a Support Request Ticket.
Interactive work under Linux
Once you are logged in to an HPC Center server, you will find yourself at a Linux command line prompt. That may be daunting at first. However, you only need to know a small subset of Linux commands to accomplish most tasks. See the Getting_Started_on_Linux tutorial for an introduction to the topics introduced below.
While it is advantageous to have a working knowledge of the most common Linux commands, it is not a requirement. For the uninitiated, the following commands may be useful, as would a good "Introduction to Using Linux" book.
| Command | Description |
|---|---|
| ls | List files in the current directory |
| more | View a file's contents |
| mkdir <dir> | Create a directory |
| cp file1 file2 | Copy a file |
| mv file1 file2 | Move (i.e. rename) a file |
| rm file | Delete a file |
| rmdir dir | Delete an empty directory |
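As an illustration, a short session using these commands might look like the following (all file and directory names are hypothetical):

mkdir project             # create a directory
cp results.txt project/   # copy a file into it
cd project
ls                        # list the files in the current directory
more results.txt          # page through the file's contents
mv results.txt data.txt   # rename the file
rm data.txt               # delete the file
cd ..
rmdir project             # delete the now-empty directory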
We have a structured file system that is important to know about. Please read about it here: HPC File System
Editing files on the cluster can be done through several different methods:
- vi - The visual editor (vi) is the traditional Unix editor. However, it is not necessarily the most intuitive editor, so if you are unfamiliar with it, the following tutorial may be useful.
- VI Tutorial
- There is also a vi tutorial on the system, called vimtutor. Once logged in, simply type vimtutor on the command line and it will take you through a tutorial for vi.
- Another small resource for vi is right here in our wiki
- emacs - emacs is a much heavier-duty editor, but its commands are likewise non-intuitive. Again, we have provided a link to a tutorial for this editor.
- pico - While pico itself is not installed on the system, nano, which is a clone of pico, is installed.
- nano - nano has a good amount of on-screen help to make it easier to use.
You can also use your favorite editor on your local machine and then transfer the files over to the HPC Cluster afterwards. One caveat is that files created on Windows machines very often have extra carriage-return characters injected into them, which can cause major problems when it comes time to interpret the files. If this happens, there is a utility called dos2unix that you can use to remove the extra characters.
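For example, to clean up a job script that was written on a Windows machine (the filename here is just a placeholder):

dos2unix myscript.pbs   # removes the extra carriage-return characters in place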
Jobs from faculty investors in the HPC Center are now favored over jobs from groups who did not invest in the HPC Center.
Job scheduling has been a major topic for the HPC committee over the last several months. The HPC Center staff has been directed by the committee to improve the quality of service of job scheduling for jobs coming from investors in the HPC Center. This means reducing the time those jobs spend in the queues and allowing jobs from the investors to capture the full share of the resources that they have paid for. The HPC committee recently adopted a document that spells out these policies.
Jobs can be submitted on submit.hpc.ufl.edu.
The Torque resource manager has been installed on the HPC cluster, and we are gradually switching over to it from the PBS Pro scheduler. The Maui scheduler is also running in this environment. Torque accepts the same commands as PBS Pro, with a couple of exceptions. We are currently experimenting with these packages so that we can provide improved scheduling to HPC users. While we are still learning about Torque and Maui, our experiences so far have been good, and we are guardedly optimistic that Torque and Maui will end up being the resource manager and scheduler for the HPC Center in the not-too-distant future.
Please note the following.
- If your job is single-threaded (1 cpu) and does not have heavy I/O requirements, it does not need to run on an InfiniBand-enabled node. In that case, you should include the "gige" property in your PBS resource specification as follows:
#PBS -l nodes=1:ppn=1:gige
- If you need to run an MPI-based application that has not been rebuilt for OpenMPI 1.2.0+Torque, please send us a note and we'll be happy to rebuild what you need - first come, first served.
- If you build your own MPI-based application executables, you should use the MPI compiler wrappers (mpif90, mpicc, mpiCC) in /opt/intel/ompi/1.2.0/bin. These wrappers will automatically pull in the correct libraries; see the example after this list.
- We will continue to tune the Maui scheduler to provide fair and efficient scheduling according to the policies established by the HPC Committee and within the capabilities of the Maui scheduler. Keep in mind that these policies include priority and quality-of-service commitments to those faculty who have invested in the resources within the HPC Center.
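As a minimal sketch of using those compiler wrappers (the source file names are hypothetical; the wrapper path is the OpenMPI 1.2.0 installation mentioned above):

export PATH=/opt/intel/ompi/1.2.0/bin:$PATH   # put the wrappers on your PATH
mpicc -o hello_mpi hello_mpi.c                # C example; hello_mpi.c is a stand-in name
mpif90 -o solver solver.f90                   # Fortran 90 example; solver.f90 is a stand-in name

A sample batch script follows.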
#!/bin/sh
#PBS -N testjob
#PBS -o testjob.out
#PBS -e testjob.err
#PBS -M <INSERT EMAIL HERE>
#PBS -r n
#PBS -l walltime=00:01:00
#PBS -l nodes=1:ppn=1
#PBS -l pmem=100mb

date
hostname
module load python
python -V
To submit this job from submit.hpc.ufl.edu, you would use the following command:
$ qsub <your job script>
To check the status of running jobs, you would use the following command:
$ qstat [-u <username>]
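For example, if the script above were saved as testjob.pbs and your username were smith (both are just placeholder names), a typical sequence would be:

$ qsub testjob.pbs    # submit the job; qsub prints the job ID
$ qstat -u smith      # list your jobs and their current states
$ qdel <jobid>        # cancel a job if needed, using the ID printed by qsub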
- See More Sample Scripts for more information on PBS scripts.
- See Modules for more information on using the installed software via the environment modules system.
Notes on Batch Scripts
- The script can contain only one set of directives. Do not submit a script that has more than one set of directives in it, as this will cause the Moab/Torque system to reject it with a "qsub: Job rejected by all possible destinations" error. This problem was first seen when a user reported a script that was being rejected with this error; upon inspection, the script turned out to contain concatenated copies of itself in the same file.
- For more info on advanced directives, see PBS_Directives
- For a more detailed explanation of what is going on in a batch script
Troubleshooting Batch Scripts
- Ensure that you are using the preferred MPI application launcher. At the HPC Center, mpiexec is the recommended and preferred launcher.
Compiling your own
By default, when you first log in to the system, your environment is set up for the Intel compilers with OpenMPI. This gives you access to C, C++, F77, and F90 compilers in your path, as well as the mpicc and mpif90 wrappers for OpenMPI applications. If you want to change this, you can use the modules system to select a different compiler suite and MPI implementation, as sketched below.
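As a rough sketch of how that looks with environment modules (the module names gcc and openmpi are assumptions; run module avail to see what is actually installed on the system):

module list           # show the modules currently loaded
module avail          # list all modules installed on the system
module unload intel   # unload the default compiler suite (name assumed)
module load gcc       # load a different compiler suite (name assumed)
module load openmpi   # load a matching MPI implementation (name assumed)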