Getting Started

Welcome to UF Research Computing! This page is intended to help new clients get started on HiPerGator.

Note that new users are highly encouraged to take the New User Training course to familiarize themselves with using a supercomputer for research, save time, and avoid many beginner mistakes.

From Zero to HiPerGator

Initial Consult

If a discussion of the group's needs is warranted, you can meet with one of the UF Research Computing Facilitators in person or virtually, or submit a support request to start the conversation.

HiPerGator Accounts

The group's sponsor has to be the first person to request a HiPerGator account, indicating that they are a new sponsor. In the process we will create their sponsored group.

Afterwards, group members will be able to submit HiPerGator account requests indicating their PI as the sponsor. Once approved, their Linux accounts will be created.

Trial Allocation

We recommend that the group's sponsor request a free trial allocation of storage and computational resources to get the group started on HiPerGator. Group members can then use HiPerGator for the 3-month duration of the trial allocation to determine what resources and applications they really need.

Purchasing Resources

After or while the group uses a trial allocation to determine the computational and storage resources it needs, the group's sponsor can submit a purchase request for hardware (5-year terms) or services (3 months or longer) to cover the group's HiPerGator use.

Some groups may have access to shared departmental allocations. In this case, instead of purchasing resources, group members can request to be added to the departmental group to gain access to the shared resources.

Some examples of departments with shared allocations include the Genetics Institute, Emerging Pathogens Institute, Statistics Department, Biostatistics Department, Center for Compressible Multiphase Turbulence (CCMT), Cognitive Aging and Memory Clinical Translational Research Program (CAMCTRP), Center for Molecular Magnetic Quantum Materials, Physics Department, and Plant Pathology Department. In addition, several research groups working on collaborative projects have shared allocations accessible to members of those projects.

At this point the group is established on HiPerGator and can continue its computational work. See below for more details on basic use.

Introduction to Using HiPerGator

To use HiPerGator or HiPerGator-AI you need three basic parts:

Interfaces

You use Interfaces to interact with the system, manage data, initialize computation, and view the results. The main categories of interfaces are the command line (also known as the Terminal), graphical user interfaces, and web interfaces or applications for more specialized use. Some distinctions here are blurred because, for example, you can open a Terminal while using a web interface like JupyterHub or Open OnDemand, but mostly you use a command-line Terminal interface through SSH connections (see below).
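
For illustration, a terminal session is typically opened with an SSH client on your local machine. In the sketch below, <GatorLink> is a placeholder for your GatorLink username, and hpg.rc.ufl.edu is the commonly documented HiPerGator login address (an assumption here; confirm the current hostname in the UFRC documentation):

 # Open a command-line session on a HiPerGator login node.
 # Replace <GatorLink> with your GatorLink username.
 ssh <GatorLink>@hpg.rc.ufl.edu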

Data Management

To perform research analyses you need to upload and manage data. Note that misuse of the storage systems is the second most common reason for account suspension, after running analyses on login nodes.
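
As a minimal sketch of command-line data transfer (the hostname, group name, and /blue path are placeholders, assuming the same login address as above; see the Transfer Data and Storage pages for the authoritative instructions):

 # Copy a single file from your local machine to a (hypothetical) project directory:
 scp results.csv <GatorLink>@hpg.rc.ufl.edu:/blue/<group>/<GatorLink>/
 # Synchronize a whole directory; on repeat runs rsync only transfers changed files:
 rsync -av project_data/ <GatorLink>@hpg.rc.ufl.edu:/blue/<group>/<GatorLink>/project_data/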

HPG Computation

Warning: do not run full-scale (normal) analyses on login nodes. Development and Testing is required reading. The main approach to running computational analyses is to write job scripts and submit them to the scheduler. Some interfaces, like Open OnDemand, JupyterHub, and Galaxy, can manage job scheduling behind the scenes and may be more convenient than job submission from the command line when appropriate.
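
To make the job-script workflow concrete, here is a minimal sketch of a SLURM batch script. All resource values, the module name, and the script name are placeholders chosen for illustration; the Sample SLURM Scripts page has vetted examples for real workloads:

 #!/bin/bash
 #SBATCH --job-name=example_job     # Name shown in the queue
 #SBATCH --ntasks=1                 # Run a single task
 #SBATCH --cpus-per-task=1          # One CPU core for that task
 #SBATCH --mem=2gb                  # Memory request (placeholder value)
 #SBATCH --time=01:00:00            # Walltime limit (placeholder value)
 #SBATCH --output=example_%j.log    # Output log; %j expands to the job ID

 # Load the software you need through the module system (module name is an example):
 module load python

 # Run the analysis (placeholder script name):
 python my_analysis.py

You would save this as, for example, example_job.sh, submit it with sbatch example_job.sh, and check its progress with squeue -u <GatorLink>.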

For more information on how to get started using HiPerGator, visit our Wiki category Essentials, where you can find additional instructions, training, and tutorial videos.

Getting Help

If you are having problems using the UFRC system, try our resources at Help or let our staff know by submitting a support request.