Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
Environment Modules
Run module spider spark to find out what environment modules are available for this application.
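For example, the following commands list the available Spark modules and load one. This is a minimal sketch; the version number shown is illustrative only, so check the module spider output for the versions actually installed on HiperGator:

# List the Spark environment modules available on HiperGator
module spider spark

# Load a Spark module (the version here is an assumption; use one
# reported by "module spider spark")
module load spark/3.1.1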
To run your Spark jobs on HiperGator, you must first create a Spark cluster on HiperGator via SLURM. This section shows a simple example of how to create a Spark cluster on HiperGator and how to submit your Spark jobs to that cluster. For details about running Spark jobs on HiperGator, please refer to the Spark Workshop. For the Spark parameters used in this section, please refer to Spark's homepage.
Spark cluster in HiperGator
In this example, the SLURM script is named spark-cluster.sh and creates a Spark cluster with a single worker node.
First, set the SLURM parameters for the Spark cluster:
#!/bin/bash
#filename: spark-cluster.sh
#SBATCH --job-name=spark_cluster
#SBATCH --nodes=1 # nodes allocated to the job
#SBATCH --cpus-per-task=16 # the number of CPUs allocated per task
#SBATCH --exclusive # do not share allocated nodes with other running jobs
#SBATCH --time=03:00:00 # walltime limit for the cluster (hh:mm:ss)
#SBATCH --output=spark_cluster.log # standard output log
#SBATCH --error=spark_cluster.err # standard error log
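After the resource requests, the script must load Spark and start the master and worker processes on the allocated node. The continuation below is a minimal sketch, assuming the spark environment module sets SPARK_HOME; it uses the standard launcher scripts shipped in Spark's sbin directory (start-worker.sh in Spark 3.x; older releases name it start-slave.sh):

# Load Spark (assumes the module sets SPARK_HOME)
module load spark

# Start the Spark master on this node
export SPARK_MASTER_HOST=$(hostname)
$SPARK_HOME/sbin/start-master.sh

# Start a worker and attach it to the master
# (7077 is Spark's default master port)
$SPARK_HOME/sbin/start-worker.sh spark://$SPARK_MASTER_HOST:7077

# Keep the job alive so the cluster stays up for the full walltime
sleep infinity

Submit the script with sbatch and, once the cluster is running, submit applications to it with spark-submit. The application file and master hostname below are placeholders; the master hostname is reported in spark_cluster.log:

sbatch spark-cluster.sh

# Submit an application to the running cluster
# (my_app.py and <master-hostname> are placeholders)
spark-submit --master spark://<master-hostname>:7077 my_app.py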