Spark


Description

Spark website: https://spark.apache.org/

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

Environment Modules

Run module spider spark to find out what environment modules are available for this application.
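For example, using the standard module commands:

module spider spark
module load spark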

System Variables

  • HPC_SPARK_DIR - installation directory
  • HPC_SPARK_BIN - executable directory
  • HPC_SPARK_SLURM - SLURM job script examples
  • SPARK_HOME - Spark installation directory
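As a quick check, a minimal sketch that inspects these variables once the spark module is loaded (the exact paths depend on the installed version):

echo $HPC_SPARK_DIR       # installation directory
ls $HPC_SPARK_SLURM       # example SLURM job scripts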

Running Spark in HiperGator

To run Spark jobs on HiperGator, you first need to create a Spark cluster via SLURM. This section walks through a simple example of creating a Spark cluster on HiperGator and submitting Spark jobs to it. For more detail on running Spark jobs on HiperGator, see the Spark Workshop (https://help.rc.ufl.edu/doc/Spark_Workshop). For the Spark parameters used in this section, see Spark's homepage (https://spark.apache.org/).

Spark cluster in HiperGator


The example assumes the SLURM script is named spark-cluster.sh and creates a Spark cluster with a single worker node. First, set the SLURM parameters for the cluster:

#!/bin/bash
#filename: spark-cluster.sh
#SBATCH --job-name=spark_cluster
#SBATCH --nodes=1 # nodes allocated to the job
#SBATCH --cpus-per-task=16 # the number of CPUs allocated per task
#SBATCH --exclusive # do not share the allocated nodes with other jobs
#SBATCH --time=03:00:00
#SBATCH --output=spark_cluster.log
#SBATCH --error=spark_cluster.err
module load spark

Next, set the Spark parameters for the cluster:

export SPARK_LOCAL_DIRS=$HOME/spark/tmp           # scratch space for shuffle and temporary files
export SPARK_WORKER_DIR=$SPARK_LOCAL_DIRS         # working directory for the worker process
export SPARK_WORKER_CORES=$SLURM_CPUS_PER_TASK    # give the worker all CPUs allocated by SLURM
export SPARK_MASTER_PORT=7077                     # default Spark master port
export SPARK_MASTER_WEBUI_PORT=8080               # default master web UI port
export SPARK_NO_DAEMONIZE=true                    # run master/worker in the foreground so SLURM tracks them
export SPARK_LOG_DIR=$SPARK_LOCAL_DIRS            # write Spark logs to the scratch directory
#export SPARK_CONF_DIR=$SPARK_LOCAL_DIRS
mkdir -p $SPARK_LOCAL_DIRS                        # create the scratch directory if it does not exist

Then start the Spark master and worker:

MASTER_HOST=$(scontrol show hostname $SLURM_NODELIST | head -n 1)          # first node in the allocation
export SPARK_MASTER_NODE=$(host $MASTER_HOST | head -1 | cut -d ' ' -f 4)  # resolve that node's IP address
export MAX_SLAVES=$(expr $SLURM_JOB_NUM_NODES - 1)                         # nodes beyond the master (0 for a single-node cluster)
# start the Spark master
$SPARK_HOME/sbin/start-master.sh &
# use Spark defaults for worker resources (all memory minus 1 GB, all cores) since the node is allocated exclusively
# start a Spark worker and register it with the master
$SPARK_HOME/sbin/start-slave.sh spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT
Finally, submit the SLURM job script to HiperGator.
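For example, using the standard sbatch command with the file name assumed above:

sbatch spark-cluster.sh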

Submit Spark jobs

Once the cluster is running, Spark jobs can be submitted to it in the following ways (see the sketch after this list):

  • Spark interactive shells in Scala and Python
  1. Spark interactive shell in Scala (spark-shell)
  2. Spark interactive shell in Python (pyspark), for example Pi estimation in pyspark or Pi estimation from a file with pyspark
  • Spark-submit for non-interactive (batch) jobs
  1. Pi estimation
  2. Wordcount
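A minimal sketch of both approaches, assuming the cluster created above is still running, that SPARK_MASTER_NODE and SPARK_MASTER_PORT are set as in spark-cluster.sh, and that pi.py and wordcount.py are the example scripts shipped under $SPARK_HOME/examples in a standard Spark distribution:

# interactive Python shell attached to the cluster
pyspark --master spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT

# batch submission of the bundled Pi estimation example
spark-submit --master spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT \
    $SPARK_HOME/examples/src/main/python/pi.py 100

# batch submission of the bundled wordcount example (mytext.txt is a hypothetical input file)
spark-submit --master spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT \
    $SPARK_HOME/examples/src/main/python/wordcount.py mytext.txt

The Scala shell accepts the same option: spark-shell --master spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT.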