Spark

Description

Spark website: https://spark.apache.org/

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

Environment Modules

Run module spider spark to find out what environment modules are available for this application.

System Variables

  • HPC_SPARK_DIR - installation directory
  • HPC_SPARK_BIN - executable directory
  • HPC_SPARK_SLURM - SLURM job script examples
  • SPARK_HOME - Spark installation directory

Running Spark on HiperGator

To run Spark jobs on HiperGator, you first need to create a Spark cluster on HiperGator via SLURM. This section shows a simple example of how to create a Spark cluster on HiperGator and how to submit Spark jobs to it. For details about running Spark jobs on HiperGator, please refer to the Spark Workshop. For the Spark parameters used in this section, please refer to Spark's homepage.

Spark cluster on HiperGator

The following instructions show how to create a Spark cluster on HiperGator.

In this section, it is assumed that spark-local-cluster.sh is the file name of the SLURM job script for a one-worker-node Spark cluster. Set the SLURM parameters for the Spark cluster:

 #!/bin/bash
 #filename: spark-local-cluster.sh
 #SBATCH --job-name=spark_cluster
 #SBATCH --nodes=1 # number of nodes allocated to the job
 #SBATCH --cpus-per-task=16 # number of CPUs allocated per task
 #SBATCH --exclusive # do not share the allocated nodes with other running jobs
 #SBATCH --time=03:00:00
 #SBATCH --output=spark_cluster.log
 #SBATCH --error=spark_cluster.err

 module load spark

Set the Spark parameters for the Spark cluster:

 export SPARK_LOCAL_DIRS=$HOME/spark/tmp
 export SPARK_WORKER_DIR=$SPARK_LOCAL_DIRS
 export SPARK_WORKER_CORES=$SLURM_CPUS_PER_TASK
 export SPARK_MASTER_PORT=7077
 export SPARK_MASTER_WEBUI_PORT=8080
 export SPARK_NO_DAEMONIZE=true
 export SPARK_LOG_DIR=$SPARK_LOCAL_DIRS
 #export SPARK_CONF_DIR=$SPARK_LOCAL_DIRS
 mkdir -p $SPARK_LOCAL_DIRS

Set up and start the Spark master and worker:

 MASTER_HOST=$(scontrol show hostname $SLURM_NODELIST | head -n 1)
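 # resolve the master host name to an IP address that the worker and clients can connect to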
 export SPARK_MASTER_NODE=$(host $MASTER_HOST | head -1 | cut -d ' ' -f 4)
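 # MAX_SLAVES = number of allocated nodes minus one (not used further in this single-node example)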
 export MAX_SLAVES=$(expr $SLURM_JOB_NUM_NODES - 1)
 # start the Spark master
 $SPARK_HOME/sbin/start-master.sh &
 # use Spark defaults for worker resources (all memory minus 1 GB, all cores) since the node is allocated exclusively
 # start the Spark worker
 $SPARK_HOME/sbin/start-slave.sh spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT

Submit the SLURM job script to HiperGator

sbatch spark-local-cluster.sh

Check that the Spark master has launched.

grep "Starting Spark master" spark_cluster.err

The grep command above should produce output like:

18/03/13 14:53:23 INFO Master: Starting Spark master at spark://c29a-s42.ufhpc:7077

Check that the Spark worker has launched.

grep "Starting Spark worker" spark_cluster.err

The grep command above should produce output like:

18/03/13 14:53:24 INFO Worker: Starting Spark worker 172.16.194.59:42418 with 16 cores, 124.3 GB RAM

Spark interactive job

The following instructions show how to run Spark jobs interactively, without a job script.

Spark supports interactive job submission through the interactive shells.

Spark interactive shell in Scala (spark-shell)

First, load the spark module in the terminal where you want to submit a Spark job.

module load spark

Get the location of the Spark master to connect to it through the interactive shell

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)

Connect to the master using the Spark interactive shell in Scala:

spark-shell --master $SPARK_MASTER

Spark interactive shell in Python (pyspark)

Load the spark module in the terminal where you want to submit a Spark job.

module load spark

Get the location of the Spark master to connect to it through the interactive shell

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)

Connect to the master using pyspark, the Spark interactive shell in Python:

pyspark --master $SPARK_MASTER

Example - Pi estimation via pyspark

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)
pyspark --master $SPARK_MASTER
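
A minimal Monte Carlo sketch that can be pasted at the pyspark prompt (NUM_SAMPLES is an arbitrary sample count chosen here; sc is the SparkContext that the pyspark shell creates automatically):

 import random

 NUM_SAMPLES = 1000000

 def inside(_):
     x, y = random.random(), random.random()
     return x * x + y * y < 1.0

 # distribute NUM_SAMPLES random points and count how many land inside the unit circle
 count = sc.parallelize(range(NUM_SAMPLES)).filter(inside).count()
 print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))
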
Example - Pi estimation from file with pyspark
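
A minimal sketch, assuming this example refers to running the same Pi-estimation code from a script file rather than typing it at the prompt; the file name pi_estimate.py is a placeholder, and the script creates its own SparkSession (Spark 2.x API) instead of relying on the shell:

 # pi_estimate.py (placeholder name)
 import random
 from pyspark.sql import SparkSession

 spark = SparkSession.builder.appName("PiEstimate").getOrCreate()
 sc = spark.sparkContext

 NUM_SAMPLES = 1000000

 def inside(_):
     x, y = random.random(), random.random()
     return x * x + y * y < 1.0

 # distribute NUM_SAMPLES random points and count how many land inside the unit circle
 count = sc.parallelize(range(NUM_SAMPLES)).filter(inside).count()
 print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))
 spark.stop()

The script can be fed to the interactive shell with pyspark --master $SPARK_MASTER < pi_estimate.py, or submitted non-interactively with spark-submit as in the batch examples below.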

Spark batch job

The following instructions show how to submit Spark batch jobs to the cluster with spark-submit.

Spark supports batch job submission through spark-submit.

Example - Pi estimation
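
Apache Spark installations typically ship a Pi-estimation example script under $SPARK_HOME/examples (the exact path may vary between Spark versions). If it is present, it can be submitted to the running cluster with spark-submit, for example:

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)
spark-submit --master $SPARK_MASTER $SPARK_HOME/examples/src/main/python/pi.py 100

The trailing argument is the number of partitions used for the computation.
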
Example - Wordcount
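
A minimal word-count sketch that could be submitted with spark-submit; the script name wordcount.py and the input path input.txt are placeholders to replace with your own:

 # wordcount.py (placeholder name)
 from pyspark.sql import SparkSession

 spark = SparkSession.builder.appName("WordCount").getOrCreate()
 sc = spark.sparkContext

 # split each line of the input file into words and count the occurrences of each word
 lines = sc.textFile("input.txt")  # placeholder input path
 counts = (lines.flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))

 for word, count in counts.collect():
     print("%s %d" % (word, count))

 spark.stop()

Submit it to the cluster with, for example, spark-submit --master $SPARK_MASTER wordcount.py.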


Job Script Examples

See the Spark_Job_Scripts page for Spark job script examples.