Spark

Description

Apache Spark website: https://spark.apache.org/

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

Environment Modules

Run module spider spark to find out what environment modules are available for this application.

System Variables

  • HPC_SPARK_DIR - installation directory
  • HPC_SPARK_BIN - executable directory
  • HPC_SPARK_SLURM - SLURM job script examples
  • SPARK_HOME - Spark installation directory (used by the start scripts below)
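
After the spark module is loaded, these variables can be used directly in the shell. For example, assuming HPC_SPARK_SLURM points to a directory of example job scripts as described above, those examples can be listed with

module load spark
ls $HPC_SPARK_SLURM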

Running Spark in HiperGator

To run your Spark jobs in HiperGator, a Spark cluster must first be created in HiperGator via SLURM. This section shows a simple example of how to create a Spark cluster in HiperGator and how to submit your Spark jobs to that cluster. For details about running Spark jobs in HiperGator, please refer to the Spark Workshop. For the Spark parameters used in this section, please refer to Spark's homepage.

Spark cluster in HiperGator


In this section, it is assumed that spark-local-cluster.sh is the file name of the SLURM job script for a one-worker-node Spark cluster. First, set the SLURM parameters for the Spark cluster.

#!/bin/bash
#filename: spark-local-cluster.sh
#SBATCH --job-name=spark_cluster
#SBATCH --nodes=1 # nodes allocated to the job
#SBATCH --cpus-per-task=16 # the number of CPUs allocated per task
#SBATCH --exclusive # do not share the allocated node with other running jobs
#SBATCH --time=03:00:00
#SBATCH --output=spark_cluster.log
#SBATCH --error=spark_cluster.err
module load spark

Set the Spark parameters for the cluster (these lines continue the spark-local-cluster.sh job script).

export SPARK_LOCAL_DIRS=$HOME/spark/tmp
export SPARK_WORKER_DIR=$SPARK_LOCAL_DIRS
export SPARK_WORKER_CORES=$SLURM_CPUS_PER_TASK
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_NO_DAEMONIZE=true
export SPARK_LOG_DIR=$SPARK_LOCAL_DIRS
#export SPARK_CONF_DIR=$SPARK_LOCAL_DIRS
mkdir -p $SPARK_LOCAL_DIRS

Start the Spark master and worker

# resolve the IP address of the first allocated node, which hosts the Spark master
MASTER_HOST=$(scontrol show hostname $SLURM_NODELIST | head -n 1)
export SPARK_MASTER_NODE=$(host $MASTER_HOST | head -1 | cut -d ' ' -f 4)
# number of worker nodes beyond the master node (0 for this one-node example)
export MAX_SLAVES=$(expr $SLURM_JOB_NUM_NODES - 1)
# start the Spark master
$SPARK_HOME/sbin/start-master.sh &
# start the Spark worker, using Spark defaults for worker resources
# (all memory minus 1 GB, all cores) since the node is allocated exclusively
$SPARK_HOME/sbin/start-slave.sh spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT

Submit the SLURM job script to HiperGator

sbatch spark-local-cluster.sh

Check that the Spark master has launched.

grep "Starting Spark master" spark_cluster.err

The grep command above should produce output like

18/03/13 14:53:23 INFO Master: Starting Spark master at spark://c29a-s42.ufhpc:7077

Check that the Spark worker has launched.

grep "Starting Spark worker" spark_cluster.err

The grep command above should produce output like

18/03/13 14:53:24 INFO Worker: Starting Spark worker 172.16.194.59:42418 with 16 cores, 124.3 GB RAM

Spark interactive job


Spark supports interactive job submission through its interactive shells.

Spark interactive shell in Scala (spark-shell)

First, load the spark module in the terminal from which you want to submit a Spark job.

module load spark

Get the location of the Spark master so that you can connect to it from the interactive shell.

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)

Connect to the master using the Spark interactive shell in Scala.

spark-shell --master $SPARK_MASTER

Spark interactive shell in Python (pyspark)

Load the spark module in the terminal from which you want to submit a Spark job.

module load spark

Get the location of the Spark master so that you can connect to it from the interactive shell.

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)

Connect to the master using the Spark interactive shell in Python.

pyspark --master $SPARK_MASTER

Example - Pi estimation via pyspark

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)
pyspark --master $SPARK_MASTER
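
Below is a minimal sketch of a Monte Carlo Pi estimate that can be pasted into the pyspark shell started above; NUM_SAMPLES is an arbitrary illustration value, and sc is the SparkContext that the pyspark shell creates automatically.

# Monte Carlo estimate of Pi, typed into the pyspark shell
# ("sc" is the SparkContext the shell provides automatically)
import random

NUM_SAMPLES = 1000000  # illustrative sample count; increase for a better estimate

def inside(_):
    # pick a random point in the unit square and test whether it falls inside the quarter circle
    x, y = random.random(), random.random()
    return x * x + y * y < 1

count = sc.parallelize(range(NUM_SAMPLES)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))

Because the points are uniform in the unit square, the fraction that falls inside the quarter circle converges to Pi/4, so multiplying by 4 gives the estimate.
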
Example - Pi estimation from file with pyspark
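
One way to run the same estimate from a script file rather than typing it interactively, assuming the snippet above is saved in a hypothetical file named pi_estimate.py, is to feed the file to the pyspark shell on standard input

pyspark --master $SPARK_MASTER < pi_estimate.py

Alternatively, exec(open("pi_estimate.py").read()) runs the same file from within an already running pyspark session.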

Spark batch job


Spark supports batch job submission through spark-submit.

Example - Pi estimation
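
Below is a minimal sketch, assuming the code is saved in a hypothetical file named pi_estimate_submit.py; unlike the interactive shells, a script run through spark-submit has to create its own SparkSession.

# pi_estimate_submit.py - hypothetical file name
# Monte Carlo estimate of Pi for submission with spark-submit
import random
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PiEstimation").getOrCreate()
sc = spark.sparkContext

NUM_SAMPLES = 1000000  # illustrative sample count

def inside(_):
    x, y = random.random(), random.random()
    return x * x + y * y < 1

count = sc.parallelize(range(NUM_SAMPLES)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))
spark.stop()

Submit it to the cluster created above

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)
spark-submit --master $SPARK_MASTER pi_estimate_submit.py
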
Example - Wordcount
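
Below is a minimal word-count sketch, assuming a hypothetical script named wordcount.py that takes an input text file and an output directory (which must not already exist) as arguments.

# wordcount.py - hypothetical file name
# Count word occurrences in a text file and write the results to a directory
import sys
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()
sc = spark.sparkContext

input_path = sys.argv[1]   # path to a text file
output_path = sys.argv[2]  # output directory; must not already exist

counts = (sc.textFile(input_path)
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

counts.saveAsTextFile(output_path)
spark.stop()

Submit it with placeholder input and output names

spark-submit --master $SPARK_MASTER wordcount.py input.txt wordcount_output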