Description
Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.
Environment Modules
Run module spider spark
to find out what environment modules are available for this application.
System Variables
- HPC_SPARK_DIR - installation directory
- HPC_SPARK_BIN - executable directory
- HPC_SPARK_SLURM - SLURM job script examples
- SPARK_HOME - Spark installation directory (used by the start scripts below)
Running Spark in HiperGator
To run your Spark jobs in HiperGator, a Spark cluster should first be created in HiperGator via SLURM. This section shows a simple example of how to create a Spark cluster in HiperGator and how to submit your Spark jobs to that cluster. For details about running Spark jobs in HiperGator, please refer to the Spark Workshop. For the Spark parameters used in this section, please refer to Spark's homepage.
Spark cluster in HiperGator
Expand this section to view instructions for creating a Spark cluster in HiperGator.
In this section, it is assumed that spark-local-cluster.sh is the file name of the SLURM job script for a one-worker-node Spark cluster. Set the SLURM parameters for the Spark cluster:
#!/bin/bash
#filename: spark-local-cluster.sh
#SBATCH --job-name=spark_cluster
#SBATCH --nodes=1               # nodes allocated to the job
#SBATCH --cpus-per-task=16      # the number of CPUs allocated per task
#SBATCH --exclusive             # do not share allocated nodes with other running jobs
#SBATCH --time=03:00:00
#SBATCH --output=spark_cluster.log
#SBATCH --error=spark_cluster.err
module load spark
Set the Spark parameters for the Spark cluster:
export SPARK_LOCAL_DIRS=$HOME/spark/tmp
export SPARK_WORKER_DIR=$SPARK_LOCAL_DIRS
export SPARK_WORKER_CORES=$SLURM_CPUS_PER_TASK
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_NO_DAEMONIZE=true
export SPARK_LOG_DIR=$SPARK_LOCAL_DIRS
#export SPARK_CONF_DIR=$SPARK_LOCAL_DIRS

mkdir -p $SPARK_LOCAL_DIRS
Set Spark Master and Workers
MASTER_HOST=$(scontrol show hostname $SLURM_NODELIST | head -n 1)
export SPARK_MASTER_NODE=$(host $MASTER_HOST | head -1 | cut -d ' ' -f 4)
export MAX_SLAVES=$(expr $SLURM_JOB_NUM_NODES - 1)

# for starting spark master
$SPARK_HOME/sbin/start-master.sh &

# use spark defaults for worker resources (all mem -1 GB, all cores) since using exclusive
# for starting spark worker
$SPARK_HOME/sbin/start-slave.sh spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT
Submit the SLURM job script to HiperGator
sbatch spark-local-cluster.sh
Check that the Spark master has launched:
grep "Starting Spark master" spark_cluster.err
The grep command above should produce output like:
18/03/13 14:53:23 INFO Master: Starting Spark master at spark://c29a-s42.ufhpc:7077
Check that the Spark worker has launched:
grep "Starting Spark worker" spark_cluster.err
The grep command above should produce output like:
18/03/13 14:53:24 INFO Worker: Starting Spark worker 172.16.194.59:42418 with 16 cores, 124.3 GB RAM
Spark interactive job
Expand this section to view instructions for running Spark jobs interactively.
Spark supports interactive job submission through its interactive shells, which connect to the Spark cluster created above.
- Spark interactive shell in Scala (spark-shell)
First, load the spark module in the terminal from which you want to submit a Spark job.
module load spark
Get the location of the Spark master so that the interactive shell can connect to it:
SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)
Connect to the master using the Spark interactive shell in Scala:
spark-shell --master $SPARK_MASTER
- Spark interactive shell in Python (pyspark)
Load the spark module in the terminal from which you want to submit a Spark job.
module load spark
Get the location of the Spark master so that the interactive shell can connect to it:
SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)
Connect to the master using the Spark interactive shell in Python:
pyspark --master $SPARK_MASTER
- Example: Pi estimation via pyspark (a sketch follows the list below)

SPARK_MASTER=$(grep "Starting Spark master" *.err | cut -d " " -f 9)
pyspark --master $SPARK_MASTER
- Pi estimation in pyspark (see the sketches below)
- Pi estimation from a file with pyspark
- spark-submit (see the Spark batch job section)
- Pi estimation
- Wordcount
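The following is a minimal sketch of the Monte Carlo Pi estimation referred to above. It assumes the pyspark shell was started with --master $SPARK_MASTER as shown earlier, so the SparkContext is already available as sc; the sample count of 1,000,000 is an arbitrary choice.

# Monte Carlo Pi estimation typed into the pyspark shell.
# The pyspark shell already provides the SparkContext as sc.
import random

num_samples = 1000000  # arbitrary sample count; larger values give a better estimate

def inside(_):
    # Draw a random point in the unit square and test whether it falls
    # inside the quarter circle of radius 1.
    x, y = random.random(), random.random()
    return x * x + y * y < 1.0

# Distribute the samples across the cluster workers and count the hits.
count = sc.parallelize(range(num_samples)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / num_samples))

For the "Pi estimation from a file" item, the original page gives no code; one possible reading, shown here purely as an assumption, is to read pre-generated points from a hypothetical text file points.txt containing one "x y" pair per line:

# Hypothetical variant: estimate Pi from points stored in a text file.
# points.txt is an assumed file with one "x y" pair per line, e.g. "0.12 0.98".
points = sc.textFile("points.txt").map(lambda line: [float(v) for v in line.split()])
hits = points.filter(lambda p: p[0] * p[0] + p[1] * p[1] < 1.0).count()
total = points.count()
print("Pi is roughly %f" % (4.0 * hits / total))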
Spark batch job
Expand this section to view instructions for submitting Spark batch jobs with spark-submit.
Spark supports batch job submission through spark-submit.
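As with the interactive shells, spark-submit needs the address of the running Spark master, which can be obtained with the same grep command used above. Below is a minimal sketch of a batch Pi estimation script; the file name pi_estimation.py and the submit command in the comments are assumptions, not taken from the original page.

# pi_estimation.py - minimal batch Pi estimation sketch for spark-submit.
# Assumed invocation, after "module load spark" and setting SPARK_MASTER:
#   spark-submit --master $SPARK_MASTER pi_estimation.py
import random
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PiEstimation").getOrCreate()
sc = spark.sparkContext

num_samples = 1000000  # arbitrary sample count

def inside(_):
    x, y = random.random(), random.random()
    return x * x + y * y < 1.0

count = sc.parallelize(range(num_samples)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / num_samples))
spark.stop()

A word-count script, corresponding to the Wordcount item in the example list above, could look like the following sketch; input.txt and the output directory wordcount_output are hypothetical paths.

# wordcount.py - minimal word-count sketch for spark-submit.
# Assumed invocation: spark-submit --master $SPARK_MASTER wordcount.py
from operator import add
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Wordcount").getOrCreate()
sc = spark.sparkContext

# input.txt is a hypothetical text file in the working directory.
counts = (sc.textFile("input.txt")
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(add))

# Write one (word, count) pair per line to a hypothetical output directory.
counts.saveAsTextFile("wordcount_output")
spark.stop()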