Spark
Description
Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.
Environment Modules
Run module spider spark to find out what environment modules are available for this application.
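For example, to list the available Spark versions and then load the default one:
<pre>
module spider spark
module load spark
</pre>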
System Variables
- HPC_SPARK_DIR - installation directory
- HPC_SPARK_BIN - executable directory
- HPC_SPARK_SLURM - SLURM job script examples
- SPARK_HOME - Spark installation directory (the start-master.sh and start-slave.sh scripts used below live under $SPARK_HOME/sbin)
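Once the module is loaded these variables can be inspected directly; a minimal sketch, assuming read access to the directories:
<pre>
module load spark
echo $HPC_SPARK_DIR      # installation directory
ls $HPC_SPARK_SLURM      # provided SLURM job script examples
</pre>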
Running Spark on HiperGator
To run your Spark jobs on HiperGator, two separate steps are required:
- Create a Spark cluster on HiperGator via SLURM. The "Spark cluster on HiperGator" section below shows a simple example of how to create a Spark cluster on HiperGator.
- Submit your job to your Spark cluster. You can do this either interactively at the command line (see the "Spark interactive job" section below) or by submitting a batch job (see the "Spark batch job" section below).
For details about running Spark jobs on HiPerGator, please refer to Spark Workshop. For Spark parameters used in this section, please refer to Spark's homepage.
Spark cluster on HiperGator
A Spark cluster is created by submitting a SLURM job script that starts the Spark master and a worker on the node allocated to the job. In this section it is assumed that spark-local-cluster.sh is the file name of the SLURM job script for a one-worker-node Spark cluster; the full script is shown in the "Job Script Examples" section below. The script sets the SLURM resource requests, loads the spark module, sets the Spark environment variables, and then starts the master and the worker. Submit the SLURM job script to HiperGator to start the cluster.
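For example, from the directory that contains the job script:
<pre>
sbatch spark-local-cluster.sh
squeue -u $USER    # wait until the spark_cluster job is running
</pre>
Once the job is running, the Spark master listens on port 7077 of the allocated node (SPARK_MASTER_PORT in the script) and the job output is written to spark_cluster.log.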
Spark interactive job
Expand this section to view instructions for submitting work to your Spark cluster interactively at the command line.
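A minimal sketch of one way to do this, assuming a cluster started with spark-local-cluster.sh as above; <master-node> is a placeholder for the node running the Spark master, and driving the session with pi_with_pythonstartup.py via PYTHONSTARTUP is an assumption based on that example's file name:
<pre>
module load spark
# open an interactive PySpark shell connected to the cluster;
# the PYTHONSTARTUP file is executed when the shell starts
PYTHONSTARTUP=pi_with_pythonstartup.py pyspark --master spark://<master-node>:7077
</pre>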
Spark batch job
Expand this section to view instructions for submitting a batch job to your Spark cluster.
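A minimal sketch of submitting an application to the cluster in batch mode with spark-submit; your_spark_app.py is a hypothetical application script, and unlike the interactive pi example below it would need to create its own SparkContext:
<pre>
module load spark
spark-submit --master spark://<master-node>:7077 your_spark_app.py
</pre>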
Job Script Examples
spark-local-cluster.sh, the SLURM job script used in the "Spark cluster on HiperGator" section above to create a single-node Spark cluster:
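<source lang=bash>
#!/bin/bash
#filename: spark-local-cluster.sh
#SBATCH --job-name=spark_cluster
#SBATCH --nodes=1                  # nodes allocated to the job
#SBATCH --cpus-per-task=16         # number of CPUs allocated per task
#SBATCH --exclusive                # do not share allocated nodes with other running jobs
#SBATCH --time=03:00:00
#SBATCH --output=spark_cluster.log
#SBATCH --error=spark_cluster.err
###SBATCH --ntasks=                # tasks to be created for the job
###SBATCH --ntasks-per-core=       # max number of tasks per allocated core
###SBATCH --ntasks-per-node=       # max number of tasks per allocated node
###SBATCH --mail-type=END,FAIL
###SBATCH --mail-user=<yourID>@ufl.edu

module load spark

### Set Spark variables
export SPARK_LOCAL_DIRS=$HOME/spark/tmp
export SPARK_WORKER_DIR=$SPARK_LOCAL_DIRS
export SPARK_WORKER_CORES=$SLURM_CPUS_PER_TASK
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_NO_DAEMONIZE=true
export SPARK_LOG_DIR=$SPARK_LOCAL_DIRS
#export SPARK_CONF_DIR=$SPARK_LOCAL_DIRS
mkdir -p $SPARK_LOCAL_DIRS

# determine the node that will host the Spark master
MASTER_HOST=$(scontrol show hostname $SLURM_NODELIST | head -n 1)
export SPARK_MASTER_NODE=$(host $MASTER_HOST | head -1 | cut -d ' ' -f 4)
export MAX_SLAVES=$(expr $SLURM_JOB_NUM_NODES - 1)

# start the Spark master
$SPARK_HOME/sbin/start-master.sh &

# start the Spark worker
# use Spark defaults for worker resources (all memory minus 1 GB, all cores) since the node is exclusive
$SPARK_HOME/sbin/start-slave.sh spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT
</source>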
pi_with_pythonstartup.py, a PySpark example that estimates Pi; it relies on the SparkContext sc provided by the interactive PySpark shell:
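<source lang=python>
from operator import add
from random import random

partitions = 10
n = 100000 * partitions

def f(_):
    # sample a random point in the 2x2 square and test whether it falls inside the unit circle
    x = random() * 2 - 1
    y = random() * 2 - 1
    return 1 if x ** 2 + y ** 2 <= 1 else 0

# 'sc' is the SparkContext provided by the pyspark shell
count = sc.parallelize(range(1, n + 1), partitions).map(f).reduce(add)
print("Pi is roughly %f" % (4.0 * count / n))
</source>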