Spark

Description

Spark website: https://spark.apache.org/

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

Environment Modules

Run module spider spark to find out what environment modules are available for this application.
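For example, to list the available versions and load one (the version shown is illustrative; use one that module spider actually reports):

<pre>
  # list the Spark versions installed on HiPerGator
  module spider spark
  # load a specific version (illustrative; pick one from the list above)
  module load spark/3.1.1
</pre>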

System Variables

  • HPC_SPARK_DIR - installation directory
  • HPC_SPARK_BIN - executable directory
  • HPC_SPARK_SLURM - SLURM job script examples
  • SPARK_HOME - Spark installation directory (used by Spark's own launch scripts, e.g. $SPARK_HOME/sbin)
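As a quick sketch of how these variables can be used once the module is loaded (assuming the module sets them as listed above):

<pre>
  module load spark
  ls $HPC_SPARK_BIN     # Spark executables
  ls $HPC_SPARK_SLURM   # example SLURM job scripts to copy and adapt
</pre>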

Running Spark on HiPerGator

To run your Spark jobs on HiPerGator, two separate steps are required:

  1. Create a Spark cluster on HiPerGator via SLURM. The "Spark cluster on HiPerGator" section below shows a simple example of how to create a Spark cluster on HiPerGator.
  2. Submit your job to your Spark cluster. You can do this either interactively at the command line (the "Spark interactive job" section below) or by submitting a batch job (the "Spark batch job" section below).

For details about running Spark jobs on HiPerGator, please refer to the Spark Workshop. For the Spark parameters used in this section, please refer to Spark's homepage.

Spark cluster on HiPerGator

In this section it is assumed that spark-local-cluster.sh is the file name of the SLURM job script for a one-worker-node Spark cluster. Set the SLURM parameters for the Spark cluster; the full spark-local-cluster.sh is available on the Spark_Job_Scripts page below.
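Below is a minimal sketch of spark-local-cluster.sh. Only the file header and the worker-start line come from this page; the SLURM directives, resource values, master-start step, and the final sleep are illustrative assumptions. See the Spark_Job_Scripts page for the authoritative script.

<pre>
  #!/bin/bash
  #filename: spark-local-cluster.sh
  #SBATCH --job-name=spark-cluster   ## illustrative SLURM directives;
  #SBATCH --nodes=1                  ## adjust resources to your needs
  #SBATCH --cpus-per-task=8
  #SBATCH --mem=32gb
  #SBATCH --time=04:00:00

  module load spark

  ## the master runs on the node SLURM allocated to this job
  export SPARK_MASTER_NODE=$(hostname -s)
  export SPARK_MASTER_PORT=7077      ## Spark's default standalone port

  ## for starting spark master
  $SPARK_HOME/sbin/start-master.sh

  ## for starting spark worker
  $SPARK_HOME/sbin/start-slave.sh spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT

  ## keep the job, and therefore the cluster, alive until the walltime ends
  sleep infinity
</pre>

Submit the SLURM job script to HiPerGator:

<pre>
  sbatch spark-local-cluster.sh
</pre>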

Spark interactive job

Once the cluster is running, you can connect to it and run Spark jobs interactively from the command line.
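As a sketch, assuming a PySpark shell and the master port set in the job script above; replace <master-node> with the node name that squeue reports for your cluster job:

<pre>
  ## find the node on which your Spark master is running
  squeue -u $USER
  ## connect an interactive PySpark shell to the running cluster
  pyspark --master spark://<master-node>:7077
</pre>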

Spark batch job

Alternatively, you can submit a self-contained Spark application to the running cluster as a batch job with spark-submit.
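As a sketch, the SparkPi example that ships with Spark itself can be run this way (the jar path pattern may differ between versions; <master-node> is again the node running your master):

<pre>
  ## run the bundled SparkPi example on the cluster in batch mode
  spark-submit \
      --master spark://<master-node>:7077 \
      --class org.apache.spark.examples.SparkPi \
      $SPARK_HOME/examples/jars/spark-examples_*.jar 1000
</pre>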


Job Script Examples

See the Spark_Job_Scripts page for Spark job script examples.