Spark

Description

Spark website: https://spark.apache.org

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

Environment Modules

Run module spider spark to find out what environment modules are available for this application.
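
For example, a minimal sketch of finding and loading the module (whether a bare module load spark picks a default version is an assumption; module spider reports the exact module names to use):

  module spider spark   # list the Spark versions installed on HiperGator
  module load spark     # load the module; a specific version such as spark/2.3.0 may be required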

System Variables

  • HPC_SPARK_DIR - installation directory
  • HPC_SPARK_BIN - executable directory
  • HPC_SPARK_SLURM - SLURM job script examples
  • SPARK_HOME - Spark installation directory
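
As a sketch of how these variables can be used once the module is loaded (the exact file names under HPC_SPARK_SLURM are not specified here):

  module load spark
  echo "$SPARK_HOME"         # the Spark installation directory
  ls "$HPC_SPARK_SLURM"      # browse the example SLURM job scripts
  cp "$HPC_SPARK_SLURM"/* .  # copy the examples into the current directory to adapt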

Running Spark in HiperGator

To run your Spark jobs in HiperGator, first create a Spark cluster in HiperGator via SLURM. This section shows a simple example of how to create a Spark cluster in HiperGator and how to submit your Spark jobs to the cluster. For details about running Spark jobs in HiperGator, please refer to the Spark Workshop. For the Spark parameters used in this section, please refer to Spark's homepage.

Spark cluster in HiperGator

Expand this section to view instructions for creating a Spark cluster in HiperGator.

In this section, it is assumed that spark-local-cluster.sh is the file name of the SLURM job script for a one-worker-node Spark cluster.

Set the SLURM parameters for the Spark cluster:

  #!/bin/bash
  #filename: spark-local-cluster.sh
  #SBATCH --job-name=spark_cluster
  #SBATCH --nodes=1 # nodes allocated to the job

Start a Spark worker and register it with the master:

  $SPARK_HOME/sbin/start-slave.sh spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT

Submit the SLURM job script to HiperGator:

  sbatch spark-local-cluster.sh
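
For orientation, here is a fuller sketch of what a complete one-worker job script of this shape might look like. The resource requests, the start-master.sh step, the sleep calls, and the port choice are illustrative assumptions based on standard Spark standalone and SLURM usage, not HiperGator-specific values:

  #!/bin/bash
  #filename: spark-local-cluster.sh
  #SBATCH --job-name=spark_cluster
  #SBATCH --nodes=1                # nodes allocated to the job
  #SBATCH --ntasks=1               # assumption: daemons are launched from the batch script itself
  #SBATCH --cpus-per-task=4        # assumption: cores for the Spark worker
  #SBATCH --mem=16gb               # assumption: memory for the Spark worker
  #SBATCH --time=08:00:00          # assumption: how long the cluster stays up

  module load spark                # load the Spark environment module

  # The standalone master runs on this job's node; 7077 is Spark's default master port.
  export SPARK_MASTER_NODE=$(hostname -s)
  export SPARK_MASTER_PORT=7077

  # Start the master, give it a moment to come up, then register one worker with it.
  $SPARK_HOME/sbin/start-master.sh
  sleep 10
  $SPARK_HOME/sbin/start-slave.sh spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT

  # Both daemons run in the background; keep the job alive so the cluster stays up.
  sleep infinity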

Submit Spark jobs

Expand this section to view instructions for submitting Spark jobs to a running cluster.
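
Once the cluster is running, Spark applications can be submitted to it with spark-submit. A minimal sketch, assuming the SPARK_MASTER_NODE and SPARK_MASTER_PORT values used when the cluster was started, and using the Pi example that ships in standard Spark distributions (the core count is an arbitrary illustration):

  module load spark
  # Point spark-submit at the standalone master started by the cluster job script.
  spark-submit \
      --master spark://$SPARK_MASTER_NODE:$SPARK_MASTER_PORT \
      --total-executor-cores 4 \
      $SPARK_HOME/examples/src/main/python/pi.py 100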