Unless noted otherwise, code is tested with Spark **2.2**

====== Non-committal testdrive ======

Minimum-effort way to test-drive Spark with a [[https://databricks.com/spark/getting-started-with-apache-spark/quick-start#overview|Databricks tutorial]] (no local setup required).
{{:fg.png|}}

===== Submitting jobs =====

==== Providing spark jars ====
https://spark.apache.org/docs/latest/running-on-yarn.html#preparations

How to set up provided jars (found [[https://mapr.com/docs/60/Spark/ConfigureSparkJARLocation_2.0.1.html|here]]):

<code bash>
cd /opt/spark-2.2.0-bin-hadoop2.7/jars
zip /opt/spark-2.2.0-bin-hadoop2.7/spark220-jars.zip ./*
# and then copy the archive to your HDFS
hdfs dfs -put /opt/spark-2.2.0-bin-hadoop2.7/spark220-jars.zip /user/hdfs/
</code>

Then you can use the provided archive by adding the following to your ''spark-submit'' call:

<code bash>
--conf spark.yarn.archive=hdfs:///user/hdfs/spark220-jars.zip
</code>
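For context, a full ''spark-submit'' invocation using the uploaded archive might look like the sketch below. The application class ''com.example.MyApp'', the jar name ''my-app.jar'', and the deploy mode are placeholder assumptions, not from this page:

<code bash>
# Hypothetical example: submit an application to YARN, reusing the
# pre-uploaded jar archive instead of shipping the local Spark jars
# on every submission.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.archive=hdfs:///user/hdfs/spark220-jars.zip \
  --class com.example.MyApp \
  my-app.jar
</code>

With ''spark.yarn.archive'' set, YARN localizes the archive from HDFS once per node instead of uploading the jars from the client for each job.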
====== Testing ======