jsbroks / ws-data-spark

work sample for data aspect, Apache Spark variant

Environment

Unless you already have a working Apache Spark cluster, you will need Docker for a simple environment setup.

The provided docker-compose.yml and the Spark configurations in the conf directory are cloned from https://github.com/gettyimages/docker-spark.

Setup

  1. Make sure Docker is installed properly and docker-compose is ready to use
  2. Run $ docker-compose up -d under the data-mr directory
  3. Check the Spark UI at http://localhost:8080; you should see 1 master and 1 worker
  4. Run $ docker exec -it datamr_master_1 /bin/bash to get a shell inside the container, then start using Spark commands such as # spark-shell, # pyspark or # spark-submit. Replace datamr_master_1 with the actual container name spawned by docker-compose if it differs (see the condensed session below)
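The steps above, condensed into one shell session. The container name datamr_master_1 is the default docker-compose generates for this directory; verify yours with docker ps before copying it:

  $ cd data-mr
  $ docker-compose up -d
  $ docker ps --format '{{.Names}}'          # confirm the master container name
  $ docker exec -it datamr_master_1 /bin/bash
  # spark-shell                              # Scala REPL
  # pyspark                                  # Python REPL
  # spark-submit --version                   # print the Spark version and exit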

(demo.gif: animated demo of the setup steps above)

Notes on working through the problems

If you're not already familiar with Apache Spark, you'll need to go through its documentation for the available APIs. The Spark version that ships with this Docker setup is determined by the image from https://github.com/gettyimages/docker-spark.
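To check which version you actually have, run the following inside the master container shell; spark-submit prints its version banner directly:

  # spark-submit --version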

For jobs that rely on external dependencies and libraries, make sure those dependencies are packaged and shipped along with the job on submission.
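As a sketch for the Python case (the package directory mylib/ and the script job.py are hypothetical names, not files in this repo), local packages can be bundled into a zip and shipped with spark-submit's --py-files flag:

  $ zip -r deps.zip mylib/                   # bundle the hypothetical local package mylib/
  # spark-submit --py-files deps.zip job.py  # ship the bundle alongside the job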

On submission, we will need:

  1. Source code of the solution
  2. Build instructions for job packaging (unless your solution is a single .py file), such as Maven or SBT for Scala/Java, or a setup.py for a Python .zip/.egg

Make sure the jobs can be submitted (through the spark-submit command) from the Spark Master container shell. A data directory is provided that is mapped between the Spark Master container and your host system; inside the container it is accessible as /tmp/data. Place both your jobs and the work sample data there (the work sample data is already included).
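As a hedged illustration of what such a job could look like: a minimal PySpark script, assuming the image provides Spark 2.x or later (so SparkSession is available) and using a hypothetical input file name sample.csv under /tmp/data; the actual work sample files in that directory will differ.

  from pyspark.sql import SparkSession

  def main():
      # The master URL comes from the spark-submit invocation / cluster config.
      spark = SparkSession.builder.appName("work-sample-job").getOrCreate()

      # Hypothetical input file; replace with the actual work sample data under /tmp/data.
      df = spark.read.csv("/tmp/data/sample.csv", header=True, inferSchema=True)

      # Placeholder transformation: count rows per value of a hypothetical column "label".
      counts = df.groupBy("label").count()

      # Write results back to the shared directory so they are visible on the host.
      counts.write.mode("overwrite").csv("/tmp/data/output")

      spark.stop()

  if __name__ == "__main__":
      main()

It could then be submitted from the master container shell. The master URL spark://master:7077 below is an assumption based on the gettyimages compose defaults; confirm the actual URL shown at the top of the Spark UI:

  # spark-submit --master spark://master:7077 /tmp/data/job.py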
