Shayokh144 / Spark_with_Python


Apache Spark Intro

  • What is Apache Spark?

    Apache Spark is an open-source cluster-computing framework (Wikipedia). It is a general-purpose big-data processing engine suitable for a wide range of uses: it lets data workers efficiently run streaming, machine learning, or SQL workloads that require fast, iterative access to datasets. Spark works with the filesystem to distribute data across the cluster and process it in parallel. Spark jobs perform multiple operations consecutively, in memory, spilling to disk only when memory limitations require it. A short PySpark sketch follows this list.

  • Why Apache Spark?

  • Spark is not an alternative to Hadoop; it is an alternative to MapReduce.

  • Spark Ecosystem

(image: Spark ecosystem diagram)
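
To make that in-memory, parallel model concrete, here is a minimal PySpark word-count sketch (a sketch only, assuming PySpark is already installed as described in the Installation section; the app name "WordCount" is arbitrary):

```python
from pyspark import SparkContext

# "local[*]" runs Spark on all local cores instead of a cluster.
sc = SparkContext("local[*]", "WordCount")

lines = sc.parallelize(["spark keeps data in memory",
                        "spark runs jobs in parallel"])
counts = (lines.flatMap(lambda line: line.split())  # split lines into words
               .map(lambda word: (word, 1))         # pair each word with 1
               .reduceByKey(lambda a, b: a + b))    # sum the pairs per word
print(counts.collect())
sc.stop()
```

The chained transformations stay in memory and only run when collect() triggers them, which is the "multiple operations consecutively, in memory" behavior described above.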

Spark with Python

Note: Scala is recommended for Spark, but Python is a good choice for quick learning.

Spark with Python Intro

Installation

  • Requirements
    • OS: Linux / Mac / Windows
    • Spark
    • Python
    • Scala, Java

Installation Instructions

The instructions below apply to Ubuntu 16.04.

  • Here we use Spark 2.1.0; for this particular version we need to install Python 3.5 to avoid issues.
  • First, check the default Python version: open a terminal and run:
    • $ python3
      • If the version is 3.5, we are fine; otherwise install Python 3.5 (a quick in-Python version check is sketched below, after the Jupyter screenshot).
  • Install pip3
    • $ sudo apt install python3-pip
  • Install Jupyter Notebook
    • $ pip3 install jupyter
    • $ jupyter notebook
    • This opens Jupyter Notebook in the default browser; if it does not open, click the link printed in the terminal.

(screenshot: Jupyter Notebook running in the browser)
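
As a quick sanity check on the interpreter version (a minimal sketch, not part of the original steps), you can verify from inside Python that you are on 3.5:

```python
import sys

# This guide pairs Spark 2.1.0 with Python 3.5, so fail fast on anything else.
print(sys.version)
assert sys.version_info[:2] == (3, 5), "expected Python 3.5"
```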

  • Now we need Java. First, check whether it already exists:

    • $ java -version
    • If Java is preinstalled, this prints something like: openjdk version "1.8.0_151" ...
  • If it is not installed:

    • $ sudo apt-get update
    • $ sudo apt-get install default-jre
    • $ java -version
  • Now install Scala

    • $ sudo apt-get install scala
    • $ scala -version
  • To connect Python with Scala and Java, we need to install py4j (a short sketch of what py4j does follows this list)

    • $ pip3 install py4j
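
For context, py4j is the bridge PySpark uses to drive the JVM from Python. The snippet below is a minimal sketch of that mechanism based on py4j's standard example; it assumes a py4j GatewayServer is already running on the Java side (PySpark starts one for you automatically), so treat it as illustration, not a required install step:

```python
from py4j.java_gateway import JavaGateway

# Connect to a JVM gateway on py4j's default port (25333).
gateway = JavaGateway()
random = gateway.jvm.java.util.Random()  # instantiate a Java object from Python
print(random.nextInt(100))               # call a Java method; the result comes back to Python
```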

Now it's time to install Spark. Go to the Spark downloads page (https://spark.apache.org/downloads.html):

(screenshot: Apache Spark download page)

  • Click "Download Spark: spark-2.1.0-bin-hadoop2.7.tgz" to download.
  • After the download finishes, move spark-2.1.0-bin-hadoop2.7.tgz to the home directory.
    • Go to the home directory, open a terminal, and run the following commands:
      • $ sudo tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
      • $ export SPARK_HOME='/home/asif/spark-2.1.0-bin-hadoop2.7'
        • Adjust the path "/home/asif/spark-2.1.0-bin-hadoop2.7" to match your machine.
      • $ export PATH=$SPARK_HOME/bin:$PATH
      • $ export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
      • $ export PYSPARK_DRIVER_PYTHON="jupyter"
      • $ export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
      • $ export PYSPARK_PYTHON=python3
      • $ sudo chmod 777 spark-2.1.0-bin-hadoop2.7
      • $ cd spark-2.1.0-bin-hadoop2.7/
      • $ cd python
      • $ python3
      • The last command opens the Python interpreter; there, type "import pyspark". If it runs without an error, we are done! (A fuller smoke test is sketched after this list.)
      • Type "quit()" to exit.
  • If the above installation does not work and Python 3.6 or another version with Anaconda is preinstalled on the system, there are a few more steps. In the terminal, run the following commands:
    • $ export PATH=~/anaconda3/bin:$PATH
    • $ conda create -n py35 python=3.5 anaconda
      • 'py35' is the name of the environment
    • To activate this Python 3.5 environment:
      • $ source activate py35
      • $ python3
      • import pyspark # it should work fine now
      • quit()
    • To deactivate:
      • $ source deactivate
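
Once "import pyspark" succeeds, a short smoke test like the sketch below (the app name "SmokeTest" is illustrative, not part of the original setup) confirms that the Python-to-JVM bridge and local execution both work:

```python
from pyspark import SparkContext

# Run Spark locally; "SmokeTest" is an arbitrary application name.
sc = SparkContext("local[*]", "SmokeTest")
total = sc.parallelize(range(10)).sum()  # distribute 0..9 and sum in parallel
print(total)  # expect 45
sc.stop()
```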
