jamesbconner / data-scientists-guide-apache-spark

Best practices of using Spark for practicing data scientists in the context of a data scientist’s standard workflow.

Home Page: http://jay-oh-en.github.io/data-scientists-guide-apache-spark


The Data Scientist's Guide to Apache Spark

Join the chat at https://gitter.im/Jay-Oh-eN/data-scientists-guide-apache-spark, or launch the notebooks on Binder.

This repo contains notebook exercises for a workshop teaching the best practices of using Spark for practicing data scientists in the context of a data scientist's standard workflow. By leveraging Spark's Python and R APIs to present practical applications, the workshop lowers the barrier to entry and makes the technology much more accessible.

Materials

For the workshop (and after) we will use a Gitter chatroom to keep the conversation going: https://gitter.im/Jay-Oh-eN/data-scientists-guide-apache-spark.

Please also do not hesitate to reach out to me directly via email at jonathan@galvanize.com or on Twitter @clearspandex.

The presentation can be found on Slideshare here.

Prerequisites

Prior experience with Python and the scientific Python stack is beneficial, and knowledge of data science models and applications is preferred. This is not an introduction to machine learning or data science, but rather a course for people who are already proficient in these methods at a small scale and want to apply that knowledge in a distributed setting with Spark.

Setup

SparkR with a Notebook

  1. Install the IRkernel:
install.packages(c('rzmq','repr','IRkernel','IRdisplay'), repos = c('http://irkernel.github.io/', getOption('repos')))

IRkernel::installspec()
  2. Set environment variables:
# Example: Set this to where Spark is installed
Sys.setenv(SPARK_HOME="/Users/[username]/spark")

# Add SparkR's library location (under SPARK_HOME) to the library search path
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))

# if these two lines work, you are all set
library(SparkR)
sc <- sparkR.init(master="local")

Data

The notebooks use a few datasets.

The airline data can be found at: http://hopelessoptimism.com/static/data/airline-data

For the DonorsChoose data, you can read the documentation here and download a zip (~0.5 GB) from: http://hopelessoptimism.com/static/data/donors_choose.zip
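If you want to fetch the DonorsChoose archive programmatically, a minimal sketch using Python 3's standard library (the data/ destination directory below is just an illustrative choice) is:

import os
import zipfile
from urllib.request import urlretrieve

url = "http://hopelessoptimism.com/static/data/donors_choose.zip"
dest = "data/donors_choose.zip"

# Download the ~0.5 GB archive once, then unpack the files next to it
os.makedirs("data", exist_ok=True)
if not os.path.exists(dest):
    urlretrieve(url, dest)

with zipfile.ZipFile(dest) as zf:
    zf.extractall("data")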

IPython Console Help

Q: How can I find out all the methods that are available on DataFrame?

  • In the IPython console type sales.[TAB]

  • Autocomplete will show you all the methods that are available.

  • To find more information about a specific method, say .cov, type help(sales.cov)

  • This will display the API documentation for that method (see the sketch below).
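For example, with a small, hypothetical sales DataFrame (the name sales, its columns, and its values below are placeholders, and sqlContext is the SQLContext that pyspark creates for you):

sales = sqlContext.createDataFrame(
    [("books", 12.99), ("music", 8.50)],
    ["category", "price"])

# Type `sales.` followed by TAB to list all available DataFrame methods
help(sales.cov)   # displays the API documentation for DataFrame.cov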

Spark Documentation

Q: How can I find out more about Spark's Python API, MLlib, GraphX, Spark Streaming, deploying Spark to EC2?

  • Go to https://spark.apache.org/docs/latest

  • Navigate using tabs to the following areas in particular.

  • Programming Guide > Quick Start, Spark Programming Guide, Spark Streaming, DataFrames and SQL, MLlib, GraphX, SparkR.

  • Deploying > Overview, Submitting Applications, Spark Standalone, YARN, Amazon EC2.

  • More > Configuration, Monitoring, Tuning Guide.

References

Setup

History of Computing

Original Papers

Data Science with Spark

Distributed Computing

Spark Internals

Spark Performance

Spark Deployment

Plotly + Spark

word2Vec

The word2vec tool takes a text corpus as input and produces word vectors as output. It first constructs a vocabulary from the training text data and then learns vector representations of words. The resulting word vector file can be used as features in many natural language processing and machine learning applications.
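Spark ships its own Word2Vec implementation in MLlib. A minimal PySpark sketch (the tiny in-memory corpus below is purely illustrative, and sc is assumed to be an existing SparkContext) is:

from pyspark.mllib.feature import Word2Vec

# Toy corpus: an RDD of tokenized sentences, repeated so every word
# clears Word2Vec's default minimum-count threshold
corpus = sc.parallelize([
    "spark makes distributed computing accessible".split(" "),
    "word vectors are useful features for nlp".split(" "),
] * 100)

model = Word2Vec().setVectorSize(50).fit(corpus)

# Words closest to "spark" in the learned vector space
for word, similarity in model.findSynonyms("spark", 3):
    print(word, similarity)

Once trained, model.transform("spark") returns the vector for a single word if you want to use the embeddings directly as features.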

Theory/Application

Tools

Books on Spark

Learning Scala

Video Tutorials

Community
