
Big-Data-Challenge: Amazon-Shoppers-Product-Reviews

Background

In this project I will put my ETL skills to the test. Many of Amazon's shoppers depend on product reviews to make a purchase, and Amazon makes these review datasets publicly available. However, they are quite large and can exceed what a local machine can handle: one dataset alone contains over 1.5 million rows, and with over 40 datasets this can be quite taxing on the average local computer. My first goal for this project will be to perform the ETL process completely in the cloud and upload a DataFrame to an RDS instance. The second goal will be to use PySpark or SQL to perform a statistical analysis of selected data.

There are two levels to this project. The second level is optional.

  1. Create DataFrames to match production-ready tables from two big Amazon customer review datasets.
  2. Analyze whether reviews from Amazon's Vine program are trustworthy.

Instructions


Level 1

  • Used the provided schema to create tables in my RDS database (a sketch of one table follows).
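A minimal sketch of what that table creation could look like from Python, assuming a Postgres-backed RDS instance and psycopg2 available; the connection details are placeholders, and the columns shown are an assumption standing in for the provided schema file:

```python
import psycopg2  # assumed installed, e.g. via pip install psycopg2-binary

# Placeholder connection details; replace with the actual RDS endpoint
# and credentials.
conn = psycopg2.connect(
    host="<rds-endpoint>", port=5432,
    dbname="<database>", user="<username>", password="<password>",
)

# One illustrative table; the provided schema file is authoritative,
# so treat these columns as an assumption.
create_review_id_table = """
CREATE TABLE IF NOT EXISTS review_id_table (
    review_id      TEXT PRIMARY KEY,
    customer_id    INTEGER,
    product_id     TEXT,
    product_parent INTEGER,
    review_date    DATE
);
"""

with conn, conn.cursor() as cur:  # commits on successful exit
    cur.execute(create_review_id_table)
conn.close()
```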

  • Created two separate Google Colab notebooks and extracted two datasets from the review dataset list, one into each notebook.


Note: It is possible to ETL both data sources in a single notebook, but due to the large data sizes, it will be easier to work with these S3 data sources in two separate Colab notebooks.
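Before extracting anything, each notebook needs a Spark session. A minimal Colab setup sketch; the install commands and the Postgres JDBC driver version are assumptions that may need adjusting to the current runtime:

```python
# In Colab, Java and Spark typically have to be installed first, e.g.:
#   !apt-get -qq install -y openjdk-11-jdk-headless
#   !pip -q install pyspark

from pyspark.sql import SparkSession

# spark.jars.packages pulls the Postgres JDBC driver needed for the
# RDS load later; the version pin is an assumption.
spark = (
    SparkSession.builder
    .appName("amazon-reviews-etl")
    .config("spark.jars.packages", "org.postgresql:postgresql:42.2.16")
    .getOrCreate()
)
```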

  • Made sure to handle the header correctly. Reading a file without the header parameter leaves the column headers mixed in with the table rows (see the sketch below).
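A sketch of the extraction with the header handled explicitly; the URL follows the pattern from the dataset readme and stands in for whichever dataset was actually chosen:

```python
from pyspark import SparkFiles

# Example dataset URL (an assumption; substitute your chosen dataset).
url = ("https://s3.amazonaws.com/amazon-reviews-pds/tsv/"
       "amazon_reviews_us_Books_v1_02.tsv.gz")
spark.sparkContext.addFile(url)

# The files are tab-separated; header=True keeps the column names
# out of the data rows.
df = spark.read.csv(
    SparkFiles.get("amazon_reviews_us_Books_v1_02.tsv.gz"),
    sep="\t",
    header=True,
    inferSchema=True,
)
df.show(5)
```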

  • For each notebook (one dataset per notebook), completed the following (all three steps are sketched after this list):

    • Counted the number of records (rows) in the dataset.

    • Transformed the dataset to fit the tables in the schema file. Made sure the DataFrames matched in data type and in column name.

    • Loaded the DataFrames that correspond to tables into an RDS instance. Note: this process can take up to 10 minutes for each, so made sure everything was correct before uploading.
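A sketch of all three steps for one table, assuming the DataFrame df from the extraction above; the table and column names are assumptions in place of the provided schema, and the JDBC details are placeholders:

```python
from pyspark.sql.functions import to_date

# 1. Count the records.
print(f"Record count: {df.count()}")

# 2. Transform: select and cast columns to match one target table.
review_id_df = df.select(
    "review_id", "customer_id", "product_id", "product_parent",
    to_date("review_date", "yyyy-MM-dd").alias("review_date"),
)

# 3. Load into the RDS instance over JDBC (this is the slow step).
jdbc_url = "jdbc:postgresql://<rds-endpoint>:5432/<database>"
config = {
    "user": "<username>",
    "password": "<password>",
    "driver": "org.postgresql.Driver",
}
review_id_df.write.jdbc(
    url=jdbc_url, table="review_id_table", mode="append", properties=config
)
```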

Level 2 (Optional)

In Amazon's Vine program, reviewers receive free products in exchange for reviews.


Amazon has several policies to reduce the bias of its Vine reviews: https://www.amazon.com/gp/vine/help?ie=UTF8.

But are Vine reviews truly trustworthy? My task was to investigate whether Vine reviews are free of bias, using either PySpark or, for an extra challenge, SQL to analyze the data.

  • If I chose to use SQL, I would first have used Spark on Colab to extract and transform the data and load it into a SQL table on my RDS instance, then performed my analysis with SQL queries on RDS.

  • While there are no hard requirements for the analysis, I would have had to take steps to reduce noisy data, e.g., filtering for reviews that meet a minimum number of helpful votes, total votes, or both (a PySpark sketch follows).
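A minimal PySpark sketch of that comparison; the 20-vote floor and 50% helpfulness ratio are assumptions chosen only to illustrate the filtering, not requirements from the assignment:

```python
from pyspark.sql import functions as F

# Reduce noise: keep only reviews with enough votes to be meaningful.
filtered = df.filter(
    (F.col("total_votes") >= 20)
    & (F.col("helpful_votes") / F.col("total_votes") >= 0.5)
)

# Compare Vine ('Y') vs. non-Vine ('N') reviews on volume and
# average star rating.
filtered.groupBy("vine").agg(
    F.count("*").alias("review_count"),
    F.avg("star_rating").alias("avg_star_rating"),
).show()
```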

References

Amazon Customer Reviews Dataset. (n.d.). Retrieved April 8, 2021, from https://s3.amazonaws.com/amazon-reviews-pds/readme.html

