There are 30 repositories under the spark-dataframes topic.
PySpark-Tutorial provides basic algorithms using PySpark
Plain Stock Close-Price Prediction via Graves LSTM RNNs
Big Data Modeling, MapReduce, Spark, PySpark @ Santa Clara University
Data cleaning, pre-processing, and analytics on a million movies using Spark and Scala.
Apache Spark is a fast, in-memory data processing engine with elegant and expressive development APIs that let data workers efficiently execute streaming, machine learning, or SQL workloads requiring fast iterative access to datasets. This project contains sample Spark programs written in Scala.
This repository contains Spark, MLlib, PySpark and Dataframes projects
Various data stream/batch process demo with Apache Scala Spark 🚀
Create Data Lake on AWS S3 to store dimensional tables after processing data using Spark on AWS EMR cluster
Apache Spark Basics - Java Examples
A library having Java and Scala examples for Spark 2.x
Spark BigQuery Parallel
This project uses PySpark DataFrames and PySpark RDDs to implement item-based collaborative filtering. By computing cosine similarity scores, or by identifying movies with the highest number of shared viewers, the system recommends the 10 movies most similar to a given target movie, aligned with users' preferences.
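The cosine-similarity step at the heart of an item-based recommender can be sketched in plain Python (the ratings data and movie IDs below are hypothetical; the repository itself works on PySpark DataFrames and RDDs):

```python
from math import sqrt

# Hypothetical user -> movie ratings, standing in for the real dataset.
ratings = {
    "alice": {"m1": 5.0, "m2": 3.0, "m3": 4.0},
    "bob":   {"m1": 4.0, "m2": 5.0},
    "carol": {"m2": 4.0, "m3": 5.0},
}

def cosine_similarity(movie_a, movie_b):
    """Cosine similarity computed over users who rated both movies."""
    pairs = [(r[movie_a], r[movie_b]) for r in ratings.values()
             if movie_a in r and movie_b in r]
    if not pairs:
        return 0.0  # no shared viewers
    dot = sum(a * b for a, b in pairs)
    norm_a = sqrt(sum(a * a for a, _ in pairs))
    norm_b = sqrt(sum(b * b for _, b in pairs))
    return dot / (norm_a * norm_b)

print(round(cosine_similarity("m1", "m2"), 4))  # 0.9374
```

Ranking every candidate movie by this score against the target movie and keeping the top 10 yields the recommendation list.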
Use this project to join data from multiple CSV files. The project currently supports one-to-one and one-to-many joins, and also shows how to use a Kafka producer efficiently with Spark.
Data Science and Engineering project - Programming for Big Data @ Simon Fraser University (SFU)
Big Data - Split a large CSV file into N smaller ones and save them into the local disk
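The splitting step can be sketched in plain Python with the `csv` module (file names and part count are assumptions; a file too big for memory would stream rows instead of reading them all at once):

```python
import csv
import os

def split_csv(src_path, out_dir, n_parts):
    """Split src_path into up to n_parts CSVs, repeating the header in each."""
    os.makedirs(out_dir, exist_ok=True)
    with open(src_path, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    per_part = -(-len(body) // n_parts)  # ceiling division
    paths = []
    for i in range(n_parts):
        chunk = body[i * per_part:(i + 1) * per_part]
        if not chunk:
            break
        path = os.path.join(out_dir, f"part_{i}.csv")
        with open(path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(chunk)
        paths.append(path)
    return paths
```

In Spark the same effect is usually achieved with `df.repartition(n).write.csv(out_dir)`, though Spark chooses its own part boundaries.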
Predict the success of Kickstarter campaigns using machine learning. Analyze project data including financial goals, pledge amounts, categories, and outcomes. Perform data cleaning, queries, and visualizations, and build models to forecast campaign success, helping entrepreneurs optimize their funding strategies.
Implementation of Hadoop and Spark
Predict Current Property Investment opportunities using Data Analysis (Big Data Spark ML)
This is our final project for SFU's CMPT 353 taught by Greg Baker during Summer 2023
Spark and Hadoop exercises from a cloud computing course - AUT, Fall 1402-1403
PySpark serves as a Python interface to Apache Spark, enabling the execution of Python and SQL-like instructions to manipulate and analyze data within a distributed processing framework.
Repository for Spark structured streaming use case implementations.
Calculate user sessions, and statistics on top of them, for an imaginary e-commerce site using Spark SQL and aggregations.
This repository contains implementations of a wide variety of big data projects across different applications of NoSQL databases, Spark, data pipelines, and MapReduce. These include university projects as well as projects built out of personal interest in big data.
MapReduce / Spark / DataFrames queries for a natural disaster dataset.
A collection of small projects exploring PySpark features and functionality including packages and modules, algorithms, and general data science techniques.
This Repo contains analysis of large data using Spark
UMSI-Bosch Manufacturing Line Failure Analysis
This series explores the basics of Apache Spark along with practical applications of Spark, PySpark & SparkSQL
This repo contains my learning and practice Zeppelin notebooks on Spark using Scala. The notebooks can be used as template code for most ML algorithms and extended for more complex problems.