
dataEng-Dphi-DataTalks


Homeworks Progress:

| Week | Module | Progress | Deadline | Link |
|------|--------|----------|----------|------|
| 01 | Introduction & Prerequisites | ⌛ ✔️ | 15/07/2022 | Intro |
| 02 | Data ingestion | ⌛ ✔️ | 15/07/2022 | Data ingestion |
| 03 | Data Warehouse | ⌛ ✔️ | 15/07/2022 | Data Warehouse |
| 04 | Analytics engineering | ⌛ ✔️ | 15/07/2022 | Analytics engineering |
| 05 | Batch processing | ⌛ ✔️ | 15/07/2022 | Batch processing |
| 06 | Streaming | ⌛ ✔️ | 15/07/2022 | Streaming |
| 07 | Project | ⏳ ✖️ | 21/07/2022 | Project |
| 08 | Project | ⏳ ✖️ | 21/07/2022 | Project |
| 09 | Project | ⏳ ✖️ | 21/07/2022 | Project |

Hint


⌛ ✔️ = Done || ⏳ ✖️ = Not done

Made with 💟 by Ayoub Berdeddouch


About the Course

Overview

Welcome to Data Engineering Bootcamp :)

We are doing this bootcamp with the support of DataTalks.Club. The content was created by renowned data leaders. Many thanks to DataTalks.Club and the instructors for creating the material and for allowing us to put this bootcamp together. Check them out!

In this bootcamp, there will be:

  • 6 Weeks of Learning Content - released every Friday (detailed schedule here)
  • Homeworks for practice
  • 1 Graded Hands-on Assignment

Self-paced mode

  • All the materials of the course are freely available, so you can take the course at your own pace
  • Follow the suggested syllabus (see below) week by week

Learning Modules:

  • Learning modules will be released every Friday, starting from 13th May at 7:00 PM CET / 10:30 PM IST.
  • Please keep checking this space for new learning modules.
  • To accommodate learners from across the globe, the learning modules are offered in an offline format instead of live sessions, so you can learn on your own schedule.
  • If you face any issues while learning, feel free to drop a message in the #help channel on Discord.

Assignment Guidelines

  • There will be 1 Final Assignment, which is mandatory to attempt for bootcamp completion.
  • Homeworks are for practice and we recommend working on them; however, they carry no weight toward the certificate.
  • Certificate: to be eligible for the certificate, you must submit the assignment with a total score of at least 60%.

Happy Learning :)

Syllabus

  • Big Picture
    • Introduction to all instructors
    • What to expect in this course
    • Architecture / Data Flow
    • What do we want to build
  • GCP
    • Intro to GCP - Concepts: IAM, Cloud Storage, BigQuery (relevant components)
    • What is GCP and why we need it
  • Docker
    • What is Docker
    • Running Postgres locally with Docker
    • Putting some data for testing into the local Postgres with Python (see the ingestion sketch after this syllabus list)
    • Packaging this script in Docker
    • Running Postgres and the script in one network
    • Docker Compose: running pgAdmin and Postgres together with docker-compose
  • Data and SQL
    • Dataset: Taxi Rides NY dataset
    • Experimentation: Taking a first look at the data
    • Relevant SQL Queries (Refresher): group by, joins, window function, union
  • Terraform
    • Intro to Terraform - Concepts
    • Setting up GCP with TF: Storage, BigQuery
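
The Python ingestion step referenced in the Docker section above can be sketched roughly as follows. This is a minimal sketch assuming a local Postgres started via Docker; the credentials, database name, table name, and CSV path are illustrative assumptions, not values taken from the course materials.

```python
# Minimal sketch: load the NY taxi CSV into a local Postgres running in Docker.
# All connection details below are assumptions for a typical local setup.
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical credentials, set when starting the Postgres container
engine = create_engine("postgresql://root:root@localhost:5432/ny_taxi")

# Read the dataset in chunks so a laptop can handle the full file
csv_path = "yellow_tripdata_2021-01.csv"  # assumed local copy of the dataset
for i, chunk in enumerate(pd.read_csv(csv_path, chunksize=100_000)):
    chunk.to_sql("yellow_taxi_data", engine, if_exists="append", index=False)
    print(f"inserted chunk {i}")
```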

Goal: Orchestrating a job to ingest web data to a Data Lake in its raw form.

Instructor: Sejal & Alexey

  • Data Lake (GCS)

    • Basics: What is a Data Lake?
    • ELT vs. ETL
    • Alternatives to components (S3/HDFS, Redshift, Snowflake etc.)
  • Orchestration (Airflow)

    • Basics
      • What is an Orchestration Pipeline?
      • What is a DAG?
  • Demo:

    • Setup:
      • Docker pre-reqs (refresher)
      • Airflow env with Docker
    • Data ingestion DAG - Demo (30 mins):
      • Extraction: Download and unpack the data
      • Pre-processing: Convert this raw data to parquet, partition (raw/yy/mm/dd)
      • Load: Raw data to GCS
      • Exploration: External Table for BigQuery -- Taking a look at the data
      • Further Enhancements: Transfer Service (AWS -> GCP)
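
A minimal skeleton of such an ingestion DAG might look like the following. The DAG id, schedule, and task bodies are placeholders, not the course's actual code; each callable would hold the extract/transform/load logic outlined above.

```python
# Sketch of the ingestion DAG: download -> parquet -> upload to GCS.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def download_data():
    ...  # e.g. fetch the dataset archive and unpack it

def to_parquet():
    ...  # e.g. convert raw data to parquet, partitioned by raw/yy/mm/dd

def upload_to_gcs():
    ...  # e.g. upload the parquet files to the Data Lake bucket

with DAG(
    dag_id="data_ingestion_gcs",          # hypothetical DAG id
    start_date=datetime(2022, 5, 13),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=download_data)
    transform = PythonOperator(task_id="to_parquet", python_callable=to_parquet)
    load = PythonOperator(task_id="load_to_gcs", python_callable=upload_to_gcs)

    extract >> transform >> load  # linear Extract -> Transform -> Load flow
```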

Goal: Structuring data into a Data Warehouse

Instructor: Ankush

  • Data warehouse
    • What is a data warehouse solution
    • What is BigQuery, why is it so fast, cost of BQ (5 min)
    • Partitioning and clustering, automatic re-clustering (10 min) -- see the sketch after this list
    • Pointing to a location in Google Storage (5 min)
    • Loading data to BigQuery & PG (10 min) -- using an Airflow operator?
    • BQ best practices
    • Misc: BQ Geo location, BQ ML
    • Alternatives (Snowflake/Redshift)
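
To make the partitioning/clustering idea concrete, here is a small sketch using the google-cloud-bigquery Python client. The dataset, table, and column names are assumptions based on the taxi dataset, not the course's exact code; partitioning and clustering are what cut scanned bytes (and cost) in BQ.

```python
# Sketch: create a partitioned + clustered table from an external table.
from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are already configured

# Hypothetical dataset/table names; DATE partitioning by pickup time,
# clustering by a column that queries frequently filter on.
sql = """
CREATE OR REPLACE TABLE my_dataset.yellow_taxi_partitioned
PARTITION BY DATE(tpep_pickup_datetime)
CLUSTER BY VendorID AS
SELECT * FROM my_dataset.external_yellow_taxi;
"""
client.query(sql).result()  # .result() blocks until the job finishes
```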

Goal: Transforming Data in DWH to Analytical Views

Instructor: Victoria

  • Basics
    • What is DBT?
    • ETL vs ELT
    • Data modeling
    • Where DBT fits in the tech stack
  • Usage (combination of coding + theory) (1:30-1:45)
    • Anatomy of a dbt model: written code vs. compiled sources
    • Materialisations: table, view, incremental, ephemeral
    • Seeds
    • Sources and ref
    • Jinja and Macros
    • Tests
    • Documentation
    • Packages
    • Deployment: local development vs production
    • DBT cloud: scheduler, sources and data catalog (Airflow)
  • Google Data Studio -> Dashboard
  • Extra knowledge:
    • DBT cli (local)

Goal: Processing data in batch with Spark

Instructor: Alexey

  • Distributed processing (Spark) (40 + ? minutes)
    • What is Spark, what a Spark cluster is (5 mins)
    • Explaining the potential of Spark (10 mins)
    • What broadcast variables, partitioning, and shuffles are (10 mins) -- see the PySpark sketch after this list
    • Pre-joining data (10 mins)
    • Use case
    • What else is out there (Flink) (5 mins)
  • Extending Orchestration env (airflow) (30 minutes)
    • Big query on airflow (10 mins)
    • Spark on airflow (10 mins)
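
A minimal PySpark sketch of the ideas above: reading partitioned parquet, a broadcast join to pre-join a small lookup table, and a groupBy that triggers a shuffle. The paths and column names are assumptions based on the taxi dataset, not the course's exact code.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-demo").getOrCreate()

trips = spark.read.parquet("raw/2021/01/")            # assumed partition layout
zones = spark.read.csv("taxi_zone_lookup.csv", header=True)

# Broadcasting the small lookup table avoids shuffling the big trips table
joined = trips.join(F.broadcast(zones),
                    trips.PULocationID == zones.LocationID)

# groupBy causes a shuffle: rows with the same key move to the same executor
daily = (joined
         .groupBy(F.to_date("tpep_pickup_datetime").alias("day"), "Zone")
         .agg(F.count("*").alias("trips"),
              F.avg("total_amount").alias("avg_amount")))

daily.show()
```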

Goal: Processing data in real time with Kafka

Instructor: Ankush

  • Basics
    • What is Kafka
    • Internals of Kafka, brokers
    • Partitioning of a Kafka topic
    • Replication of a Kafka topic
  • Consumer-producer
  • Schemas (avro)
  • Streaming
    • Kafka streams
  • Kafka connect
  • Alternatives (PubSub/Pulsar)
  • Putting everything we learned into practice (see the producer/consumer sketch below)
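
A tiny producer/consumer sketch with the kafka-python library, to make the broker/topic/consumer concepts concrete. The topic name and broker address are placeholders, and the course may use different tooling (e.g. Avro with a schema registry).

```python
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",          # assumed local broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("rides", {"vendor_id": 1, "total_amount": 12.5})
producer.flush()  # make sure the message actually leaves the client buffer

consumer = KafkaConsumer(
    "rides",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",                # read the topic from the start
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.partition, message.offset, message.value)
    break  # stop after one message in this sketch
```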

  • Upcoming buzzwords

    • Delta Lake/Lakehouse
      • Databricks
      • Apache Iceberg
      • Apache Hudi
    • Data mesh
    • ksqlDB
    • Streaming analytics
    • MLOps


Overview

Architecture diagram

Technologies

  • Google Cloud Platform (GCP): Cloud-based auto-scaling platform by Google
    • Google Cloud Storage (GCS): Data Lake
    • BigQuery: Data Warehouse
  • Terraform: Infrastructure-as-Code (IaC)
  • Docker: Containerization
  • SQL: Data Analysis & Exploration
  • Airflow: Pipeline Orchestration
  • DBT: Data Transformation
  • Spark: Distributed Processing
  • Kafka: Streaming

Prerequisites

To get the most out of this course, you should be comfortable with coding and the command line, and know the basics of SQL. Prior experience with Python will be helpful, but you can pick up Python relatively quickly if you have experience with other programming languages.

Prior experience with data engineering is not required.


Instructors
