deeplearning.ai

Completed Course Projects for deeplearning.ai

Courses

Deep Learning Specialization

  1. Neural Networks and Deep Learning - Review of basic neural network structure with NumPy
  2. Improving Deep Neural Networks: Hyperparameter Tuning, Regularization, and Optimization - Weight initialization, L2 regularization, dropout, verifying backpropagation with gradient checking, optimizers, hyperparameter selection, and an introduction to TensorFlow v1.x (a gradient-checking sketch follows this list)
  3. Structuring Machine Learning Projects (no coding assignments) - Choosing performance metrics and goal values, selecting and splitting training/validation/test sets, and identifying the most impactful next steps for improvement (e.g., data quality, quantity, model complexity)
  4. Convolutional Neural Networks - Introduction to Keras, TensorFlow data generators, emotion recognition with ResNet, car and general object recognition with YOLO, one-shot learning facial recognition with Inception, and generative adversarial models for style transfer
  5. Sequence Models - Recurrent neural networks (RNNs), gated recurrent units (GRUs), long short-term memory networks (LSTMs), attention, character-level models, generating linguistic and musical sequences, debiasing embeddings, neural machine translation, and trigger word detection
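
As a small illustration of the kind of exercise in the second course, the sketch below compares an analytic gradient against a centered finite-difference estimate in NumPy. The toy loss and function names are my own, not the assignment's.

```python
import numpy as np

def loss(theta, x, y):
    """Toy squared-error loss for a single linear unit: (theta . x - y)^2."""
    return (np.dot(theta, x) - y) ** 2

def analytic_grad(theta, x, y):
    """Closed-form gradient of the toy loss with respect to theta."""
    return 2 * (np.dot(theta, x) - y) * x

def numerical_grad(theta, x, y, eps=1e-7):
    """Centered finite-difference approximation of the same gradient."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        grad[i] = (loss(plus, x, y) - loss(minus, x, y)) / (2 * eps)
    return grad

theta = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 3.0, -2.0])
y = 0.7

g_analytic = analytic_grad(theta, x, y)
g_numeric = numerical_grad(theta, x, y)

# A relative difference around 1e-7 or smaller indicates the analytic gradient is correct.
diff = np.linalg.norm(g_analytic - g_numeric) / (np.linalg.norm(g_analytic) + np.linalg.norm(g_numeric))
print("relative difference:", diff)
```
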
Natural Language Processing Specialization

  1. NLP with Classification and Vector Spaces - Sentiment analysis with basic logistic regression and subsequently naive Bayes, word embeddings, and machine translation with locality-sensitive hashing
  2. NLP with Probabilistic Models - Autocorrect with minimum edit distance and Bayes' theorem, POS tagging with hidden Markov models and the Viterbi algorithm, autocomplete with n-gram models, and generating word embeddings with a continuous bag of words (CBoW) model (see the minimum edit distance sketch after this list)
  3. NLP with Sequence Models - Sentiment analysis with a neural network (with Google Brain's trax ML library), generating fake Shakespeare with GRUs, evaluating perplexity, named-entity recognition with LSTMs, and one-shot learning to recognize duplicate questions with Siamese networks
  4. NLP with Attention Models - Neural machine translation with RNNs, scaled dot-product attention, and Minimum Bayes Risk decoding; summarization with a transformer model and causal attention; question-answering with BERT and subsequently T5; and chatbot development with the efficient attention of the Reformer language model
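
To give a flavor of these assignments, here is a minimal sketch of minimum edit distance computed with dynamic programming. The cost convention (substitution cost of 2) follows the usual Levenshtein-style setup; the function name and example are my own rather than the assignment's.

```python
def min_edit_distance(source, target, ins_cost=1, del_cost=1, sub_cost=2):
    """Dynamic-programming edit distance between two strings.

    D[i][j] holds the cheapest way to turn source[:i] into target[:j]
    using insertions, deletions, and substitutions.
    """
    m, n = len(source), len(target)
    D = [[0] * (n + 1) for _ in range(m + 1)]

    # Base cases: transforming to or from the empty string.
    for i in range(1, m + 1):
        D[i][0] = i * del_cost
    for j in range(1, n + 1):
        D[0][j] = j * ins_cost

    for i in range(1, m + 1):
        for j in range(1, n + 1):
            substitute = 0 if source[i - 1] == target[j - 1] else sub_cost
            D[i][j] = min(
                D[i - 1][j] + del_cost,        # delete from source
                D[i][j - 1] + ins_cost,        # insert into source
                D[i - 1][j - 1] + substitute,  # substitute (or keep a match)
            )
    return D[m][n]

print(min_edit_distance("play", "stay"))  # 4: two substitutions at cost 2 each
```
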
Machine Learning Engineering for Production (MLOps) Specialization

  1. Introduction to Machine Learning in Production (ungraded labs only) - Deployment patterns, monitoring ML systems and data pipelines, prioritizing vectors of improvement, and handling label ambiguity.
  2. Data Lifecycle in Production - Collecting and managing data for use in production ML models. Fairness in collection; identifying and responding to data drift and concept drift; TensorFlow tools (TFX & TFDV) for schema inference, data validation, anomaly detection, and feature engineering; feature selection algorithms; versioning datasets with ML Metadata; improving performance with weak supervision, active learning, and data augmentation.
  3. Modeling Pipelines in Production (ungraded labs only) - Evaluating and optimizing production models. AutoML (hyperparameter tuning, neural architecture search), performance improvement techniques (dimensionality reduction, quantization, pruning, data & model parallelism, knowledge distillation), model analysis using TFMA (debugging, sensitivity analysis for mitigating adversarial attacks, residual analysis, model fairness and remediation), explainability and interpretability (partial dependence plots, Shapley values and SHAP, concept activation vectors, LIME, and AI Explanations).
  4. Deploying Machine Learning Models in Production (ungraded labs only) - Deployment patterns for ML models. Use cases and toolsets for edge deployment (TensorFlow Lite; see the quantization sketch after this list), containerized deployment and orchestration with Kubernetes, scaling infrastructure with Kubeflow, serving with TensorFlow Serving, caching strategies, experiment tracking, model versioning, CI/CD pipelines for ML using Istio, progressive delivery, model monitoring with TensorBoard, and GDPR/CCPA compliance.
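
As one concrete example of the edge-deployment tooling above, the sketch below applies default post-training quantization with the TensorFlow Lite converter. It is not taken from a course lab: the tiny Keras model is only a placeholder, and it assumes TensorFlow 2.x is installed.

```python
import tensorflow as tf

# Placeholder model standing in for a trained production model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TFLite with default post-training quantization,
# which shrinks weights (typically to 8-bit) for edge deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantized model size: {len(tflite_model)} bytes")
```
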

The general structure of these assignments involved completing portions of Python functions inside Jupyter notebooks. I adapted some of the original non-assignment code so that the notebooks run in my local environment. Many of the MLOps exercises were based on Qwiklabs explorations using the Google Cloud Console. The courses also included written tests, which are not provided here.

Some large model files and datasets required to re-run certain notebooks are not included in this repository due to file size limits, but I may be able to help locate them elsewhere if needed.