Chao-Jiang / tensorflow_cookbook

Code from the TensorFlow Cookbook


By Nick McClure


Ch 1: Getting Started with TensorFlow

This chapter introduces the main objects and concepts in TensorFlow. We also show how to access the data sources used throughout the book and provide additional resources for learning about TensorFlow.

  1. General Outline of TF Algorithms
  • Here we introduce TensorFlow and the general outline of how most TensorFlow algorithms work.
  2. Creating and Using Tensors
  • How to create and initialize tensors in TensorFlow. We also depict how these operations appear in TensorBoard.
  3. Using Variables and Placeholders
  • How to create and use variables and placeholders in TensorFlow (see the sketch after this list). We also depict how these operations appear in TensorBoard.
  4. Working with Matrices
  • Understanding how TensorFlow works with matrices is crucial to understanding how its algorithms work.
  5. Declaring Operations
  • How to use various mathematical operations in TensorFlow.
  6. Implementing Activation Functions
  • Activation functions are the non-linear functions that TensorFlow provides for use in your algorithms.
  7. Working with Data Sources
  • Here we show how to access all the data sources required for the book, with links describing each source and where it comes from.
  8. Additional Resources
  • Mostly official resources and papers. The papers are TensorFlow papers or deep learning resources.
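
As a quick illustration of these objects, here is a minimal sketch (assuming the TensorFlow 1.x API used by the book; the shapes and values are made up) that builds a constant tensor, a variable, and a placeholder, then evaluates them in a session:

```python
import tensorflow as tf

# A fixed tensor, a trainable variable, and a placeholder fed at run time.
a = tf.constant([[1., 2.], [3., 4.]])
W = tf.Variable(tf.random_normal([2, 2]))
x = tf.placeholder(tf.float32, shape=[2, 2])

# Operations become nodes in the computational graph.
y = tf.matmul(W, x) + a

with tf.Session() as sess:
    # Variables must be explicitly initialized before use.
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1., 0.], [0., 1.]]}))
```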

Ch 2: The TensorFlow Way

Having established the basic objects and methods in TensorFlow, we now cover the components that make up TensorFlow algorithms. We start by introducing computational graphs, then move on to loss functions and back propagation. We end by creating a simple classifier and showing how to evaluate regression and classification algorithms.

  1. One Operation as a Computational Graph
  • We show how to create an operation on a computational graph and how to visualize it using TensorBoard.
  2. Layering Nested Operations
  • We show how to create multiple operations on a computational graph and how to visualize them using TensorBoard.
  3. Working with Multiple Layers
  • Here we extend the computational graph to create multiple layers and show how they appear in TensorBoard.
  4. Implementing Loss Functions
  • In order to train a model, we must be able to evaluate how well it is doing, which is the job of the loss function. We plot various loss functions and discuss the benefits and limitations of each.
  5. Implementing Back Propagation
  • Here we show how to use loss functions to iterate through data and back propagate errors for regression and classification (see the training sketch after this list).
  6. Working with Stochastic and Batch Training
  • TensorFlow makes it easy to use both batch and stochastic training. We show how to implement both and discuss the benefits and limitations of each.
  7. Combining Everything Together
  • We now combine everything we have learned and create a simple classifier.
  8. Evaluating Models
  • Any model is only as good as its evaluation. Here we show two examples: (1) evaluating a regression algorithm and (2) evaluating a classification algorithm.
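
As a minimal end-to-end sketch of these components (TensorFlow 1.x API; the data is fabricated, so the exact values are illustrative only), the snippet below trains a single coefficient with an L2 loss, back propagation via a gradient-descent optimizer, and random batches:

```python
import numpy as np
import tensorflow as tf

# Fabricated data: y is roughly 10 * x.
x_vals = np.random.normal(1.0, 0.1, 100).astype(np.float32)
y_vals = 10.0 * x_vals + np.random.normal(0.0, 0.5, 100).astype(np.float32)

x_data = tf.placeholder(tf.float32, shape=[None])
y_target = tf.placeholder(tf.float32, shape=[None])
A = tf.Variable(0.0)  # the single coefficient we learn

output = x_data * A
loss = tf.reduce_mean(tf.square(output - y_target))            # L2 loss
train_step = tf.train.GradientDescentOptimizer(0.02).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        batch = np.random.choice(100, size=20)                 # batch training
        sess.run(train_step, {x_data: x_vals[batch],
                              y_target: y_vals[batch]})
    print(sess.run(A))  # should move toward 10
```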

Ch 3: Linear Regression

Here we show how to implement various linear regression techniques in TensorFlow. The first two sections solve linear regression with standard matrix methods; the remaining six sections implement various types of regression using computational graphs.

  1. Using the Matrix Inverse Method
  • How to solve a 2D regression with a matrix inverse in TensorFlow (see the sketch after this list).
  2. Implementing a Decomposition Method
  • Solving a 2D linear regression with Cholesky decomposition.
  3. Learning the TensorFlow Way of Linear Regression
  • Linear regression solved by iterating through a computational graph with L2 loss.
  4. Understanding Loss Functions in Linear Regression
  • L2 vs. L1 loss in linear regression. We discuss the benefits and limitations of both.
  5. Implementing Deming Regression (Total Regression)
  • Deming (total) regression implemented in TensorFlow by changing the loss function.
  6. Implementing Lasso and Ridge Regression
  • Lasso and ridge regression are ways of regularizing the coefficients. We implement both in TensorFlow by changing the loss functions.
  7. Implementing Elastic Net Regression
  • Elastic net is a regularization technique that combines the L1 and L2 penalties on the coefficients. We show how to implement it in TensorFlow.
  8. Implementing Logistic Regression
  • We implement logistic regression by adding an activation function to our computational graph.
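
As a sketch of the matrix inverse method from the first section (TensorFlow 1.x API; the synthetic line y = 3x + 1 is an illustrative choice), we solve the normal equations w = (AᵀA)⁻¹Aᵀb directly on the graph:

```python
import numpy as np
import tensorflow as tf

# Synthetic data for y = 3x + 1 plus noise.
x_vals = np.linspace(0, 10, 100)
y_vals = 3.0 * x_vals + 1.0 + np.random.normal(0.0, 1.0, 100)

# Design matrix with a bias column of ones; targets as a column vector.
A = np.column_stack((x_vals, np.ones(100))).astype(np.float32)
b = y_vals.reshape(100, 1).astype(np.float32)

# w = (A^T A)^(-1) A^T b, the closed-form least-squares solution.
A_t = tf.transpose(A)
w = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(A_t, A)), A_t), b)

with tf.Session() as sess:
    slope, intercept = sess.run(w)
    print(slope, intercept)  # near 3.0 and 1.0
```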

Ch 4: Support Vector Machines

This chapter shows how to implement various SVM methods with TensorFlow. We first create a linear SVM and show how it can also be used for regression. We then introduce kernels (the Gaussian RBF kernel) and show how to use them to separate non-linear data. We finish by extending non-linear SVMs to work with multiple classes.

  1. Introduction
  • We introduce the concept of SVMs and how we will go about implementing them in the TensorFlow framework.
  2. Working with Linear SVMs
  • We create a linear SVM to separate I. setosa based on sepal length and petal width in the Iris data set.
  3. Reduction to Linear Regression
  • The heart of SVMs is separating classes with a line. We tweak the algorithm slightly to perform SVM regression.
  4. Working with Kernels in TensorFlow
  • In order to extend SVMs to non-linear data, we explain and show how to implement different kernels in TensorFlow (see the kernel sketch after this list).
  5. Implementing Non-Linear SVMs
  • We use the Gaussian (RBF) kernel to separate non-linear classes.
  6. Implementing Multi-class SVMs
  • SVMs are inherently binary predictors. We show how to extend them with a one-vs-all strategy in TensorFlow.
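
As a sketch of the kernel idea (TensorFlow 1.x API; the gamma value and random points are illustrative, not the book's settings), the snippet below computes a Gaussian (RBF) kernel matrix over a batch of 2D points:

```python
import numpy as np
import tensorflow as tf

x_data = tf.placeholder(tf.float32, shape=[None, 2])
gamma = tf.constant(-25.0)  # illustrative kernel width

# Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2.
sq_norms = tf.reduce_sum(tf.square(x_data), axis=1, keep_dims=True)
sq_dists = (sq_norms
            - 2.0 * tf.matmul(x_data, tf.transpose(x_data))
            + tf.transpose(sq_norms))

# Gaussian (RBF) kernel matrix: K[i, j] = exp(gamma * ||x_i - x_j||^2).
kernel = tf.exp(gamma * sq_dists)

with tf.Session() as sess:
    pts = np.random.rand(4, 2).astype(np.float32)
    print(sess.run(kernel, {x_data: pts}))  # 4x4, ones on the diagonal
```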

Ch 5: Nearest Neighbor Methods

Nearest neighbor methods are a very popular family of ML algorithms. We show how to implement k-Nearest Neighbors, weighted k-Nearest Neighbors, and k-Nearest Neighbors with mixed distance functions. We also show how to use the Levenshtein (edit) distance in TensorFlow to compute the distance between strings. We end the chapter by using k-Nearest Neighbors for categorical prediction on the MNIST handwritten digit data set.

  1. Introduction
  • We introduce the concepts and methods needed for performing k-Nearest Neighbors in TensorFlow.
  2. Working with Nearest Neighbors
  • We create a nearest neighbor algorithm that tries to predict housing values (regression); see the sketch after this list.
  3. Working with Text-Based Distances
  • In order to use a distance function on text, we show how to use edit distances in TensorFlow.
  4. Computing Mixed Distance Functions
  • Here we scale the distance function by the standard deviation of each input feature for k-Nearest Neighbors.
  5. Using Address Matching
  • We use a mixed distance function to match addresses: numerical distance for zip codes and string edit distance for street names, which are allowed to have typos.
  6. Using Nearest Neighbors for Image Recognition
  • The MNIST digit image collection is a great data set for illustrating how to perform k-Nearest Neighbors on an image classification task.
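
Here is a minimal k-Nearest Neighbors regression sketch (TensorFlow 1.x API; the random feature matrices stand in for the book's housing data): L1 distances between test and training points, the k closest found with tf.nn.top_k, and an unweighted average of their targets:

```python
import numpy as np
import tensorflow as tf

# Stand-in data: 100 training points and 5 test points with 4 features.
x_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.rand(100, 1).astype(np.float32)
x_test = np.random.rand(5, 4).astype(np.float32)
k = 4

x_tr = tf.placeholder(tf.float32, [None, 4])
y_tr = tf.placeholder(tf.float32, [None, 1])
x_te = tf.placeholder(tf.float32, [None, 4])

# L1 distance between every test point and every training point.
# Broadcasts (num_test, 1, 4) against (num_train, 4) -> (num_test, num_train).
dists = tf.reduce_sum(tf.abs(tf.expand_dims(x_te, 1) - x_tr), axis=2)

# The k smallest distances are the k largest negated distances.
_, top_idx = tf.nn.top_k(tf.negative(dists), k=k)

# Unweighted kNN regression: average the k nearest targets.
pred = tf.reduce_mean(tf.gather(tf.squeeze(y_tr, axis=1), top_idx), axis=1)

with tf.Session() as sess:
    print(sess.run(pred, {x_tr: x_train, y_tr: y_train, x_te: x_test}))
```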

Ch 6: Neural Networks

Neural networks are very important in machine learning and are growing in popularity thanks to major breakthroughs on previously unsolved problems. We start with 'shallow' neural networks, which are very powerful and can help us improve our prior ML algorithm results. We begin with the most basic NN unit, the operational gate, gradually add more to the network, and end by training a model to play tic-tac-toe.

  1. Introduction
  • We introduce the concept of neural networks and how TensorFlow is built to easily handle these algorithms.
  2. Implementing Operational Gates
  • We implement an operational gate with one operation, then show how to extend this to multiple nested operations.
  3. Working with Gates and Activation Functions
  • Now we introduce activation functions on the gates and show how different activation functions behave.
  4. Implementing a One Layer Neural Network
  • We have all the pieces to start implementing our first neural network, which we do here with regression on the Iris data set (see the sketch after this list).
  5. Implementing Different Layers
  • This section introduces the convolution layer and the max-pool layer. We show how to chain these together with fully connected layers in 1D and 2D examples.
  6. Using Multi-layer Neural Networks
  • Here we show how to functionalize different layers and variables for a cleaner multi-layer neural network.
  7. Improving Predictions of Linear Models
  • We show how to improve the convergence of our prior logistic regression by adding a set of hidden layers.
  8. Learning to Play Tic-Tac-Toe
  • Given a set of tic-tac-toe boards and corresponding optimal moves, we train a neural network classification model to play. At the end of the script, you can attempt to play against the trained model.
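
Below is a minimal one-hidden-layer network sketch (TensorFlow 1.x API; the layer size and random data are placeholder choices, not the book's Iris setup):

```python
import numpy as np
import tensorflow as tf

x_data = tf.placeholder(tf.float32, shape=[None, 3])
y_target = tf.placeholder(tf.float32, shape=[None, 1])

# One fully connected hidden layer with ReLU, then a linear output.
hidden_nodes = 5
W1 = tf.Variable(tf.random_normal([3, hidden_nodes]))
b1 = tf.Variable(tf.random_normal([hidden_nodes]))
W2 = tf.Variable(tf.random_normal([hidden_nodes, 1]))
b2 = tf.Variable(tf.random_normal([1]))

hidden = tf.nn.relu(tf.matmul(x_data, W1) + b1)
output = tf.matmul(hidden, W2) + b2

loss = tf.reduce_mean(tf.square(output - y_target))  # L2 loss
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x_batch = np.random.rand(10, 3).astype(np.float32)  # stand-in data
    y_batch = np.random.rand(10, 1).astype(np.float32)
    for _ in range(50):
        sess.run(train_step, {x_data: x_batch, y_target: y_batch})
    print(sess.run(loss, {x_data: x_batch, y_target: y_batch}))
```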

Ch 7: Natural Language Processing

This chapter shows how to work with text in TensorFlow: we turn text into numerical vectors, build bag-of-words and TF-IDF models for spam prediction, and create word2vec embeddings (CBOW and skip-gram) that we apply to sentiment analysis.

  1. Introduction
  • We introduce methods for turning text into numerical vectors, as well as TensorFlow's 'embedding' feature (see the sketch after this list).
  2. Working with Bag-of-Words
  • Here we use TensorFlow to build a one-hot encoding of words called bag-of-words. We use this method with logistic regression to predict whether a text message is spam or ham.
  3. Implementing TF-IDF
  • We implement Term Frequency - Inverse Document Frequency (TF-IDF) with a combination of scikit-learn and TensorFlow, and perform logistic regression on the TF-IDF vectors to improve our spam/ham text-message predictions.
  4. Working with CBOW
  5. Working with Skip-Gram
  6. Implementing Word2Vec Example
  7. Performing Sentiment Analysis
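
As a minimal sketch of the 'embedding' feature mentioned above (TensorFlow 1.x API; the vocabulary size, embedding dimension, and word indices are made up for illustration):

```python
import tensorflow as tf

vocab_size = 1000   # hypothetical vocabulary size
embedding_dim = 50  # hypothetical embedding dimension

# One trainable vector per word in the vocabulary.
embeddings = tf.Variable(
    tf.random_uniform([vocab_size, embedding_dim], -1.0, 1.0))

# A sentence as word indices, e.g. produced by a vocabulary lookup.
word_ids = tf.placeholder(tf.int32, shape=[None])
word_vectors = tf.nn.embedding_lookup(embeddings, word_ids)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    vecs = sess.run(word_vectors, {word_ids: [12, 7, 42]})
    print(vecs.shape)  # (3, 50)
```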

Ch 8: Convolutional Neural Networks

TBA

Ch 9: Recurrent Neural Networks

TBA

Ch 10: Taking TensorFlow to Production

TBA

Ch 11: More with TensorFlow

TBA

About

License: MIT License

