
CS231n: Assignment Solutions

Convolutional Neural Networks for Visual Recognition

Stanford - Spring 2021

About

Overview

These are my solutions for the CS231n course assignments offered by Stanford University (Spring 2021). Inline questions are explained in detail, and the code is brief and commented (see the examples below). From what I have seen, these should be among the shortest code solutions (excluding open-ended challenges). In assignment 2, DenseNet is used in the PyTorch notebook and ResNet in the TensorFlow notebook.

Note: currently, only solutions for the first 2 assignments are done.

Main sources (official)

Additional references (helper)

  • Additional references are yet to be added...

Solutions

Assignment 1

  • Q1: k-Nearest Neighbor classifier. (Done)
  • Q2: Training a Support Vector Machine. (Done)
  • Q3: Implement a Softmax classifier. (Done)
  • Q4: Two-Layer Neural Network. (Done)
  • Q5: Higher Level Representations: Image Features. (Done)

Assignment 2

  • Q1: Fully-connected Neural Network. (Done)
  • Q2: Batch Normalization. (Done)
  • Q3: Dropout. (Done)
  • Q4: Convolutional Networks. (Done)
  • Q5 option 1: PyTorch on CIFAR-10. (Done)
  • Q5 option 2: TensorFlow on CIFAR-10. (Done)

Assignment 3

  • Solutions for assignment 3 are yet to be added...

Examples

Inline question example
Inline Question 1

It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? How would changing the margin affect the frequency of this happening? Hint: the SVM loss function is, strictly speaking, not differentiable.



Your Answer


First, we need to make some assumptions. To compute our SVM loss, we use the hinge loss, which takes the form $\max(0, \cdot)$. For the 1D case, we can define it as follows ($s$ - score, $j$ - any class, $y_i$ - correct class, $\Delta$ - margin):

$$f(s_j) = \max(0,\ s_j - s_{y_i} + \Delta)$$

Let's now see how our function fits the definition of the gradient. The formula below is what we use for computing the gradient numerically when, instead of taking the limit as $h$ approaches $0$, we choose some arbitrarily small $h$:

$$\frac{df(x)}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \approx \frac{f(x+h) - f(x)}{h}$$

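As a quick illustration (my own sketch, not part of the assignment code; `numeric_grad` and the toy `hinge` function are made-up names), the numeric estimate matches the analytic gradient well wherever $\max$ is away from its kink:

def numeric_grad(f, x, h=1e-5):
    # one-sided finite-difference approximation from the formula above
    return (f(x + h) - f(x)) / h

hinge = lambda x: max(0.0, x)  # 1D hinge with the score difference and margin folded into x

print(numeric_grad(hinge, 3.0))   # ~1.0, matches the analytic gradient
print(numeric_grad(hinge, -3.0))  # 0.0, matches the analytic gradient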
Now we can talk about the possible mismatches between numeric and analytic gradient computation:

  1. Cause of mismatch
    • Relative error - the discrepancy is caused by the arbitrary choice of a small value of $h$, which by definition should approach $0$. The analytic computation produces an exact result (as precise as computation precision allows), while the numeric solution only approximates it.

    • Kinks - $\max(0, \cdot)$ only has a subgradient: when both values inside the $\max$ are equal, its gradient is undefined, so the function is not smooth there. Such points, referred to as kinks, may cause the numeric gradient to produce results different from the analytic computation due to (again) the arbitrary choice of $h$.

  2. Concerns
    • When comparing the analytic and numeric methods, kinks are more dangerous than small inaccuracies in regions where the gradient is smooth. Small derivative inaccuracies still change the weights by approximately the same amount, but kinks may cause unintentional updates, as seen in the example below. If these unintentional values have a noticeable effect on the parameter updates, it is a reason for concern.

  3. 1D example of a numeric gradient failure
    • Assume $f(x) = \max(0, x)$ and $x = -0.1$. The analytic derivative of $f$ at $x$ is $0$. However, if we choose $h = 1$, the numeric computation yields $\frac{f(x + h) - f(x)}{h} = \frac{0.9 - 0}{1} = 0.9$ (see the sketch after this list).

  4. Relation between margin and mismatch
    • Assuming all other parameters remain unchanged, increasing the margin $\Delta$ will lower the frequency of kinks. This is because a higher $\Delta$ causes more of the terms $s_j - s_{y_i} + \Delta$ to be positive, thus reducing the probability of landing exactly on a kink. In reality though, it would not have a big effect: if we increase the margin $\Delta$, the SVM will only learn to increase the (negative) gap between $s_j - s_{y_i}$ and $0$ (for $j \neq y_i$). That still means that, after adding $\Delta$, there is the same chance for $s_j - s_{y_i} + \Delta$ to land exactly at the edge of the $\max$.

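To make the 1D example in point 3 concrete, here is a minimal sketch (my own illustration; the values $x = -0.1$ and $h = 1$ are assumptions chosen to reproduce the $0.9$ above):

def analytic_grad(x):
    return 1.0 if x > 0 else 0.0  # derivative of max(0, x); take 0 at the kink

def numeric_grad(f, x, h):
    return (f(x + h) - f(x)) / h  # one-sided finite difference

f = lambda x: max(0.0, x)
x, h = -0.1, 1.0

print(analytic_grad(x))       # 0.0
print(numeric_grad(f, x, h))  # 0.9 -- the kink at 0 lies between x and x + h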

Python code example
def conv_forward_naive(x, w, b, conv_param):
    """A naive implementation of the forward pass for a convolutional layer.

    The input consists of N data points, each with C channels, height H and
    width W. We convolve each input with F different filters, where each filter
    spans all C channels and has height HH and width WW.

    Input:
    - x: Input data of shape (N, C, H, W)
    - w: Filter weights of shape (F, C, HH, WW)
    - b: Biases, of shape (F,)
    - conv_param: A dictionary with the following keys:
      - 'stride': The number of pixels between adjacent receptive fields in the
        horizontal and vertical directions.
      - 'pad': The number of pixels that will be used to zero-pad the input.

    During padding, 'pad' zeros should be placed symmetrically (i.e., equally on both sides)
    along the height and width axes of the input. Be careful not to modify the original
    input x directly.

    Returns a tuple of:
    - out: Output data, of shape (N, F, H', W') where H' and W' are given by
      H' = 1 + (H + 2 * pad - HH) / stride
      W' = 1 + (W + 2 * pad - WW) / stride
    - cache: (x, w, b, conv_param)
    """
    out = None
    ###########################################################################
    # TODO: Implement the convolutional forward pass.                         #
    # Hint: you can use the function np.pad for padding.                      #
    ###########################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    P1 = P2 = P3 = P4 = conv_param['pad'] # padding: up = right = down = left
    S1 = S2 = conv_param['stride']        # stride:  vertical = horizontal
    N, C, HI, WI = x.shape                # input dims  
    F, _, HF, WF = w.shape                # filter dims
    HO = 1 + (HI + P1 + P3 - HF) // S1    # output height      
    WO = 1 + (WI + P2 + P4 - WF) // S2    # output width

    # Helper function (warning: numpy version 1.20 or above is required for usage)
    to_fields = lambda x: np.lib.stride_tricks.sliding_window_view(x, (WF,HF,C,N))

    w_row = w.reshape(F, -1)                                            # weights as rows
    x_pad = np.pad(x, ((0,0), (0,0), (P1, P3), (P2, P4)), 'constant')   # padded inputs
    x_col = to_fields(x_pad.T).T[...,::S1,::S2].reshape(N, C*HF*WF, -1) # inputs as cols

    out = (w_row @ x_col).reshape(N, F, HO, WO) + np.expand_dims(b, axis=(2,1))
    
    x = x_pad # we will use padded version as well during backpropagation

    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################
    cache = (x, w, b, conv_param)
    return out, cache
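
For reference, here is a quick usage sketch of my own (assuming NumPy >= 1.20 and the usual `import numpy as np` from the surrounding layer module; the shapes are arbitrary). It checks that the output shape matches the formula in the docstring.

import numpy as np

x = np.random.randn(2, 3, 8, 8)   # 2 images, 3 channels, 8x8 pixels
w = np.random.randn(4, 3, 3, 3)   # 4 filters spanning all 3 channels, size 3x3
b = np.random.randn(4)
conv_param = {'stride': 1, 'pad': 1}

out, cache = conv_forward_naive(x, w, b, conv_param)
print(out.shape)  # (2, 4, 8, 8), since H' = 1 + (8 + 2*1 - 3) // 1 = 8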
