aymericdamien / TensorFlow-Examples

TensorFlow Tutorial and Examples for Beginners (support TF v1 & v2)

My weights are exploding!

kilarinikhil opened this issue · comments

I tried to implement the code, importing the MNIST data in a slightly different way.

import tensorflow as tf
import numpy as np

# Load MNIST via Keras; flatten the 28x28 images to 784-dim vectors and one-hot encode the labels
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data(path='mnist.npz')
x_train, x_test = tf.reshape(x_train, [len(x_train), 784]), tf.reshape(x_test, [len(x_test), 784])
y_train, y_test = tf.one_hot(y_train, 10), tf.one_hot(y_test, 10)

# Training hyperparameters
learning_rate = 0.001
training_epochs = 25
batch_size = 100
total_batch = int(60000 / batch_size)
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float64, [None, 784])
y = tf.placeholder(tf.float64, [None, 10])

# Model parameters, drawn from a standard normal distribution (float64)
W = tf.Variable(np.random.randn(784, 10))
b = tf.Variable(np.random.randn(1, 10))

# Softmax regression model
pred = tf.nn.softmax(tf.add(tf.matmul(x, W), b))

# Cross-entropy computed by hand from the softmax output
cost = tf.reduce_mean(-tf.reduce_sum(tf.multiply(y, tf.log(pred)), 1))

# Plain gradient descent on the cross-entropy cost
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    # Evaluate the reshape/one-hot ops once to get plain NumPy arrays for feeding
    X_train, X_test = sess.run(x_train).astype('float64'), sess.run(x_test).astype('float64')
    Y_train, Y_test = sess.run(y_train).astype('float64'), sess.run(y_test).astype('float64')
    
    for epoch in range(training_epochs):
        avg_cost = 0
        for i in range(total_batch):
            # Slice the next mini-batch out of the training set
            batch_x, batch_y = X_train[i*batch_size:(i+1)*batch_size], Y_train[i*batch_size:(i+1)*batch_size]

            # Run one optimization step and fetch the batch cost
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

            avg_cost += c / total_batch

        if (epoch + 1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

I found that the whole code is fine except that I didn't normalize the weights.
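
For anyone hitting the same thing, here is a minimal sketch of the fix, with the assumptions spelled out: it takes x_train/x_test as the raw NumPy arrays straight out of load_data (before the tf.reshape calls above), and the 1/255 input scaling and the 0.01 weight scale are illustrative choices, not verified values.

import numpy as np
import tensorflow as tf

# Assumption: x_train/x_test are the raw uint8 arrays from
# tf.keras.datasets.mnist.load_data(), shaped (N, 28, 28) with values in [0, 255]
X_train = x_train.reshape(-1, 784).astype('float64') / 255.0  # scale pixels into [0, 1]
X_test = x_test.reshape(-1, 784).astype('float64') / 255.0

# Smaller initial weights (0.01 is an illustrative scale) keep the softmax from
# saturating, so tf.log(pred) stays finite and the gradients stay bounded
W = tf.Variable(np.random.randn(784, 10) * 0.01)
b = tf.Variable(np.zeros((1, 10)))

Zero-initializing W and b, as this repo's own logistic_regression example does, is another option for a single-layer softmax model.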