jdwittenauer / ipython-notebooks

A collection of IPython notebooks covering various topics.

Linear regression Q1

Umartahir93 opened this issue · comments

import numpy as np

def gradientDescent(X, y, theta, alpha, iters):
    temp = np.matrix(np.zeros(theta.shape))
    parameters = int(theta.ravel().shape[1])
    cost = np.zeros(iters)

    # Why are we using the loop below? It's a matrix multiplication, and I
    # don't think we need a loop here since it will always give the same
    # answer. Can you please tell me the benefit of using a loop here?
    for i in range(iters):
        error = (X * theta.T) - y

I understand now: the loop is there so theta converges, since it gets updated on every iteration. Oops, my bad, it was easy.
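For anyone who lands on this issue with the same question: the multiplication does not give the same answer every pass, because theta is updated inside the loop, so `error` changes on each iteration. A minimal sketch with a hypothetical toy dataset (the data and learning rate below are made up for illustration, not from the notebook):

import numpy as np

# Hypothetical toy data: y = 2x, with an intercept column of ones.
X = np.matrix([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.matrix([[2.0], [4.0], [6.0]])
theta = np.matrix([[0.0, 0.0]])
alpha = 0.01

for i in range(3):
    error = (X * theta.T) - y               # different on every pass...
    theta = theta - (alpha / len(X)) * (error.T * X)  # ...because theta moves here
    print(i, float(np.sum(np.abs(error))))  # total error shrinks each iteration

Running this shows the total absolute error decreasing from one iteration to the next, which is exactly why the loop (rather than a single matrix multiplication) is needed.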