
Gradient descent confusion


@harshitmohan wrote:

Hi friends,

When I run my gradient descent implementation, it seems to converge (or so I think!) for a particular value of alpha and number of iterations.

However, keeping alpha the same, if I increase the number of iterations, I see a small increase in the cost function at high iteration counts. If I then decrease alpha, it again looks like convergence, but when I increase the number of iterations, the cost function again seems to increase a bit.

The following is for alpha = 0.07 and iterations = 10000:
[cost-vs-iterations plot: iter1]

The following is for alpha = 0.07 and iterations = 30000:
[cost-vs-iterations plot: iter2]

Is this normal, or is something wrong in my implementation? I am keeping the regularization penalty the same in both cases.
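For context, here is a minimal sketch of the kind of loop I mean, assuming batch gradient descent on an L2-regularized least-squares cost (the names and setup are illustrative, not my exact code):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.07, lam=1.0, num_iters=10000):
    """Batch gradient descent on an L2-regularized least-squares cost (illustrative)."""
    m, n = X.shape
    theta = np.zeros(n)
    costs = []
    for _ in range(num_iters):
        error = X @ theta - y  # residuals, shape (m,)
        # Regularized cost: mean squared error plus L2 penalty
        # (penalizes every component of theta for simplicity).
        cost = (error @ error) / (2 * m) + lam * (theta @ theta) / (2 * m)
        costs.append(cost)
        # Gradient of the cost above with respect to theta.
        grad = (X.T @ error) / m + (lam / m) * theta
        theta -= alpha * grad
    return theta, costs
```

With a fixed, sufficiently small alpha, the recorded costs for a convex objective like this should be non-increasing, so a slight uptick after many iterations usually points either to alpha being just large enough to overshoot and oscillate near the minimum, or to floating-point noise once the cost gets very small.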


