PyTorch code walkthrough: why call optimizer.zero_grad()?
optimizer.zero_grad() zeroes the gradients, i.e. it resets the derivative of the loss with respect to each weight to 0.
While learning PyTorch I noticed that for each batch, the code mostly performs the following sequence:
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
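For reference, this step can be made self-contained; the toy linear model, random data, MSE loss and SGD optimizer below are my own assumptions, not part of the original post:
import torch
import torch.nn as nn
import torch.optim as optim

# Assumed toy setup: a small linear model, random data, MSE loss, plain SGD.
net = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

inputs = torch.randn(4, 10)   # one mini-batch of 4 samples
labels = torch.randn(4, 1)

for epoch in range(5):
    optimizer.zero_grad()              # clear gradients left over from the previous step
    outputs = net(inputs)              # forward pass
    loss = criterion(outputs, labels)  # compute the loss
    loss.backward()                    # backward pass: populate .grad for every parameter
    optimizer.step()                   # update the parameters using the fresh gradients
    print(epoch, loss.item())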
I understand these operations as one step of gradient descent; for comparison, here is a simple gradient descent routine I wrote by hand earlier:
# gradient descent
# (assumes m samples with n features each, stored in input and label,
#  plus a helper dot(a, b) returning the inner product of two vectors)
weights = [0] * n
alpha = 0.0001
max_iter = 50000
for i in range(max_iter):
    loss = 0
    d_weights = [0] * n                                 # zero the accumulated gradient
    for k in range(m):
        h = dot(input[k], weights)                      # forward pass: prediction for sample k
        d_weights = [d_weights[j] + (label[k] - h) * input[k][j] for j in range(n)]
        loss += (label[k] - h) * (label[k] - h) / 2     # squared-error loss
    d_weights = [d_weights[j] / m for j in range(n)]    # average over the m samples
    weights = [weights[j] + alpha * d_weights[j] for j in range(n)]  # parameter update
    if i % 10000 == 0:
        print("Iteration %d loss: %f" % (i, loss / m))
        print(weights)
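The snippet above relies on dot, input, label, n and m being defined beforehand; a toy setup like the following, invented purely for illustration and placed before the loop, makes it runnable:
import random

# Hypothetical data just to make the hand-written loop executable.
n = 3          # number of features
m = 20         # number of samples
true_w = [1.0, -2.0, 0.5]
input = [[random.random() for _ in range(n)] for _ in range(m)]
label = [sum(w * x for w, x in zip(true_w, row)) for row in input]

def dot(a, b):
    """Inner product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))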
You can see that the two actually correspond line by line:
optimizer.zero_grad() corresponds to d_weights = [0] * n
i.e. it re-initialises the gradients to zero (because the derivative of a batch's loss with respect to the weights is the accumulated sum of the per-sample derivatives, and PyTorch keeps accumulating into .grad until it is cleared; see the sketch after this list).
outputs = net(inputs) corresponds to h = dot(input[k], weights)
i.e. the forward pass that computes the prediction.
loss = criterion(outputs, labels) corresponds to loss += (label[k] - h) * (label[k] - h) / 2
This step is straightforward: it computes the loss. (In the hand-written version I think it could even be skipped, since the backward pass never uses the loss value itself; it is only there so we can see how large the current loss is. In PyTorch the loss tensor is still needed as the starting point of loss.backward(), even though its numerical value does not enter the gradient computation.)
loss.backward() corresponds to d_weights = [d_weights[j] + (label[k] - h) * input[k][j] for j in range(n)]
i.e. the backward pass that computes the gradients.
optimizer.step() corresponds to weights = [weights[j] + alpha * d_weights[j] for j in range(n)]
i.e. the update of all parameters.
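The accumulation behaviour mentioned for optimizer.zero_grad() is easy to verify directly: calling backward() twice without zeroing in between adds the second gradient on top of the first. A minimal sketch, where the tiny linear layer and input are my own assumptions:
import torch
import torch.nn as nn

layer = nn.Linear(2, 1, bias=False)
x = torch.ones(1, 2)

# First backward pass: .grad holds the gradient of this pass.
layer(x).sum().backward()
print(layer.weight.grad)        # tensor([[1., 1.]])

# Second backward pass WITHOUT zeroing: the new gradient is added on top.
layer(x).sum().backward()
print(layer.weight.grad)        # tensor([[2., 2.]])

# Zeroing restores a clean slate before the next batch.
layer.weight.grad.zero_()
layer(x).sum().backward()
print(layer.weight.grad)        # back to tensor([[1., 1.]])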
If anything above is wrong, please point it out. Feedback and discussion are welcome.
---------------------
Author: scut_salmon
Source: CSDN
Original post: https://blog.csdn.net/scut_salmon/article/details/82414730
Copyright notice: this is an original article by the blogger; please include a link to the original when reposting.
Part two:
There are two ways to set the gradients of all model parameters directly to zero:
model.zero_grad()
optimizer.zero_grad() # equivalent to model.zero_grad() when optimizer = optim.Optimizer(model.parameters())
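A quick way to check the equivalence; the small model below is an assumption, and note that depending on the PyTorch version zero_grad() either fills each .grad with zeros or resets it to None:
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Produce some non-zero gradients first.
model(torch.randn(3, 4)).sum().backward()

optimizer.zero_grad()   # or equivalently: model.zero_grad()
# every gradient is now either None or all zeros
print(all(p.grad is None or (p.grad == 0).all() for p in model.parameters()))  # True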
If you only want to zero the gradient of one particular Variable, use the following:
Variable.grad.data.zero_()
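Applied to a single parameter this looks like the sketch below; in modern PyTorch the Variable API has been merged into Tensor, so tensor.grad.data.zero_() (or simply tensor.grad.zero_()) behaves the same way. The parameter name w is hypothetical:
import torch

w = torch.randn(3, requires_grad=True)
(w * 2).sum().backward()
print(w.grad)          # tensor([2., 2., 2.])

w.grad.data.zero_()    # zero just this tensor's gradient, leaving others untouched
print(w.grad)          # tensor([0., 0., 0.])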
# Zero the gradients before running the backward pass.
model.zero_grad()
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the variables it will update (which are the learnable weights
# of the model)
optimizer.zero_grad()