Autograd: Automatic Differentiation
阿新 · Published 2018-10-07
Autograd
1. Deep-learning algorithms come down to computing derivatives by backpropagation, and PyTorch's Autograd module implements exactly that: for every operation on a Tensor, Autograd can provide the derivative automatically, avoiding the error-prone process of deriving gradients by hand.
2. autograd.Variable is the core class of Autograd. It is a thin wrapper around a Tensor that supports almost all Tensor operations; once a Tensor is wrapped in a Variable, calling its .backward() method runs backpropagation and computes all the gradients automatically.
3. A Variable has three main attributes:
- data: the Tensor that the Variable wraps;
- grad: the gradient of data; grad is itself a Variable rather than a Tensor, and it has the same shape as data;
- grad_fn: a reference to a Function object that backpropagation uses to compute the gradients of the inputs.
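Note: since PyTorch 0.4 the Variable class has been merged into Tensor, so the wrapping shown below is no longer required; a Tensor created with requires_grad=True takes part in autograd directly. A minimal sketch of the modern equivalent of the walkthrough below:

import torch

x = torch.ones(2, 2, requires_grad=True)  # no Variable wrapper needed
y = x.sum()     # y records grad_fn=<SumBackward0>
y.backward()    # populates x.grad
print(x.grad)   # tensor([[1., 1.], [1., 1.]])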
Code walkthrough
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# _Author_: Monkey
import torch as t
from torch.autograd import Variable

x = Variable(t.ones(2, 2), requires_grad=True)
print(x)
'''tensor([[1., 1.],
        [1., 1.]], requires_grad=True)'''

y = x.sum()
print(y)
'''tensor(4., grad_fn=<SumBackward0>)'''

print(y.grad_fn)  # the Function object that backpropagation uses to compute the input gradients
'''<SumBackward0 object at 0x000002D4240AB860>'''
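Each grad_fn node is linked to the nodes for its inputs through its next_functions attribute, which is how autograd walks the graph backwards. A small exploratory sketch (the expected output shape is an assumption based on autograd internals; object addresses vary per run):

print(y.grad_fn.next_functions)
# For y = x.sum(), the sum node feeds an AccumulateGrad node that
# deposits the result into x.grad, roughly:
# ((<AccumulateGrad object at 0x...>, 0),)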
y.backward()
print(x.grad)
'''tensor([[1., 1.],
        [1., 1.]])'''

y.backward()
print(x.grad)
'''tensor([[2., 2.],
        [2., 2.]])'''

y.backward()
print(x.grad)
'''tensor([[3., 3.],
        [3., 3.]])'''

'''grad is accumulated during backpropagation: every run of backward()
adds the new gradient onto the previous one, so gradients must be
zeroed before each backward pass.'''
print(x.grad.data.zero_())  # zero the accumulated gradient in place
'''tensor([[0., 0.],
        [0., 0.]])'''

y.backward()
print(x.grad)
'''tensor([[1., 1.],
        [1., 1.]])'''
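In real training code this zeroing is rarely done tensor by tensor; the usual pattern is to call the optimizer's zero_grad() once per iteration before the backward pass. A minimal sketch, assuming a toy linear model and random data (model, inputs, targets are placeholders, not part of the walkthrough above):

import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs, targets = torch.randn(8, 4), torch.randn(8, 1)  # fake batch

for step in range(3):
    optimizer.zero_grad()                           # clear gradients accumulated last step
    loss = ((model(inputs) - targets) ** 2).mean()  # MSE loss
    loss.backward()                                 # gradients accumulate into .grad
    optimizer.step()                                # apply the update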
m = Variable(t.ones(4, 5))  # requires_grad defaults to False, so autograd does not track m
n = t.cos(m)
print(m)
print(n)
'''tensor([[1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]])
tensor([[0.5403, 0.5403, 0.5403, 0.5403, 0.5403],
        [0.5403, 0.5403, 0.5403, 0.5403, 0.5403],
        [0.5403, 0.5403, 0.5403, 0.5403, 0.5403],
        [0.5403, 0.5403, 0.5403, 0.5403, 0.5403]])'''
m_tensor_cos = t.cos(m.data)  # operating on .data returns a plain Tensor outside the autograd graph
print(m_tensor_cos)
'''tensor([[0.5403, 0.5403, 0.5403, 0.5403, 0.5403],
        [0.5403, 0.5403, 0.5403, 0.5403, 0.5403],
        [0.5403, 0.5403, 0.5403, 0.5403, 0.5403],
        [0.5403, 0.5403, 0.5403, 0.5403, 0.5403]])'''
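Reaching through .data this way cuts the result off from the graph silently. In current PyTorch the safer idioms are .detach(), which does the same thing while remaining visible to autograd's correctness checks, and the torch.no_grad() context manager for whole blocks. A brief sketch using the modern Tensor API (not the Variable API used above):

import torch

a = torch.ones(4, 5, requires_grad=True)
b = torch.cos(a)           # tracked: b.grad_fn is <CosBackward0>
c = torch.cos(a.detach())  # untracked: c.grad_fn is None
with torch.no_grad():      # nothing inside this block is tracked
    d = torch.cos(a)
print(b.grad_fn, c.grad_fn, d.grad_fn)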