PyTorch in Practice (1): Implementing Linear Regression with PyTorch
阿新 • Posted: 2021-07-31
"PyTorch深度學習實踐" (PyTorch Deep Learning Practice), complete series on Bilibili
P5 - Implementing Linear Regression with PyTorch
Four main steps to building a model
1. Prepare dataset
mini-batch: x and y must be matrices (one sample per row)
```python
## Prepare Dataset: mini-batch, X and Y are 3x1 Tensors
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
```
2. Design model
The key point is constructing the computational graph.
```python
## Design Model
## Define a class that inherits from torch.nn.Module
class LinearModel(torch.nn.Module):
    ## Constructor: initialize the object
    def __init__(self):
        ## super() calls the parent class constructor
        super(LinearModel, self).__init__()
        ## Linear unit containing two Tensors, weight and bias;
        ## the arguments (1, 1) give the dimensions of w
        self.linear = torch.nn.Linear(1, 1)

    ## Forward pass: w*x + b
    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()
```
The dimensions of w: number of neurons in the next layer × number of neurons in the previous layer.
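The weight shape described above can be checked directly. As a minimal sketch (the layer sizes 3 and 5 here are arbitrary, chosen only for illustration), `torch.nn.Linear` stores its weight as (out_features, in_features), i.e. next-layer neuron count × previous-layer neuron count:

```python
import torch

# A linear layer mapping 3 input features to 5 output features.
layer = torch.nn.Linear(3, 5)

# Weight shape is (out_features, in_features): next layer x previous layer.
print(layer.weight.shape)  # torch.Size([5, 3])
print(layer.bias.shape)    # torch.Size([5])
```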
3. Construct loss and optimizer
```python
## Construct Loss and Optimizer
## Loss function, takes y_pred and y
## (reduction='sum' replaces the deprecated size_average=False)
criterion = torch.nn.MSELoss(reduction='sum')
## Optimizer: model.parameters() collects all model parameters; lr is the learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
1. Loss function
2. Optimizer
Different optimizers can be swapped in and their results compared.
4. Training cycle
```python
## Training cycle
for epoch in range(100):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())
    ## Zero the gradients
    optimizer.zero_grad()
    ## Backpropagation
    loss.backward()
    ## Update parameters
    optimizer.step()
```
Complete code
```python
import torch

## Prepare Dataset: mini-batch, X and Y are 3x1 Tensors
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

## Design Model
## Define a class that inherits from torch.nn.Module
class LinearModel(torch.nn.Module):
    ## Constructor: initialize the object
    def __init__(self):
        ## super() calls the parent class constructor
        super(LinearModel, self).__init__()
        ## Linear unit containing two Tensors, weight and bias;
        ## the arguments (1, 1) give the dimensions of w
        self.linear = torch.nn.Linear(1, 1)

    ## Forward pass: w*x + b
    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

## Construct Loss and Optimizer
## Loss function, takes y_pred and y
## (reduction='sum' replaces the deprecated size_average=False)
criterion = torch.nn.MSELoss(reduction='sum')
## Optimizer: model.parameters() collects all model parameters; lr is the learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

## Training cycle
for epoch in range(100):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())
    ## Zero the gradients
    optimizer.zero_grad()
    ## Backpropagation
    loss.backward()
    ## Update parameters
    optimizer.step()

## Output weight and bias
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

## Test Model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
```
Run results
After training for 100 epochs, we obtain the weight and bias, as well as the predicted y for x = 4.0.
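Since the training data is exactly y = 2x, the learned parameters should approach w = 2 and b = 0 as training continues. As a condensed sketch of the same setup (using a bare `torch.nn.Linear` instead of the `LinearModel` class, and 1000 epochs instead of 100), one can verify the convergence:

```python
import torch

x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Train longer than the 100 epochs above to let the parameters converge.
for epoch in range(1000):
    loss = criterion(model(x_data), y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Should be close to w = 2.0 and b = 0.0, since the data is y = 2x.
print(model.weight.item(), model.bias.item())
```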