PyTorch: dynamic networks + weight sharing

PyTorch is known for its dynamic computation graphs. The following example demonstrates a dynamic network together with weight sharing:

# -*- coding: utf-8 -*-
import random
import torch

class DynamicNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        Construct the three linear layers used in the forward pass.
        """
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H)
        self.middle_linear = torch.nn.Linear(H, H)
        self.output_linear = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
        and reuse the middle_linear Module that many times to compute hidden layer
        representations.

        Since each forward pass builds a dynamic computation graph, we can use normal
        Python control-flow operators like loops or conditional statements when
        defining the forward pass of the model.

        Here we also see that it is perfectly safe to reuse the same Module many
        times when defining a computational graph. This is a big improvement from Lua
        Torch, where each Module could be used only once.

        The middle layer is applied a random number of times (0 to 3) on each
        forward pass, and every application reuses the same linear layer, so
        all of them share one set of weights.
        """
        h_relu = self.input_linear(x).clamp(min=0)
        for _ in range(random.randint(0, 3)):
            h_relu = self.middle_linear(h_relu).clamp(min=0)
        y_pred = self.output_linear(h_relu)
        return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = DynamicNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

This program is in effect a simple RNN-like structure: the computation graph is rebuilt dynamically on every forward pass.
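One way to see the weight sharing concretely: since forward() reuses the single middle_linear module, the model's parameter count does not depend on how many times the hidden layer is applied. Below is a minimal sketch (my own addition, not from the original post, assuming the DynamicNet class and dimensions from the example above):

# Sketch: verify weight sharing in DynamicNet (not part of the original post).
# The model holds exactly three Linear layers, however many times
# middle_linear is applied inside forward().
model = DynamicNet(1000, 100, 10)

n_params = sum(p.numel() for p in model.parameters())
# input_linear:  1000*100 + 100 = 100100
# middle_linear:  100*100 + 100 =  10100
# output_linear:  100*10  + 10  =   1010
print(n_params)  # 111210, independent of the random depth chosen at run time

# After a backward pass, gradients from every application of middle_linear
# accumulate into the same weight tensor (grad is None only if the random
# depth happened to be 0 on this particular pass).
x = torch.randn(8, 1000)
model(x).sum().backward()
print(None if model.middle_linear.weight.grad is None
      else model.middle_linear.weight.grad.shape)  # torch.Size([100, 100])

Because the three applications of middle_linear all touch the same parameters, their gradients are summed into one tensor during backward(); this is what makes reusing a Module safe and memory-cheap.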

References: the PyTorch documentation.

That is all for this PyTorch dynamic-network and weight-sharing example; I hope it gives you a useful reference.