
3. Neural Network Basics: Computation Methods and Their Python Implementation

Code Exercises

HybridSN Hyperspectral Classification Network

import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridSN(nn.Module):
  def __init__(self):
    super(HybridSN, self).__init__()
    # conv1: input (1, 30, 25, 25), 8 kernels of size 7x3x3 ==> (8, 24, 23, 23)
    self.conv3d1 = nn.Conv3d(1, 8, kernel_size=(7, 3, 3), stride=1, padding=0)
    self.bn1 = nn.BatchNorm3d(8)
    # conv2: (8, 24, 23, 23), 16 kernels of size 5x3x3 ==> (16, 20, 21, 21)
    self.conv3d2 = nn.Conv3d(8, 16, kernel_size=(5, 3, 3), stride=1, padding=0)
    self.bn2 = nn.BatchNorm3d(16)
    # conv3: (16, 20, 21, 21), 32 kernels of size 3x3x3 ==> (32, 18, 19, 19)
    self.conv3d3 = nn.Conv3d(16, 32, kernel_size=(3, 3, 3), stride=1, padding=0)
    self.bn3 = nn.BatchNorm3d(32)

    # 2-D convolution: (576, 19, 19), 64 kernels of size 3x3 ==> (64, 17, 17)
    self.conv2d4 = nn.Conv2d(576, 64, kernel_size=(3, 3), stride=1, padding=0)
    self.bn4 = nn.BatchNorm2d(64)

    # flatten: 64 * 17 * 17 = 18496
    self.fc1 = nn.Linear(18496, 256)
    # fully connected layers with 256 and 128 nodes, both using Dropout with rate 0.4
    self.fc2 = nn.Linear(256, 128)
    # the final output has 16 nodes, the number of target classes
    self.fc3 = nn.Linear(128, 16)
    self.dropout = nn.Dropout(0.4)


  def forward(self, x):
    # forward pass
    out = F.relu(self.bn1(self.conv3d1(x)))
    out = F.relu(self.bn2(self.conv3d2(out)))
    out = F.relu(self.bn3(self.conv3d3(out)))
    # merge the channel and spectral dimensions (32 * 18 = 576) before the 2-D convolution
    out = F.relu(self.bn4(self.conv2d4(out.reshape(out.shape[0], -1, 19, 19))))
    out = out.reshape(out.shape[0], -1)
    out = F.relu(self.dropout(self.fc1(out)))
    out = F.relu(self.dropout(self.fc2(out)))
    out = self.fc3(out)
    return out
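The shape annotations in the comments above can be verified layer by layer. The following sketch (an illustrative check, not part of the original post) traces a dummy input of shape (batch, 1, 30, 25, 25) through the same conv configuration:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 1, 30, 25, 25)  # (batch, channel, spectral band, H, W)
out = nn.Conv3d(1, 8, kernel_size=(7, 3, 3))(x)
print(out.shape)   # torch.Size([2, 8, 24, 23, 23])
out = nn.Conv3d(8, 16, kernel_size=(5, 3, 3))(out)
print(out.shape)   # torch.Size([2, 16, 20, 21, 21])
out = nn.Conv3d(16, 32, kernel_size=(3, 3, 3))(out)
print(out.shape)   # torch.Size([2, 32, 18, 19, 19])
# merge channel and band dimensions: 32 * 18 = 576
out = out.reshape(out.shape[0], -1, 19, 19)
out = nn.Conv2d(576, 64, kernel_size=3)(out)
print(out.shape)   # torch.Size([2, 64, 17, 17])
out = out.reshape(out.shape[0], -1)
print(out.shape)   # torch.Size([2, 18496]) -- matches fc1's input size
```

Each valid (padding-0) convolution shrinks a dimension by kernel_size - 1, which is exactly what the comments record.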

SENet Implementation
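The body of this section is not filled in, so as a minimal sketch of SENet's core idea (the class name SEBlock and the reduction parameter are my own choices): squeeze the feature map with global average pooling, pass it through a bottleneck MLP, and rescale each channel by the resulting weight.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
  """Squeeze-and-Excitation block: global pooling -> bottleneck MLP -> channel-wise scaling."""
  def __init__(self, channels, reduction=16):
    super().__init__()
    self.fc1 = nn.Linear(channels, channels // reduction)
    self.fc2 = nn.Linear(channels // reduction, channels)

  def forward(self, x):
    b, c, _, _ = x.shape
    # squeeze: global average pooling over the spatial dimensions -> (b, c)
    s = x.mean(dim=(2, 3))
    # excitation: bottleneck MLP with a sigmoid gate -> per-channel weights in (0, 1)
    w = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))
    # scale: reweight each channel of the input feature map
    return x * w.view(b, c, 1, 1)

x = torch.randn(2, 64, 8, 8)
y = SEBlock(64)(x)  # same shape as x, channels reweighted
```

Such a block can be dropped in after any convolutional stage (e.g. after conv2d4 in HybridSN) without changing tensor shapes.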

Video Learning

Self-Attention Mechanisms and Low-Rank Reconstruction in Semantic Segmentation


Reference: Attention and Low-Rank Reconstruction in Semantic Segmentation
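To make the topic above concrete, here is a minimal sketch (my own illustration, not the video's code) of non-local self-attention over a 2-D feature map. Note that the attention matrix is (HW x HW); its quadratic cost in the number of pixels is what motivates the low-rank approximations discussed in this line of work.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
  """Minimal non-local self-attention over the spatial positions of a feature map."""
  def __init__(self, in_channels, key_channels):
    super().__init__()
    self.query = nn.Conv2d(in_channels, key_channels, 1)
    self.key   = nn.Conv2d(in_channels, key_channels, 1)
    self.value = nn.Conv2d(in_channels, in_channels, 1)

  def forward(self, x):
    b, c, h, w = x.shape
    q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, k)
    k = self.key(x).flatten(2)                    # (b, k, hw)
    v = self.value(x).flatten(2).transpose(1, 2)  # (b, hw, c)
    # (b, hw, hw) attention map: every pixel attends to every other pixel
    attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
    out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
    return x + out  # residual connection

x = torch.randn(1, 16, 4, 4)
y = SelfAttention2d(16, 8)(x)  # same shape as x
```

Low-rank methods replace the full (HW x HW) map with a factorization through a small set of bases, reducing cost from O((HW)^2) to O(HW * K).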

Frontier Advances in Image Semantic Segmentation