
Initializing Model Parameters in PyTorch

# Gaussian (normal) distribution
torch.nn.init.normal_(tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0) → torch.Tensor
# uniform distribution
torch.nn.init.uniform_(tensor: torch.Tensor, a: float = 0.0, b: float = 1.0) → torch.Tensor
# constant value
torch.nn.init.constant_(tensor: torch.Tensor, val: float) → torch.Tensor
# all zeros
torch.nn.init.zeros_(tensor: torch.Tensor) → torch.Tensor
# all ones
torch.nn.init.ones_(tensor: torch.Tensor) → torch.Tensor
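All of these functions modify the given tensor in place (the trailing underscore is PyTorch's in-place convention) and return it. A minimal sketch of calling each one on a plain tensor:

```python
import torch

t = torch.empty(3, 4)                         # uninitialized tensor
torch.nn.init.normal_(t, mean=0.0, std=0.01)  # samples from N(0, 0.01^2), in place
torch.nn.init.uniform_(t, a=-0.1, b=0.1)      # samples from U(-0.1, 0.1), in place
torch.nn.init.constant_(t, 0.5)               # fills every element with 0.5
torch.nn.init.zeros_(t)                       # fills with 0
torch.nn.init.ones_(t)                        # fills with 1
```

Because each call overwrites the tensor, only the last call's result survives; in practice you pick one initializer per parameter.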

Concrete example

self.encoder_att = nn.Linear(encoder_dim, attention_dim)  # linear layer to transform encoded image
self.decoder_att = nn.Linear(decoder_dim, attention_dim)  # linear layer to transform decoder's output
self.full_att = nn.Linear(attention_dim, 1)  # linear layer to calculate values to be softmax-ed

torch.nn.init.zeros_(self.encoder_att.weight)
torch.nn.init.zeros_(self.encoder_att.bias)

torch.nn.init.zeros_(self.decoder_att.weight)
torch.nn.init.zeros_(self.decoder_att.bias)

torch.nn.init.zeros_(self.full_att.weight)
torch.nn.init.zeros_(self.full_att.bias)

# freeze these parameters so they are not updated during training
for param in self.parameters():
    param.requires_grad = False

self.relu = nn.ReLU()
self.softmax = nn.Softmax(dim=1)  # softmax layer to calculate weights

  

Initialization is split into the weight and the bias of each layer; the two must be initialized separately.
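One common way to handle weights and biases separately, without naming each layer by hand, is to iterate over `named_parameters()`. A minimal sketch (the module and the chosen distributions here are illustrative assumptions, not the snippet above):

```python
import torch
import torch.nn as nn

# Hypothetical module: initialize weights from N(0, 0.01^2), biases to zero.
layer = nn.Linear(16, 8)
for name, param in layer.named_parameters():
    if "weight" in name:
        torch.nn.init.normal_(param, mean=0.0, std=0.01)
    elif "bias" in name:
        torch.nn.init.zeros_(param)
```

The same loop works on a whole model via `model.named_parameters()`, which is why separating the weight case from the bias case pays off as the network grows.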