torch.nn.utils.clip_grad_norm_()
By 阿新 · Published 2021-12-11
torch.nn.utils.clip_grad_norm_(); gradient clipping; gradient truncation
Usage
Parameter list
- parameters – an iterable of tensors, or a single tensor (the model parameters)
- max_norm – the maximum allowed norm of the gradients
- norm_type – the type of p-norm to use. Defaults to the L2 norm; can be the infinity norm (inf)
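A minimal call sketch showing these parameters in use (the `nn.Linear` model, the dummy loss, and `max_norm=1.0` are placeholders for illustration, not from the original text):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)   # hypothetical model, stands in for yours

loss = model(torch.randn(4, 10)).sum()
loss.backward()            # gradients now exist on model.parameters()

# Default L2 norm; returns the total norm the gradients had *before* clipping
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

# Infinity norm instead: clips based on the largest absolute gradient entry
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0,
                               norm_type=float('inf'))
```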
Let total_norm be the norm of the gradients of all parameters in parameters, computed as if they were concatenated into a single vector.
If max_norm > total_norm, the gradients are left unchanged;
if max_norm < total_norm, every gradient is multiplied by the coefficient clip_coef = max_norm / (total_norm + 1e-6), so that afterwards the total norm is (approximately) max_norm.
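A quick sketch verifying this behavior on a single tensor (the gradient values are made up for illustration):

```python
import torch

p = torch.zeros(3, requires_grad=True)
p.grad = torch.tensor([3.0, 4.0, 0.0])   # L2 norm = 5.0

# total_norm = 5.0 > max_norm = 1.0, so clip_coef = 1.0 / (5.0 + 1e-6) ≈ 0.2
before = torch.nn.utils.clip_grad_norm_([p], max_norm=1.0)
print(before)          # 5.0 — the norm before clipping is returned
print(p.grad)          # tensor([0.6000, 0.8000, 0.0000])
print(p.grad.norm())   # ≈ 1.0, i.e. max_norm
```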
Official implementation
```python
import torch
from math import inf  # the original source uses `from torch._six import inf`, removed in newer PyTorch


def clip_grad_norm_(parameters, max_norm, norm_type=2):
    r"""Clips gradient norm of an iterable of parameters.

    The norm is computed over all gradients together, as if they were
    concatenated into a single vector. Gradients are modified in-place.

    Arguments:
        parameters (Iterable[Tensor] or Tensor): an iterable of Tensors or a
            single Tensor that will have gradients normalized
        max_norm (float or int): max norm of the gradients
        norm_type (float or int): type of the used p-norm. Can be ``'inf'``
            for infinity norm.

    Returns:
        Total norm of the parameters (viewed as a single vector).
    """
    if isinstance(parameters, torch.Tensor):
        parameters = [parameters]
    # Step 1: keep only the parameters that actually have a gradient
    parameters = list(filter(lambda p: p.grad is not None, parameters))
    max_norm = float(max_norm)
    norm_type = float(norm_type)
    if norm_type == inf:
        total_norm = max(p.grad.data.abs().max() for p in parameters)
    else:
        total_norm = 0
        for p in parameters:
            # Step 2: compute this parameter's gradient norm
            param_norm = p.grad.data.norm(norm_type)
            # Step 3: accumulate it into the global norm
            total_norm += param_norm.item() ** norm_type
        total_norm = total_norm ** (1. / norm_type)
    clip_coef = max_norm / (total_norm + 1e-6)
    if clip_coef < 1:
        for p in parameters:
            p.grad.data.mul_(clip_coef)
    return total_norm
```
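In a training loop, the call belongs between loss.backward() and optimizer.step(), so that the optimizer updates with the clipped gradients. A minimal sketch (the model, data, learning rate, and threshold are all placeholder choices):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

for step in range(100):
    x, y = torch.randn(8, 10), torch.randn(8, 2)   # placeholder batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    # Clip after backward() and before step(); otherwise the optimizer
    # would apply the unclipped gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```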
Meaning
The main purpose of this function is to rescale the gradients of all parameters in parameters so that their overall norm does not exceed max_norm.
Gradient clipping addresses the exploding-gradient problem: a threshold is set, and if the gradient norm exceeds that threshold, the gradients are scaled down ("truncated") so that their norm equals the threshold.