
Minkowski Engine

The Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers, such as convolution, pooling, unpooling, and broadcasting operations on sparse tensors. For more information, please visit the documentation page.

pip install git+https://github.com/NVIDIA/MinkowskiEngine.git

Sparse Tensor Networks: Neural Networks for Spatially Sparse Tensors

Compressing neural networks to speed up inference and minimize memory footprint has been studied widely. One of the popular techniques for model compression is pruning the weights of convolutional networks, also known as sparse convolutional networks. Such parameter-space sparsity used for model compression still compresses networks that operate on dense tensors, and all intermediate activations of these networks are dense tensors as well.

In this work, however, the focus is on sparse data, in particular spatially sparse high-dimensional inputs. Such data can be represented as sparse tensors, and these sparse tensors are commonplace in high-dimensional problems such as 3D perception, registration, and statistical data. Neural networks specialized for these inputs are defined as sparse tensor networks; they process sparse tensors and generate sparse tensors as outputs. To construct a sparse tensor network, all standard neural network layers, such as MLPs, non-linearities, convolutions, normalizations, and pooling operations, are built in the same way they are defined on dense tensors and are implemented in the Minkowski Engine.

A sparse tensor network operation, convolution on a sparse tensor, is illustrated below. The convolution layer on a sparse tensor works similarly to that on a dense tensor. However, on a sparse tensor, the convolution outputs are computed only at a set of specified points, which can be controlled in the generalized convolution.
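To make this concrete, here is a minimal sketch of a single generalized convolution on a sparse tensor. The toy coordinates and features are made up, and the coordinate keyword depends on the installed version (coords in 0.4.x, coordinates in 0.5+); the 0.4-style keyword used elsewhere in this post is assumed.

import numpy as np
import torch
import MinkowskiEngine as ME

# Toy 2D input (hypothetical data): three nonzero locations, each with a
# 3-channel feature vector. batched_coordinates prepends the batch index.
pts = np.array([[0., 0.], [1., 2.], [3., 1.]])
coords = ME.utils.batched_coordinates([pts])
feats = torch.rand(3, 3)

# 0.4-style keyword (coords=); newer releases use coordinates= instead.
x = ME.SparseTensor(feats, coords=coords)

# A generalized convolution: outputs are computed only at coordinates
# derived from the nonzero input locations, not over a dense grid.
conv = ME.MinkowskiConvolution(
    in_channels=3, out_channels=8, kernel_size=3, stride=1, dimension=2)
y = conv(x)
print(y.C.shape, y.F.shape)  # output coordinates and features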

Features

  • Unlimited high-dimensional sparse tensor support
  • All standard neural network layers (convolution, pooling, broadcast, etc.)
  • Dynamic computation graph
  • Custom kernel shapes
  • Multi-GPU training
  • Multi-threaded kernel map
  • Multi-threaded compilation
  • Highly optimized GPU kernels

Requirements

  • Ubuntu >= 14.04
  • 11.1 > CUDA >= 10.1.243
  • pytorch >= 1.5
  • python >= 3.6
  • GCC >= 7

Pip

MinkowskiEngine is distributed via PyPI (MinkowskiEngine) and can be installed simply with pip. First, install PyTorch following its instructions. Next, install openblas.

sudo apt install libopenblas-dev
pip install torch
pip install -U MinkowskiEngine --install-option="--blas=openblas" -v
# For pip installation from the latest source
# pip install -U git+https://github.com/NVIDIA/MinkowskiEngine

If you want to specify arguments for the setup script, please refer to the following command.

# Uncomment some options if things don't work
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine \
# \ # uncomment the following line if you want to force cuda installation
# --install-option="--force_cuda" \
# \ # uncomment the following line if you want to force no cuda installation. force_cuda supersedes cpu_only
# --install-option="--cpu_only" \
# \ # uncomment the following line when torch fails to find cuda_home.
# --install-option="--cuda_home=/usr/local/cuda" \
# \ # uncomment the following line to override to openblas, atlas, mkl, blas
# --install-option="--blas=openblas" \

Quick Start

To use the Minkowski Engine, first import the engine. Then, define the network. If the data is not yet quantized, voxelize or quantize the (spatial) data into a sparse tensor. Fortunately, the Minkowski Engine provides a quantization function (MinkowskiEngine.utils.sparse_quantize), sketched below.
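A minimal sketch of the quantization step follows; the point cloud below is made up, and positional arguments are used because the keyword names differ slightly between versions (coords/feats in 0.4.x, coordinates/features in 0.5+).

import numpy as np
import MinkowskiEngine as ME

# Hypothetical raw point cloud: 1000 points with continuous xyz
# coordinates and a 3-channel feature per point.
coords = np.random.rand(1000, 3) * 10
feats = np.random.rand(1000, 3)

# Quantize onto a 0.1-unit voxel grid; points that fall into the same
# voxel are deduplicated, yielding unique integer coordinates suitable
# for building a SparseTensor.
q_coords, q_feats = ME.utils.sparse_quantize(coords, feats, quantization_size=0.1)
print(q_coords.shape, q_feats.shape)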

Anaconda

We recommend python>=3.6 for installation. First, follow the anaconda documentation to install anaconda on your computer.

sudo apt install libopenblas-dev
conda create -n py3-mink python=3.8
conda activate py3-mink
conda install numpy mkl-include pytorch cudatoolkit=11.0 -c pytorch
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine
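A quick import check after installation, run inside the activated py3-mink environment (printing ME.__version__ assumes the package exposes a version attribute, which current releases do):

import torch
import MinkowskiEngine as ME

print(torch.__version__, torch.version.cuda)  # PyTorch build and its CUDA version
print(ME.__version__)                         # installed MinkowskiEngine version
print(torch.cuda.is_available())              # True if a GPU build is usable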

System Python

As with the Anaconda installation, make sure that you install PyTorch built with the same CUDA version that nvcc uses (a quick check is sketched after the commands below).

# install system requirements
sudo apt install python3-dev libopenblas-dev
# Skip if you already have pip installed on your python3
curl https://bootstrap.pypa.io/get-pip.py | python3
# Get pip and install python requirements
python3 -m pip install torch numpy
git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
python setup.py install
# To specify blas, CUDA_HOME and force CUDA installation, use the following command
# python setup.py install --blas=openblas --cuda_home=/usr/local/cuda --force_cuda
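Before running setup.py, it can help to confirm that the CUDA version PyTorch was built with matches the toolkit nvcc reports, since a mismatch is a common cause of build failures. A minimal check:

import subprocess
import torch

# CUDA version the installed PyTorch was compiled against.
print("torch CUDA:", torch.version.cuda)

# CUDA toolkit version reported by nvcc (used to build MinkowskiEngine).
out = subprocess.run(["nvcc", "--version"], stdout=subprocess.PIPE)
print(out.stdout.decode())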

Creating a Network

import torch.nn as nn
import MinkowskiEngine as ME

class ExampleNetwork(ME.MinkowskiNetwork):

    def __init__(self, in_feat, out_feat, D):
        super(ExampleNetwork, self).__init__(D)
        self.conv1 = nn.Sequential(
            ME.MinkowskiConvolution(
                in_channels=in_feat,
                out_channels=64,
                kernel_size=3,
                stride=2,
                dilation=1,
                has_bias=False,
                dimension=D),
            ME.MinkowskiBatchNorm(64),
            ME.MinkowskiReLU())
        self.conv2 = nn.Sequential(
            ME.MinkowskiConvolution(
                in_channels=64,
                out_channels=128,
                kernel_size=3,
                stride=2,
                dimension=D),
            ME.MinkowskiBatchNorm(128),
            ME.MinkowskiReLU())
        self.pooling = ME.MinkowskiGlobalPooling()
        self.linear = ME.MinkowskiLinear(128, out_feat)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.pooling(out)
        return self.linear(out)

Forward and backward using the custom network

# loss and network
criterion = nn.CrossEntropyLoss()
net = ExampleNetwork(in_feat=3, out_feat=5, D=2)
print(net)

# a data loader must return a tuple of coords, features, and labels.
coords, feat, label = data_loader()
input = ME.SparseTensor(feat, coords=coords)
# Forward
output = net(input)

# Loss
loss = criterion(output.F, label)
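To complete the backward pass mentioned in the heading, a hypothetical single optimization step built on the snippet above (the SGD hyperparameters are arbitrary) could look like this:

import torch.optim as optim

# One training step: zero gradients, backpropagate, update parameters.
optimizer = optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
optimizer.zero_grad()
loss.backward()   # backward pass through the sparse tensor network
optimizer.step()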