Implementing a Convolution Layer with Meta-Operators

Meta-operators are a key concept in Jittor; their hierarchy is shown below.
A meta-operator is composed of reindex operators, reindex-reduce operators, and element-wise operators. Reindex operators and reindex-reduce operators are both unary. A reindex operator is a one-to-many mapping between its input and output, while a reindex-reduce operator is a many-to-one mapping. Broadcast, pad, and split are common reindex operators, and reduce, product, and sum are common reindex-reduce operators. Element-wise operators are the third part of meta-operators; unlike the first two, an element-wise operator may take multiple inputs. All inputs and outputs of an element-wise operator must have the same shape, and they are mapped one-to-one. For example, the addition of two variables is a binary element-wise operator.

[Figure] The hierarchy of meta-operators. Meta-operators comprise three classes: reindex operators, reindex-reduce operators, and element-wise operators. The backward operators of meta-operators are still meta-operators. Meta-operators can be composed into common deep learning operators, and these deep learning operators can in turn be composed into deep learning models.
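As a small, hypothetical illustration of the three categories (not part of the original tutorial; it assumes Jittor is installed and imported as jt), the snippet below touches an element-wise operator, a reindex operator, and a reindex-reduce operator, and uses jt.grad to show that the backward pass of meta-operators is again built from meta-operators:

import numpy as np
import jittor as jt

a = jt.array(np.arange(6, dtype="float32").reshape(2, 3))
b = jt.array(np.ones((2, 3), dtype="float32"))

c = a + b                             # element-wise operator: one-to-one mapping
d = a.broadcast([2, 3, 4], dims=[2])  # reindex operator: one-to-many mapping
e = c.sum(dim=1)                      # reindex-reduce operator: many-to-one mapping

# The gradient of this composition is itself computed with meta-operators.
grads = jt.grad(e.sum(), [a])
print(c.shape, d.shape, e.shape, grads[0].shape)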

The figure above also illustrates how matrix multiplication can be implemented with three meta-operators:
def matmul(a, b):
    (n, m), k = a.shape, b.shape[-1]
    a = a.broadcast([n,m,k], dims=[2])
    b = b.broadcast([n,m,k], dims=[0])
    return (a*b).sum(dim=1)
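As a quick numeric check of the meta-operator matmul above (a sketch; it assumes Jittor is installed and compares the result against NumPy's matrix product):

import numpy as np
import jittor as jt

a = jt.array(np.random.rand(4, 5).astype("float32"))
b = jt.array(np.random.rand(5, 3).astype("float32"))
print(np.allclose(matmul(a, b).numpy(), a.numpy() @ b.numpy(), atol=1e-5))  # should print True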
This article shows how to implement convolution with meta-operators.
First, implement a naive convolution in Python:
import numpy as np
import os

def conv_naive(x, w):
    N,H,W,C = x.shape
    Kh, Kw, _C, Kc = w.shape
    assert C==_C, (x.shape, w.shape)
    y = np.zeros([N,H-Kh+1,W-Kw+1,Kc])
    for i0 in range(N):
        for i1 in range(H-Kh+1):
            for i2 in range(W-Kw+1):
                for i3 in range(Kh):
                    for i4 in range(Kw):
                        for i5 in range(C):
                            for i6 in range(Kc):
                                # i1+i3 <= H-1 and i2+i4 <= W-1 for a valid convolution,
                                # so no bounds check is needed here
                                y[i0, i1, i2, i6] += x[i0, i1 + i3, i2 + i4, i5] * w[i3,i4,i5,i6]
    return y
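Before moving on to the image example, here is a small sanity check of conv_naive against a NumPy reference built from sliding windows (a sketch with made-up test tensors xt and wt; np.lib.stride_tricks.sliding_window_view requires NumPy >= 1.20):

xt = np.random.rand(2, 8, 8, 3).astype("float32")
wt = np.random.rand(3, 3, 3, 4).astype("float32")
# windows over H and W: shape [N, H-Kh+1, W-Kw+1, C, Kh, Kw]
win = np.lib.stride_tricks.sliding_window_view(xt, (3, 3), axis=(1, 2))
ref = np.einsum("nhwcij,ijco->nhwo", win, wt)
print(np.allclose(conv_naive(xt, wt), ref, atol=1e-5))  # should print True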

Download an image of a cat and apply a simple horizontal edge filter with conv_naive.

%matplotlib inline

import pylab as pl
img_path = "/tmp/cat.jpg"
if not os.path.isfile(img_path):
    !wget -O - 'https://upload.wikimedia.org/wikipedia/commons/thumb/4/4f/Felis_silvestris_catus_lying_on_rice_straw.jpg/220px-Felis_silvestris_catus_lying_on_rice_straw.jpg' > $img_path

img = pl.imread(img_path)
pl.subplot(121)
pl.imshow(img)
kernel = np.array([
    [-1, -1, -1],
    [0, 0, 0],
    [1, 1, 1],
])
pl.subplot(122)
x = img[np.newaxis,:,:,:1].astype("float32")
w = kernel[:,:,np.newaxis,np.newaxis].astype("float32")
y = conv_naive(x, w)
print(x.shape, y.shape) # the output is smaller than the input (valid convolution)
pl.imshow(y[0,:,:,0])
conv_naive works well. Now replace the naive implementation with Jittor.
import jittor as jt

def conv(x, w):
    N,H,W,C = x.shape
    Kh, Kw, _C, Kc = w.shape
    assert C==_C
    xx = x.reindex([N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc], [
        'i0',    # Nid
        'i1+i3', # Hid+Khid
        'i2+i4', # Wid+Kwid
        'i5',    # Cid
    ])
    ww = w.broadcast_var(xx)
    yy = xx*ww
    y = yy.sum([3,4,5]) # reduce over Kh, Kw, C
    return y

Let's disable the tuner first. This prevents Jittor from using MKL for the convolution:

jt.flags.enable_tuner = 0

jx = jt.array(x)
jw = jt.array(w)
jy = conv(jx, jw).fetch_sync()
print (jx.shape, jy.shape)
pl.imshow(jy[0,:,:,0])
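Beyond the visual comparison, a quick numeric check (a small sketch reusing y from conv_naive and jy from the Jittor version above) can confirm that the two implementations agree:

print(np.allclose(y, jy, atol=1e-3))  # should print True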
The results look the same. How about the performance?
%time y = conv_naive(x, w)
%time jy = conv(jx, jw).fetch_sync()
As you can see, the Jittor implementation is much faster. Why does the Jittor version run so much faster when the two implementations are mathematically equivalent? The explanation proceeds step by step.
First, take a look at the help documentation of jt.reindex.
help(jt.reindex)
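Before expanding the convolution, a tiny, hypothetical reindex example (not from the original tutorial) may help: each index expression describes, for every output element, which input element to read, so a transpose can be written as a pure reindex:

a = jt.array(np.arange(6, dtype="float32").reshape(2, 3))
# out[i0, i1] = a[i1, i0], i.e. a transpose expressed as a reindex
at = a.reindex([3, 2], ['i1', 'i0'])
print(at.numpy())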
The reindex-based convolution above can be expanded as follows for a better understanding:
xx = x.reindex([N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc], [
    'i0',    # Nid
    'i1+i3', # Hid+Khid
    'i2+i4', # Wid+Kwid
    'i5',    # Cid
])
ww = w.broadcast_var(xx)
yy = xx*ww
y = yy.sum([3,4,5]) # reduce over Kh, Kw, C
After expansion:
shape = [N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc]

# expansion of x.reindex
xx = np.zeros(shape, x.dtype)
for i0 in range(shape[0]):
    for i1 in range(shape[1]):
        for i2 in range(shape[2]):
            for i3 in range(shape[3]):
                for i4 in range(shape[4]):
                    for i5 in range(shape[5]):
                        for i6 in range(shape[6]):
                            if is_overflow(i0,i1,i2,i3,i4,i5,i6):
                                xx[i0,i1,i2,i3,i4,i5,i6] = 0
                            else:
                                xx[i0,i1,i2,i3,i4,i5,i6] = x[i0,i1+i3,i2+i4,i5]

# expansion of w.broadcast_var(xx)
ww = np.zeros(shape, x.dtype)
for i0 in range(shape[0]):
    for i1 in range(shape[1]):
        for i2 in range(shape[2]):
            for i3 in range(shape[3]):
                for i4 in range(shape[4]):
                    for i5 in range(shape[5]):
                        for i6 in range(shape[6]):
                            ww[i0,i1,i2,i3,i4,i5,i6] = w[i3,i4,i5,i6]

# expansion of xx*ww
yy = np.zeros(shape, x.dtype)
for i0 in range(shape[0]):
    for i1 in range(shape[1]):
        for i2 in range(shape[2]):
            for i3 in range(shape[3]):
                for i4 in range(shape[4]):
                    for i5 in range(shape[5]):
                        for i6 in range(shape[6]):
                            yy[i0,i1,i2,i3,i4,i5,i6] = xx[i0,i1,i2,i3,i4,i5,i6] * ww[i0,i1,i2,i3,i4,i5,i6]

# expansion of yy.sum([3,4,5])
shape2 = [N,H-Kh+1,W-Kw+1,Kc]
y = np.zeros(shape2, x.dtype)
for i0 in range(shape[0]):
    for i1 in range(shape[1]):
        for i2 in range(shape[2]):
            for i3 in range(shape[3]):
                for i4 in range(shape[4]):
                    for i5 in range(shape[5]):
                        for i6 in range(shape[6]):
                            y[i0,i1,i2,i6] += yy[i0,i1,i2,i3,i4,i5,i6]
After loop fusion:
shape2 = [N,H-Kh+1,W-Kw+1,Kc]
y = np.zeros(shape2, x.dtype)
for i0 in range(shape[0]):
    for i1 in range(shape[1]):
        for i2 in range(shape[2]):
            for i3 in range(shape[3]):
                for i4 in range(shape[4]):
                    for i5 in range(shape[5]):
                        for i6 in range(shape[6]):
                            if not is_overflow(i0,i1,i2,i3,i4,i5,i6):
                                y[i0,i1,i2,i6] += x[i0,i1+i3,i2+i4,i5] * w[i3,i4,i5,i6]
This is the optimization trick behind meta-operators: multiple operators can be fused into a single complex fused operator, which covers many variants of convolution (e.g., group convolution, separable convolution, and so on).
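For instance, a depthwise convolution (one filter per input channel) can be composed from the same three meta-operators. The sketch below is illustrative only and assumes a weight tensor of shape [Kh, Kw, C]:

def conv_depthwise(x, w):
    # x: [N, H, W, C], w: [Kh, Kw, C] -- one filter per input channel
    N, H, W, C = x.shape
    Kh, Kw, _C = w.shape
    assert C == _C
    xx = x.reindex([N, H-Kh+1, W-Kw+1, Kh, Kw, C], [
        'i0',    # Nid
        'i1+i3', # Hid+Khid
        'i2+i4', # Wid+Kwid
        'i5',    # Cid
    ])
    ww = w.broadcast([N, H-Kh+1, W-Kw+1, Kh, Kw, C], dims=[0, 1, 2])
    return (xx*ww).sum([3, 4])  # reduce over Kh, Kw only, keeping one output per channel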
Jittor tries to optimize the fused operator to be as fast as possible. Let's try one such optimization (compiling the shapes into the kernel as constants) and inspect the underlying C++ kernel code it generates:
jt.flags.compile_options = {"compile_shapes": 1}
with jt.profile_scope() as report:
    jy = conv(jx, jw).fetch_sync()
jt.flags.compile_options = {}

print(f"Time: {float(report[1][4])/1e6}ms")

with open(report[1][1], 'r') as f:
    print(f.read())
Even faster than the previous run! In the output, take a look at the definition of func0: this is the main code of the convolution kernel, and it is generated just in time. Because the compiler knows the shapes inside the kernel, it can apply more optimizations.
This is only a simple demonstration of how meta-operators are used in Jittor, not a real performance benchmark, so a fairly small data size was used. For a proper performance test, turn on jt.flags.enable_tuner = 1, which enables acceleration with dedicated hardware libraries.
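For example, to time the same convolution again with the tuner enabled (a small sketch reusing conv, jx, and jw from above):

jt.flags.enable_tuner = 1
%time jy = conv(jx, jw).fetch_sync()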