Experiment Log 2018-11-18
阿新 • Published 2018-12-22
1. First, I removed the corr layer from DispNet, since I am not yet sure the correlation layer's principle is sound. Instead I continue to concat the left and right images together as the input, and I am now training on the FlyingThings3D dataset to see how it performs.
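For reference, without a correlation layer the network's first convolution simply sees the stereo pair stacked along the channel axis. A minimal NumPy sketch of that input layout (the 384×768 size here is an illustrative placeholder, not FlyingThings3D's native resolution):

```python
import numpy as np

# Stereo pair as two 3-channel images in NHWC layout (batch, height, width, channels).
left = np.random.rand(1, 384, 768, 3).astype(np.float32)
right = np.random.rand(1, 384, 768, 3).astype(np.float32)

# Channel-wise concat: the network receives one 6-channel tensor
# instead of a correlation volume.
pair = np.concatenate([left, right], axis=-1)
print(pair.shape)  # (1, 384, 768, 6)
```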
2. I still think road detection should not be framed as a classification task: classification produces isolated, patch-by-patch results and easily yields discontinuous road predictions.
3. After studying several important papers, I wrote a new network modeled on them. The most important module is the following:
def layer(x, filter_out):
    # Inception-style module: two stacked multi-scale blocks with residual connections.
    ksize1x1 = 1
    ksize3x3 = 3
    ksize5x5 = 5
    ksize7x7 = 7
    stride = 1
    filter_out_1 = 1
    shortcut = x

    with tf.variable_scope('block_A'):
        # First stage: parallel convolutions at four kernel sizes.
        x1_conv_1x1 = conv_2d(x, ksize1x1, stride, filter_out)
        x1_conv_3x3 = conv_2d(x, ksize3x3, stride, filter_out)
        x1_conv_5x5 = conv_2d(x, ksize5x5, stride, filter_out)
        x1_conv_7x7 = conv_2d(x, ksize7x7, stride, filter_out)
        # Second stage: max-pool the 1x1 branch, 1x1 bottlenecks on the others.
        x2_conv_1x1 = Max_pooling(x1_conv_1x1, pool_size=[3, 3], stride=1, padding='SAME')
        x2_conv_3x3_1x1 = conv_2d(x1_conv_3x3, ksize1x1, stride, filter_out_1)
        x2_conv_5x5_1x1 = conv_2d(x1_conv_5x5, ksize1x1, stride, filter_out_1)
        x2_conv_7x7_1x1 = conv_2d(x1_conv_7x7, ksize1x1, stride, filter_out_1)
        # Merge the four branches along the channel axis.
        concat1 = tf.concat([x2_conv_1x1, x2_conv_3x3_1x1,
                             x2_conv_5x5_1x1, x2_conv_7x7_1x1], axis=3)
        concat1 = concat1 + shortcut  # residual add: channel counts must match
        shortcut = concat1
        concat1 = Relu6(concat1)

    with tf.variable_scope('block_B'):
        x1_conv_1x1 = conv_2d(concat1, ksize1x1, stride, filter_out)
        x1_conv_3x3 = conv_2d(concat1, ksize3x3, stride, filter_out)
        x1_conv_5x5 = conv_2d(concat1, ksize5x5, stride, filter_out)
        x1_conv_7x7 = conv_2d(concat1, ksize7x7, stride, filter_out)
        x2_conv_1x1 = Max_pooling(x1_conv_1x1, pool_size=[3, 3], stride=1, padding='SAME')
        x2_conv_3x3_1x1 = conv_2d(x1_conv_3x3, ksize1x1, stride, filter_out_1)
        x2_conv_5x5_1x1 = conv_2d(x1_conv_5x5, ksize1x1, stride, filter_out_1)
        x2_conv_7x7_1x1 = conv_2d(x1_conv_7x7, ksize1x1, stride, filter_out_1)
        concat2 = tf.concat([x2_conv_1x1, x2_conv_3x3_1x1,
                             x2_conv_5x5_1x1, x2_conv_7x7_1x1], axis=3)
        concat2 = concat2 + shortcut
        concat2 = Relu6(concat2)
    return concat2
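One constraint hidden in the module above: the residual add (`concat1 + shortcut`) only works when the concatenated output has the same channel count as the block's input. Assuming the last argument of `conv_2d` is the output-channel count, the pooling branch preserves `filter_out` channels and each of the three 1×1 bottlenecks emits `filter_out_1` channels, so the input must carry `filter_out + 3 * filter_out_1` channels. A quick sanity check of that arithmetic:

```python
def block_out_channels(filter_out, filter_out_1=1, n_bottlenecks=3):
    # The max-pool branch keeps the 1x1 conv's filter_out channels;
    # each 1x1 bottleneck branch contributes filter_out_1 channels.
    return filter_out + n_bottlenecks * filter_out_1

# With filter_out=32 the concat produces 35 channels, so the residual
# shortcut (the block's input) must also have 35 channels.
print(block_out_channels(32))  # 35
```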