Object Detection with Faster R-CNN
Reposted from: http://blog.csdn.net/Gavin__Zhou/article/details/52052915

Principles

Experiments
The code I used is the Python version of Faster R-CNN; there is also an official MATLAB version. (The Python repository link appears in the clone step below.)
Environment setup
Just follow the official README to set things up, but first check the hardware requirements:
For training smaller networks (ZF, VGG_CNN_M_1024) a good GPU (e.g., Titan, K20, K40, …) with at least 3G of memory suffices
For training Fast R-CNN with VGG16, you’ll need a K40 (~11G of memory)
For training the end-to-end version of Faster R-CNN with VGG16, 3G of GPU memory is sufficient (using CUDNN)
My environment is Ubuntu 14.04 + Titan X (12 GB) + CUDA 7.0 + cuDNN v3.
1 Caffe setup

Caffe needs Python layer support. Uncomment the following lines in your Caffe Makefile.config:

WITH_PYTHON_LAYER := 1
USE_CUDNN := 1
2 Install the Python dependencies

cython, opencv, and easydict:

pip install cython
pip install opencv-python
pip install easydict
3 Clone the py-faster-rcnn source
git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git
4 Build the Cython modules
cd $FRCN_ROOT/lib
make
5 Build Caffe and pycaffe
cd $FRCN_ROOT/caffe-fast-rcnn
make -j8 && make pycaffe
(-j8 simply runs make with 8 parallel jobs; adjust it to your CPU core count.)
Dataset

Follow the VOC2007 dataset format. It has three main parts:

JPEGImages —> holds the original images you train on
Annotations —> holds the coordinates of the objects in each image, one XML file per image
ImageSets/Main —> holds the lists of image IDs used for train, trainval, val, and test
This part is critical: a badly built dataset makes the code throw exceptions, refuse to run, or fail in strange ways. I fell into plenty of pits here and only wrote this post after climbing out of them, so I hope you can avoid the muddy water I waded through. I will go through each part in detail.
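For convenience, here is a minimal sketch of my own (not from the original post) that creates this skeleton under the path used later in this post:

import os

# Create the VOC2007-style skeleton that py-faster-rcnn expects.
# Raises if the directories already exist.
root = 'py-faster-rcnn/data/VOCdevkit2007/VOC2007'
for sub in ('JPEGImages', 'Annotations', 'ImageSets/Main'):
    os.makedirs(os.path.join(root, sub))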
JPEGImages

Nothing special here; just drop your images in. Three caveats:

Name the files with six-digit numbers, e.g. 000034.jpg
Use JPEG/JPG format; convert PNG and the like yourself
Keep the aspect ratio (width/height) between 0.462 and 6.828, i.e. avoid overly elongated images

The 0.462 to 6.828 range is what I measured on my own dataset, so it only strictly applies to my data; in any case, remove images whose aspect ratio is extreme, or you may hit the error I ran into during my experiments:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "./tools/train_faster_rcnn_alt_opt.py", line 130, in train_rpn
    max_iters=max_iters)
  File "/home/work-station/zx/py-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 160, in train_net
    model_paths = sw.train_model(max_iters)
  File "/home/work-station/zx/py-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 101, in train_model
    self.solver.step(1)
  File "/home/work-station/zx/py-faster-rcnn/tools/../lib/rpn/anchor_target_layer.py", line 137, in forward
    gt_argmax_overlaps = overlaps.argmax(axis=0)
ValueError: attempt to get argmax of an empty sequence
The explanation I found via Google: "Because the ratio of images width and heights is too small or large". This check really matters.
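Since the safe range depends on your data, a small screening pass saves a crashed training run. This is my own sketch; the 0.462 and 6.828 thresholds are the empirical ones from above:

import os
from PIL import Image

# Flag images whose width/height ratio falls outside the empirical
# safe range; remove or crop the offenders before training.
img_dir = 'py-faster-rcnn/data/VOCdevkit2007/VOC2007/JPEGImages'
for name in sorted(os.listdir(img_dir)):
    w, h = Image.open(os.path.join(img_dir, name)).size
    ratio = float(w) / h
    if ratio < 0.462 or ratio > 6.828:
        print('%s: aspect ratio %.3f, consider removing it' % (name, ratio))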
Annotations

Faster R-CNN training needs each image's bounding-box information as supervision (the ground truth), so every object of interest must be boxed and its coordinates written out into an XML file; each training image gets a same-named XML file under Annotations.

Use the official VOC Annotations format as a reference:
<annotation>
  <folder>VOC2007</folder> # dataset folder
  <filename>000105.jpg</filename> # image name
  <source> # metadata, optional
    <database>The VOC2007 Database</database>
    <annotation>PASCAL VOC2007</annotation>
    <image>flickr</image>
    <flickrid>321862192</flickrid>
  </source>
  <owner> # metadata, optional
    <flickrid>Eric T. Johnson</flickrid>
    <name>?</name>
  </owner>
  <size> # image size
    <width>500</width>
    <height>333</height>
    <depth>3</depth>
  </size>
  <segmented>0</segmented>
  <object> # one object tag per bounding box
    <name>boat</name> # class name of the object in this bounding box
    <pose>Frontal</pose>
    <truncated>1</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>22</xmin> # box coordinates
      <ymin>1</ymin>
      <xmax>320</xmax>
      <ymax>314</ymax>
    </bndbox>
  </object>
  <object>
    <name>person</name>
    <pose>Frontal</pose>
    <truncated>1</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>202</xmin>
      <ymin>71</ymin>
      <xmax>295</xmax>
      <ymax>215</ymax>
    </bndbox>
  </object>
  <object>
    <name>person</name>
    <pose>Frontal</pose>
    <truncated>1</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>170</xmin>
      <ymin>107</ymin>
      <xmax>239</xmax>
      <ymax>206</ymax>
    </bndbox>
  </object>
</annotation>
There is a very handy VOC labeling tool that generates the required XML for you. In practice the output format is almost right, with only minor spots to tweak; compare against the reference above and you will see what to change. I did the tweaks on Linux with sed, which is straightforward.
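The same tweaks are just as easy in Python. A sketch of my own (which fields need fixing depends on your labeling tool; lowercasing the class names also heads off the KeyError discussed later):

import glob
import xml.etree.ElementTree as ET

# Normalize annotation XMLs in place: force <folder> to VOC2007 and
# lowercase every object <name> (pascal_voc.py lowercases labels on
# load, so uppercase names raise KeyError). Adjust to your tool's quirks.
for path in glob.glob('Annotations/*.xml'):
    tree = ET.parse(path)
    root = tree.getroot()
    folder = root.find('folder')
    if folder is not None:
        folder.text = 'VOC2007'
    for obj in root.findall('object'):
        name = obj.find('name')
        name.text = name.text.lower().strip()
    tree.write(path)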
ImageSets/Main

Because the VOC dataset serves many CV tasks (object detection, semantic segmentation, edge detection, and so on), ImageSets has several subfolders (Layout, Main, Segmentation). We only need to edit the files under Main (train.txt, trainval.txt, val.txt, test.txt), writing into each the IDs of the images for that split; see the sketch after this paragraph.
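As a convenience, the split files can be generated like this (my sketch, not from the original post; the 80/10/10 split is an arbitrary example):

import os
import random

# Build the ImageSets/Main split files from the image IDs in JPEGImages.
voc = 'py-faster-rcnn/data/VOCdevkit2007/VOC2007'
ids = [f[:-4] for f in os.listdir(os.path.join(voc, 'JPEGImages'))
       if f.endswith('.jpg')]
random.shuffle(ids)
n = len(ids)
splits = {'train': ids[:int(0.8 * n)],
          'val': ids[int(0.8 * n):int(0.9 * n)],
          'test': ids[int(0.9 * n):]}
splits['trainval'] = splits['train'] + splits['val']
for name in splits:
    with open(os.path.join(voc, 'ImageSets/Main', name + '.txt'), 'w') as f:
        f.write('\n'.join(sorted(splits[name])) + '\n')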
Put the dataset described above under py-faster-rcnn/data/VOCdevkit2007/VOC2007, replacing the original VOC2007 JPEGImages, ImageSets, and Annotations.
Code changes

Project layout

caffe-fast-rcnn —> the Caffe framework
data —> datasets and the cached files read at runtime
experiments —> configuration files and run logs
lib —> the Python interfaces
models —> the three models: ZF (S) / VGG_CNN_M_1024 (M) / VGG16 (L)
output —> where trained models are written; absent until you train
tools —> the Python scripts for training and testing
Modifying the source

Faster R-CNN offers two training schemes:

Alternating training (alt-opt)
Approximate joint training (end-to-end)

The second is recommended: it uses less GPU memory, trains faster, and reaches about the same accuracy. The two schemes require different code changes. Faster R-CNN also provides three models: the small ZF, the medium VGG_CNN_M_1024, and the large VGG16. The paper reports that VGG16 outperforms the other two, but it also needs much more GPU memory (~11 GB).

I used the VGG16 model with alternating training. I only need to detect one class, plus the background, so there are two classes in total (background + captcha). All the num_output edits below follow from this count.
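Concretely, a tiny sketch of the arithmetic (my addition), using my single class as the example:

# cls_score outputs one score per class (including __background__);
# bbox_pred outputs 4 box-regression values per class.
classes = ('__background__', 'captcha')
num_classes = len(classes)        # 2 -> num_output of cls_score
bbox_outputs = 4 * num_classes    # 8 -> num_output of bbox_pred
print('cls_score: %d, bbox_pred: %d' % (num_classes, bbox_outputs))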
1 py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/stage1_fast_rcnn_train.pt
layer {
  name: 'data'
  type: 'Python'
  top: 'data'
  top: 'rois'
  top: 'labels'
  top: 'bbox_targets'
  top: 'bbox_inside_weights'
  top: 'bbox_outside_weights'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 2" # change for your dataset: number of classes + 1
  }
}
layer {
  name: "cls_score"
  type: "InnerProduct"
  bottom: "fc7"
  top: "cls_score"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 2 # change for your dataset: number of classes + 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bbox_pred"
  type: "InnerProduct"
  bottom: "fc7"
  top: "bbox_pred"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 8 # change for your dataset: (number of classes + 1) * 4, four box coordinates per class
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
2 py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/stage1_rpn_train.pt
layer {
  name: 'input-data'
  type: 'Python'
  top: 'data'
  top: 'im_info'
  top: 'gt_boxes'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 2" # change for your dataset: number of classes + 1
  }
}
3 py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/stage2_fast_rcnn_train.pt
layer {
  name: 'data'
  type: 'Python'
  top: 'data'
  top: 'rois'
  top: 'labels'
  top: 'bbox_targets'
  top: 'bbox_inside_weights'
  top: 'bbox_outside_weights'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 2" # change for your dataset: number of classes + 1
  }
}
layer {
  name: "cls_score"
  type: "InnerProduct"
  bottom: "fc7"
  top: "cls_score"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 2 # change for your dataset: number of classes + 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bbox_pred"
  type: "InnerProduct"
  bottom: "fc7"
  top: "bbox_pred"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 8 # change for your dataset: (number of classes + 1) * 4, four box coordinates per class
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
4 py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/stage2_rpn_train.pt
layer {
  name: 'input-data'
  type: 'Python'
  top: 'data'
  top: 'im_info'
  top: 'gt_boxes'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 2" # change for your dataset: number of classes + 1
  }
}
5 py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/faster_rcnn_test.pt
layer {
  name: "cls_score"
  type: "InnerProduct"
  bottom: "fc7"
  top: "cls_score"
  inner_product_param {
    num_output: 2 # change for your dataset: number of classes + 1
  }
}

The bbox_pred layer in this file likewise has a num_output; set it to (number of classes + 1) * 4 in the same way.
6 py-faster-rcnn/lib/datasets/pascal_voc.py
class pascal_voc(imdb):
    def __init__(self, image_set, year, devkit_path=None):
        imdb.__init__(self, 'voc_' + year + '_' + image_set)
        self._year = year
        self._image_set = image_set
        self._devkit_path = self._get_default_path() if devkit_path is None \
                            else devkit_path
        self._data_path = os.path.join(self._devkit_path, 'VOC' + self._year)
        self._classes = ('__background__', # always index 0
                         'captcha' # list as many class names as you have; I have just this one plus background
                        )
line 212
cls = self._class_to_ind[obj.find('name').text.lower().strip()]
If your labels contain uppercase letters you may hit a KeyError here (the loader lowercases every name), so stick to lowercase labels throughout.
7 py-faster-rcnn/lib/datasets/imdb.py
Change the append_flipped_images function to the following:
def append_flipped_images(self):
    num_images = self.num_images
    widths = [PIL.Image.open(self.image_path_at(i)).size[0]
              for i in xrange(num_images)]
    for i in xrange(num_images):
        boxes = self.roidb[i]['boxes'].copy()
        oldx1 = boxes[:, 0].copy()
        oldx2 = boxes[:, 2].copy()
        # mirror the x-coordinates around the image width
        boxes[:, 0] = widths[i] - oldx2 - 1
        print boxes[:, 0]  # debug output: flipped xmin values
        boxes[:, 2] = widths[i] - oldx1 - 1
        print boxes[:, 0]  # debug output
        assert (boxes[:, 2] >= boxes[:, 0]).all()
        entry = {'boxes' : boxes,
                 'gt_overlaps' : self.roidb[i]['gt_overlaps'],
                 'gt_classes' : self.roidb[i]['gt_classes'],
                 'flipped' : True}
        self.roidb.append(entry)
    self._image_index = self._image_index * 2
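One extra note from my side: the assert above typically fires when an annotation has xmin = 0, because VOC coordinates are 1-based and pascal_voc.py subtracts 1 on load into an unsigned array, so the value underflows. A quick sketch to find such boxes ahead of time, assuming VOC-style XMLs:

import glob
import xml.etree.ElementTree as ET

# Report boxes whose xmin would underflow after the "- 1" on load
# and trip the flip assert; fix them to be >= 1 in the XML.
for path in glob.glob('Annotations/*.xml'):
    root = ET.parse(path).getroot()
    for obj in root.findall('object'):
        xmin = int(obj.find('bndbox').find('xmin').text)
        if xmin < 1:
            print('%s: xmin=%d needs fixing' % (path, xmin))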
That's it for the code changes.

Training

A few things to check before training.

1 Cache issues

If you previously trained on the official VOC2007 data (or any other dataset), stale caches will trip you up; delete the following before training on new data:
(1) py-faster-rcnn/output
(2) py-faster-rcnn/data/cache
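Equivalently in Python (my sketch; paths as listed above):

import shutil

# Remove stale outputs and cached roidbs before switching datasets.
for d in ('py-faster-rcnn/output', 'py-faster-rcnn/data/cache'):
    shutil.rmtree(d, ignore_errors=True)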
2 Training parameters
py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/stage*_fast_rcnn_solver*.pt
base_lr: 0.001
lr_policy: 'step'
step_size: 30000
display: 20
....
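For reference, Caffe's 'step' policy scales the learning rate by gamma every step_size iterations, i.e. lr = base_lr * gamma ^ floor(iter / step_size); gamma is elided above, so this sketch of mine assumes the common value 0.1:

# Learning-rate schedule under lr_policy: 'step' (gamma assumed 0.1).
base_lr, gamma, step_size = 0.001, 0.1, 30000
for it in (0, 29999, 30000, 60000):
    print('iter %5d: lr = %g' % (it, base_lr * gamma ** (it // step_size)))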