
Adjusting the config.py parameters in py-faster-rcnn


In both sampling stages of Faster R-CNN (the RPN and the detection head), negative samples take part in classification (as the background class) but not in bounding-box regression.

# Scale the shortest side to 600 pixels
__C.TRAIN.SCALES = (600,)
 
# The longest side may be at most 1000 pixels
__C.TRAIN.MAX_SIZE = 1000
 
# A minibatch contains 2 images by default, but the yaml config used at run time changes this to 1, and it can only be 1
__C.TRAIN.IMS_PER_BATCH = 2
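
As a rough illustration of how TRAIN.SCALES and TRAIN.MAX_SIZE interact, the helper below mirrors the short-side/long-side resizing logic used when images are prepared for training. This is a minimal sketch: compute_scale is an illustrative name, not a function from the library.

import numpy as np

def compute_scale(im_shape, target_size=600, max_size=1000):
    """Return the resize factor: shortest side -> target_size, but never
    let the longest side exceed max_size."""
    im_min = np.min(im_shape[0:2])
    im_max = np.max(im_shape[0:2])
    scale = float(target_size) / float(im_min)
    # If scaling the short side to 600 would push the long side past 1000,
    # scale by the long side instead.
    if np.round(scale * im_max) > max_size:
        scale = float(max_size) / float(im_max)
    return scale

print(compute_scale((375, 500)))   # 1.6  -> resized to 600 x 800
print(compute_scale((500, 1500)))  # ~0.667, long side capped at 1000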
 -------------------------------------------------------------------------
# This part covers the parameters used by ProposalTargetCreator


# 1. From the 2000 RoIs left after NMS on the RPN output, 128 sample_rois are selected and
#    their regression targets are normalized; these are what the detection head is trained on.
# 2. Some RoIs whose IoU with a gt_bbox is greater than 0.5 are chosen as foreground (128 * 0.25 = 32).
#    **Note: if fewer than 32 qualify, e.g. only 20 RoIs have IoU > 0.5, all of the remaining 108 slots are filled with negatives.**
#    **Also note: the location-regression loss is computed only for positive RoIs, i.e. those whose IoU with a gt box exceeds 0.5
#    (configurable via BBOX_THRESH, which largely duplicates the selection condition above), and only for the 4 regression
#    parameters of that RoI's own class; negatives contribute no regression loss.**
# 3. RoIs whose IoU with the gt_bboxes is low, in [0 (or 0.1), 0.5), are sampled as negatives (e.g. 128 minus the number of positives).

# Minibatch size, i.e. the number of RoIs; if your images contain many objects, this can be set somewhat larger
__C.TRAIN.BATCH_SIZE = 128

# Fraction of the minibatch that is foreground; negative samples take part only in classification
__C.TRAIN.FG_FRACTION = 0.25

# Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)
# A RoI with overlap >= 0.5 against the ground truth is treated as foreground. If you have customized your anchors so that
# there are many of them (e.g. cars are 0-128 pixels long and the anchor sizes run 4, 14, 24, 34, ..., 128), this IoU can be
# set higher, which helps convergence speed and accuracy.
__C.TRAIN.FG_THRESH = 0.5

# A RoI whose overlap with the ground truth lies in [0.1, 0.5) is treated as background
__C.TRAIN.BG_THRESH_HI = 0.5
__C.TRAIN.BG_THRESH_LO = 0.1

# Use horizontally-flipped images during training?
# Horizontal flipping augments the training data
__C.TRAIN.USE_FLIPPED = True

# Train bounding-box regressors
__C.TRAIN.BBOX_REG = True

# Overlap required between a ROI and ground-truth box in order for that ROI to
# be used as a bounding-box regression training example
# If you adjusted FG_THRESH above, adjust this threshold accordingly
__C.TRAIN.BBOX_THRESH = 0.5

# Iterations between snapshots
# A snapshot is written every 10000 iterations
__C.TRAIN.SNAPSHOT_ITERS = 10000

# Optional infix added to the snapshot file name; solver.prototxt specifies the snapshot name prefix
__C.TRAIN.SNAPSHOT_INFIX = ''

# Use a prefetch thread in roi_data_layer.layer; the author found it not very effective, so it is set to False
__C.TRAIN.USE_PREFETCH = False

# Normalize the targets (subtract empirical mean, divide by empirical stddev)
__C.TRAIN.BBOX_NORMALIZE_TARGETS = True

# Deprecated (inside weights)
__C.TRAIN.BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)

# Normalize the targets using "precomputed" (or made up) means and stdevs
# (BBOX_NORMALIZE_TARGETS must also be True)
__C.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED = False
__C.TRAIN.BBOX_NORMALIZE_MEANS = (0.0, 0.0, 0.0, 0.0)
__C.TRAIN.BBOX_NORMALIZE_STDS = (0.1, 0.1, 0.2, 0.2)
 --------------------------------------------------------------------------
# Train using these proposals
# Training uses 'selective_search' proposals by default. Note this config originates from Fast R-CNN; the RPN settings appear further below.
__C.TRAIN.PROPOSAL_METHOD = 'selective_search'
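
The sampling that the ProposalTargetCreator notes above describe can be sketched as follows. This is a simplified illustration: sample_rois is a hypothetical helper, and max_overlaps is assumed to be each RoI's maximum IoU with any ground-truth box.

import numpy as np

def sample_rois(max_overlaps, batch_size=128, fg_fraction=0.25,
                fg_thresh=0.5, bg_thresh_hi=0.5, bg_thresh_lo=0.1):
    """Pick foreground/background RoI indices as described by the thresholds above."""
    fg_inds = np.where(max_overlaps >= fg_thresh)[0]
    bg_inds = np.where((max_overlaps < bg_thresh_hi) &
                       (max_overlaps >= bg_thresh_lo))[0]
    # At most fg_fraction * batch_size foregrounds; if fewer RoIs qualify,
    # the rest of the minibatch is filled with backgrounds.
    fg_per_image = min(int(round(fg_fraction * batch_size)), fg_inds.size)
    if fg_inds.size > 0:
        fg_inds = np.random.choice(fg_inds, size=fg_per_image, replace=False)
    bg_per_image = batch_size - fg_per_image
    if bg_inds.size > 0:
        # Sample with replacement if there are not enough background RoIs.
        bg_inds = np.random.choice(bg_inds, size=bg_per_image,
                                   replace=bg_inds.size < bg_per_image)
    return fg_inds, bg_inds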
# Make minibatches from images that have similar aspect ratios (i.e. both
# tall and thin or both short and wide) in order to avoid wasting computation
# on zero-padding.
# The two images in a minibatch should have similar aspect ratios so that no work is wasted on zero-padding
__C.TRAIN.ASPECT_GROUPING = True

# Use RPN to detect objects
__C.TRAIN.HAS_RPN = False

# IOU >= thresh: positive example
__C.TRAIN.RPN_POSITIVE_OVERLAP = 0.7

# IOU < thresh: negative example
__C.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3

# If an anchor satisfies both the positive and the negative condition, it is set to negative (rarely relevant in practice)
__C.TRAIN.RPN_CLOBBER_POSITIVES = False

# Max number of foreground examples
# Fraction of the RPN batch that is foreground
__C.TRAIN.RPN_FG_FRACTION = 0.5

# Total number of examples (RPN batch size)
__C.TRAIN.RPN_BATCHSIZE = 256

# NMS threshold used on RPN proposals
__C.TRAIN.RPN_NMS_THRESH = 0.7

# Number of top scoring boxes to keep before applying NMS to RPN proposals
__C.TRAIN.RPN_PRE_NMS_TOP_N = 12000

# Number of top scoring boxes to keep after applying NMS to RPN proposals
__C.TRAIN.RPN_POST_NMS_TOP_N = 2000

# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig image scale)
# Otherwise, mapped onto conv5 the proposal covers less than one pixel
__C.TRAIN.RPN_MIN_SIZE = 16

# Deprecated (outside weights)
__C.TRAIN.RPN_BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)

# Give the positive RPN examples weight of p * 1 / {num positives}
# and give negatives a weight of (1 - p)
# Set to -1.0 to use uniform example weighting, i.e. positives and negatives get the same weight here
__C.TRAIN.RPN_POSITIVE_WEIGHT = -1.0

#
# Testing options
#

__C.TEST = edict()

# Scales to use during testing (can list multiple scales)
# Each scale is the pixel size of an image's shortest side
__C.TEST.SCALES = (600,)

# Max pixel size of the longest side of a scaled input image
__C.TEST.MAX_SIZE = 1000

# Overlap threshold used for non-maximum suppression (suppress boxes with
# IoU >= this threshold)
__C.TEST.NMS = 0.3

# Experimental: treat the (K+1) units in the cls_score layer as linear
# predictors (trained, eg, with one-vs-rest SVMs).
# Classification no longer uses SVMs, so this stays False
__C.TEST.SVM = False

# Test using bounding-box regressors
__C.TEST.BBOX_REG = True

# Propose boxes
# Proposals are not generated by an RPN by default
__C.TEST.HAS_RPN = False

# Test using these proposals
# Use selective_search proposals at test time
__C.TEST.PROPOSAL_METHOD = 'selective_search'

## NMS threshold used on RPN proposals
__C.TEST.RPN_NMS_THRESH = 0.7
## Number of top scoring boxes to keep before applying NMS to RPN proposals
__C.TEST.RPN_PRE_NMS_TOP_N = 6000
## Number of top scoring boxes to keep after applying NMS to RPN proposals
__C.TEST.RPN_POST_NMS_TOP_N = 300
# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig image scale)
__C.TEST.RPN_MIN_SIZE = 16
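
The RPN_PRE_NMS_TOP_N / RPN_NMS_THRESH / RPN_POST_NMS_TOP_N / RPN_MIN_SIZE settings above are applied in a fixed order inside the proposal layer. The sketch below shows that order only; filter_proposals is an illustrative helper, and nms stands in for whatever NMS routine is available.

import numpy as np

def filter_proposals(boxes, scores, nms,
                     pre_nms_top_n=12000, post_nms_top_n=2000,
                     nms_thresh=0.7, min_size=16):
    """Apply the RPN proposal filtering steps in the order the config describes."""
    # 1. Drop proposals whose width or height (at the original image scale)
    #    is below RPN_MIN_SIZE.
    ws = boxes[:, 2] - boxes[:, 0] + 1
    hs = boxes[:, 3] - boxes[:, 1] + 1
    keep = np.where((ws >= min_size) & (hs >= min_size))[0]
    boxes, scores = boxes[keep], scores[keep]
    # 2. Keep the RPN_PRE_NMS_TOP_N highest-scoring boxes.
    order = scores.ravel().argsort()[::-1][:pre_nms_top_n]
    boxes, scores = boxes[order], scores[order]
    # 3. NMS with RPN_NMS_THRESH, then keep RPN_POST_NMS_TOP_N survivors.
    keep = nms(np.hstack((boxes, scores.reshape(-1, 1))), nms_thresh)[:post_nms_top_n]
    return boxes[keep], scores[keep]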
#
# MISC
#

# The mapping from image coordinates to feature map coordinates might cause
# some boxes that are distinct in image space to become identical in feature
# coordinates. If DEDUP_BOXES > 0, then DEDUP_BOXES is used as the scale factor
# for identifying duplicate boxes.
# 1/16 is correct for {Alex,Caffe}Net, VGG_CNN_M_1024, and VGG16
__C.DEDUP_BOXES = 1./16.

# Pixel mean values (BGR order) as a (1, 1, 3) array
# We use the same pixel mean for all networks even though it's not exactly what
# they were trained with
__C.PIXEL_MEANS = np.array([[[102.9801, 115.9465, 122.7717]]])

# For reproducibility
__C.RNG_SEED = 3

# A small number that's used many times
__C.EPS = 1e-14

# Root directory of project
__C.ROOT_DIR = osp.abspath(osp.join(osp.dirname(__file__), '..', '..'))

# Data directory
__C.DATA_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'data'))

# Model directory
__C.MODELS_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'models', 'pascal_voc'))

# Name (or path to) the matlab executable
__C.MATLAB = 'matlab'

# Place outputs under an experiments directory
__C.EXP_DIR = 'default'

# Use GPU implementation of non-maximum suppression
__C.USE_GPU_NMS = True

# Default GPU device id
__C.GPU_ID = 0

def get_output_dir(imdb, net=None):
    # Returns the output directory, located under the experiments path
    """Return the directory where experimental artifacts are placed.
    If the directory does not exist, it is created.

    A canonical path is built using the name from an imdb and a network
    (if not None).
    """
    outdir = osp.abspath(osp.join(__C.ROOT_DIR, 'output', __C.EXP_DIR, imdb.name))
    if net is not None:
        outdir = osp.join(outdir, net.name)
    if not os.path.exists(outdir):
        os.makedirs(outdir)
    return outdir

def _merge_a_into_b(a, b):
    # Merge two config dictionaries
    """Merge config dictionary a into config dictionary b, clobbering the
    options in b whenever they are also specified in a.
    """
    if type(a) is not edict:
        return

    for k, v in a.iteritems():
        # a must specify keys that are in b
        if not b.has_key(k):
            raise KeyError('{} is not a valid config key'.format(k))

        # the types must match, too
        old_type = type(b[k])
        if old_type is not type(v):
            if isinstance(b[k], np.ndarray):
                v = np.array(v, dtype=b[k].dtype)
            else:
                raise ValueError(('Type mismatch ({} vs. {}) '
                                  'for config key: {}').format(type(b[k]),
                                                               type(v), k))

        # recursively merge dicts
        if type(v) is edict:
            try:
                _merge_a_into_b(a[k], b[k])
            except:
                print('Error under config key: {}'.format(k))
                raise
        # otherwise update the corresponding entry of b with the value from a
        else:
            b[k] = v

def cfg_from_file(filename):
    """Load a config file and merge it into the default options."""
    # Load the yaml config file and merge it with the defaults
    import yaml
    with open(filename, 'r') as f:
        yaml_cfg = edict(yaml.load(f))

    _merge_a_into_b(yaml_cfg, __C)

def cfg_from_list(cfg_list):
    # Set config keys from the command line
    """Set config keys via list (e.g., from command line)."""
    from ast import literal_eval
    assert len(cfg_list) % 2 == 0
    for k, v in zip(cfg_list[0::2], cfg_list[1::2]):
        key_list = k.split('.')
        d = __C
        for subkey in key_list[:-1]:
            assert d.has_key(subkey)
            d = d[subkey]
        subkey = key_list[-1]
        assert d.has_key(subkey)
        try:
            value = literal_eval(v)
        except:
            # handle the case when v is a string literal
            value = v
        assert type(value) == type(d[subkey]), \
            'type {} does not match original type {}'.format(
            type(value), type(d[subkey]))
        d[subkey] = value
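
As a usage note, the training scripts first merge a yaml file and then any command-line overrides. The yaml path is the one shipped with py-faster-rcnn; the key/value pairs are hypothetical examples of the pairing cfg_from_list expects (each key followed by its new value as a string):

# Override defaults from a yaml file, then from the command line (--set style).
cfg_from_file('experiments/cfgs/faster_rcnn_end2end.yml')
cfg_from_list(['TRAIN.SCALES', '(800,)',
               'TRAIN.RPN_POST_NMS_TOP_N', '1000'])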
  • Parameters in AnchorTargetCreator

  • From the 20000+ candidate anchors, 256 are selected for classification and location regression. The selection works as follows (a sketch follows this list):

  • For each ground-truth bounding box, the anchor with the highest IoU against it is taken as a positive sample.

  • From the remaining anchors, any anchor whose IoU with some ground-truth box exceeds 0.7 is taken as a positive sample; the number of positives is capped at 128.

  • Anchors whose IoU is below 0.3 are randomly sampled as negatives, so that positives and negatives together total 256.

  • For each anchor, gt_label is either 1 (foreground) or 0 (background), while gt_loc consists of the 4 location parameters (tx, ty, tw, th), which works better than regressing the coordinates directly.
  • The classification loss is cross-entropy, and the regression loss is Smooth_l1_loss. The regression loss is computed only for the positive samples (the up-to-128 foreground anchors); no location loss is computed for negatives.
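
A compact sketch of the anchor-labeling rules above. label_anchors is an illustrative helper; anchor_ious is assumed to be a (num_anchors, num_gt) IoU matrix, and -1 marks anchors that are ignored in the loss.

import numpy as np

def label_anchors(anchor_ious, pos_thresh=0.7, neg_thresh=0.3,
                  n_sample=256, pos_ratio=0.5):
    """Assign 1 (foreground), 0 (background) or -1 (ignored) to each anchor."""
    labels = np.full(anchor_ious.shape[0], -1, dtype=np.int32)
    max_ious = anchor_ious.max(axis=1)
    # Negative: max IoU with every gt box is below neg_thresh.
    labels[max_ious < neg_thresh] = 0
    # Positive (a): for each gt box, the anchor with the highest IoU.
    labels[anchor_ious.argmax(axis=0)] = 1
    # Positive (b): any anchor whose IoU with some gt box reaches pos_thresh.
    labels[max_ious >= pos_thresh] = 1
    # Subsample so positives <= n_sample * pos_ratio and the total is n_sample.
    n_pos = int(n_sample * pos_ratio)
    pos_inds = np.where(labels == 1)[0]
    if len(pos_inds) > n_pos:
        labels[np.random.choice(pos_inds, len(pos_inds) - n_pos, replace=False)] = -1
    n_neg = n_sample - np.sum(labels == 1)
    neg_inds = np.where(labels == 0)[0]
    if len(neg_inds) > n_neg:
        labels[np.random.choice(neg_inds, len(neg_inds) - n_neg, replace=False)] = -1
    return labels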