Datasets for person re-id
GRID: http://personal.ie.cuhk.edu.hk/~ccloy/downloads_qmul_underground_reid.html
Person re-id results: http://www.ssig.dcc.ufmg.br/reid-results/
CUHK01: two camera views in a campus environment, 971 identities and 3,884 images in total. View A mainly captures the front and back of each person, while view B captures the side. Each identity has 4 images, 2 per view.
CUHK02: captured from 5 different outdoor camera pairs, 1,816 identities in total. The five camera pairs contain 971, 306, 107, 193 and 239 identities respectively, with images of size 160×60. Each person has two images per camera, taken at different times. Most people carry something (a backpack, handbag, waist pack or luggage).
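As a quick, purely illustrative sanity check on the numbers above, the per-pair identity counts of CUHK02 should sum to 1,816, and CUHK01's 971 identities with 2 images in each of 2 views give 3,884 images. The pair labels P1–P5 below are made up for illustration, not official names:

```python
# Sanity check of the statistics quoted above (illustrative only;
# the pair labels P1-P5 are assumptions, not official names).
cuhk02_pairs = {"P1": 971, "P2": 306, "P3": 107, "P4": 193, "P5": 239}
assert sum(cuhk02_pairs.values()) == 1816   # CUHK02 total identities

# CUHK01: 971 identities, 2 views, 2 images per view.
assert 971 * 2 * 2 == 3884                  # CUHK01 total images
```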
CUHK03
VIPeR: 632 identities, 2 outdoor cameras, with large variations in pose, viewpoint and illumination. Each person has one image per camera at a size of 128×48. The provided viewing angles are 0° (front), 45°, 90° (right), 135° and 180° (back).
iLIDS-VID: captured in an airport arrival hall under surveillance and built from 2 non-overlapping camera views. 600 video sequences were randomly sampled for 300 people, so each person has a pair of sequences from the two views. Each sequence contains 23 to 192 frames, 73 on average. Similar clothing, lighting and viewpoint changes, cluttered backgrounds and severe occlusions make the dataset very challenging.
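Because the sequences vary in length (23 to 192 frames), video-based re-id pipelines usually sample a fixed number of frames per sequence. The snippet below is a minimal sketch of uniform temporal sampling; the function name and the choice of 8 frames are illustrative assumptions, not part of iLIDS-VID itself:

```python
import numpy as np

def sample_frames(sequence_length: int, num_samples: int = 8) -> np.ndarray:
    """Pick `num_samples` frame indices spread uniformly over a sequence.

    If the sequence is shorter than `num_samples`, indices repeat, so the
    caller always gets a fixed-length clip (a common convention, assumed here).
    """
    return np.linspace(0, sequence_length - 1, num_samples).round().astype(int)

# Example: an iLIDS-VID sequence of average length (73 frames).
print(sample_frames(73))   # [ 0 10 21 31 41 51 62 72]
```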
iLIDS: 479 images of 119 people, size 128×64, with about 4 images per person on average. There are large illumination changes and occlusions.
PRID 2011
3DPeS: collected from 8 non-overlapping outdoor cameras monitoring different parts of a campus. Unlike iLIDS and PRID, it provides the complete surveillance video sequences: 6 collections of video pairs at 15 frames/s and a resolution of 704×576, with 193 pedestrians in total.
Shinpuhkan: contains more than 22,000 images of only 24 pedestrians, captured from 16 camera views, so it provides rich intra-class variation.
GRID: The QMUL underGround Re-IDentification (GRID) dataset contains 250 pedestrian image pairs. Each pair contains two images of the same individual seen from different camera views. All images are captured from 8 disjoint camera views installed in a busy underground station. The dataset is challenging due to variations in pose, colour and lighting, as well as poor image quality caused by low spatial resolution.
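A dataset of image pairs like GRID is typically consumed as a probe/gallery split, with one image of each pair used as the query and the other as its true match. The sketch below assumes a hypothetical file layout (probe/<id>.jpg and gallery/<id>.jpg); the actual GRID release uses its own naming scheme, so treat this only as an illustration of the pairing logic:

```python
from pathlib import Path

def build_pairs(root: str) -> list[tuple[Path, Path]]:
    """Pair each probe image with the gallery image of the same identity.

    Assumes a hypothetical layout: root/probe/<id>.jpg and root/gallery/<id>.jpg.
    """
    root_path = Path(root)
    gallery = {p.stem: p for p in (root_path / "gallery").glob("*.jpg")}
    pairs = []
    for probe in sorted((root_path / "probe").glob("*.jpg")):
        if probe.stem in gallery:       # GRID has 250 matched pairs
            pairs.append((probe, gallery[probe.stem]))
    return pairs
```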
CAVIAR4REID dataset
The original dataset, CAVIAR, consists of several sequences filmed in the entrance lobby of the INRIA Labs and in a shopping centre in Lisbon. We selected the shopping centre scenario because it is a less controlled recording and the cameras are better located (in the INRIA Labs scenario the camera is mounted overhead, which is not a typical re-identification setting). The shopping centre dataset contains 26 sequences recorded from two different points of view at a resolution of 384×288 pixels. It includes people walking alone, meeting others, window shopping, and entering and exiting shops. The ground truth was used to extract the bounding box of each pedestrian. We then manually selected a total of 72 pedestrians: 50 of them appear in both camera views and the remaining 22 in one camera view. For each pedestrian, we carefully selected a set of images for each camera view (where available) so as to maximize the variance with respect to resolution changes, lighting conditions, occlusions and pose changes, making the re-identification task challenging.
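Since 50 of the 72 pedestrians appear in both camera views and 22 in only one, an evaluation script has to separate cross-view identities (usable as probe/gallery pairs) from single-view ones. The sketch below illustrates that split over a hypothetical annotation table; the field names and camera labels are assumptions, not the dataset's actual format:

```python
# Hypothetical annotations: identity id -> set of camera views it appears in.
annotations = {
    1: {"cam_a", "cam_b"},   # seen in both views
    2: {"cam_a"},            # seen in one view only
    # ... one entry per pedestrian (72 in CAVIAR4REID)
}

cross_view  = [pid for pid, views in annotations.items() if len(views) == 2]
single_view = [pid for pid, views in annotations.items() if len(views) == 1]

# In CAVIAR4REID this would give 50 cross-view and 22 single-view identities.
print(len(cross_view), len(single_view))
```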